In a previous life, I worked for a location-based entertainment company, part of a huge team of people developing a location for Las Vegas, Nevada. It was COVID, a rough time for location-based anything, and things were delayed more than usual. Coworkers paid a lot of attention to another upcoming Las Vegas attraction, one with a vastly larger budget but still struggling to make schedule: the MSG (Madison Square Garden) Sphere. I will set aside jokes about it being a square sphere, but they were perhaps one of the reasons that it underwent a pre-launch rebranding to merely the Sphere. If you are not familiar, the Sphere is a theater and venue in Las Vegas. While it's known mostly for the video display on the outside, that's just marketing for the inside: a digital dome theater, with seating at a roughly 45-degree stadium layout facing a near hemisphere of video displays. It is a "near" hemisphere because the lower section is truncated to allow a flat floor, which serves as a stage for...


More from computers are bad

2025-05-27 the first smart homes

Sometimes I think I should pivot my career to home automation critic, because I have many opinions on the state of the home automation industry---and they're pretty much all critical. Virtually every time I bring up home automation, someone says something about the superiority of the light switch. Controlling lights is one of the most obvious applications of home automation, and there is a roughly century long history of developments in light control---yet, paradoxically, it is an area where consumer home automation continues to struggle. An analysis of how and why billion-dollar tech companies fail to master the simple toggling of lights in response to human input will have to wait for a future article, because I will have a hard time writing one without descending into incoherent sobbing about the principles of scene control and the interests of capital. Instead, I want to just dip a toe into the troubled waters of "smart lighting" by looking at one of its earliest precedents: low-voltage lighting control. A source I generally trust, the venerable "old internet" website Inspectapedia, says that low-voltage lighting control systems date back to about 1946. The earliest conclusive evidence I can find of these systems is a newspaper ad from 1948, but let's be honest, it's a holiday and I'm only making a half effort on the research. In any case, the post-war timing is not a coincidence. The late 1940s were a period of both rapid (sub)urban expansion and high copper prices, and the original impetus for relay systems seems to have been the confluence of these two. But let's step back and explain what a relay or low-voltage lighting control system is. First, I am not referring to "low voltage lighting" meaning lights that run on 12 or 24 volts DC or AC, as was common in landscape lighting and is increasingly common today for integrated LED lighting. Low-voltage lighting control systems are used for conventional 120VAC lights. In the most traditional construction, e.g. in the 1940s, lights would be served by a "hot" wire that passed through a wall box containing a switch. In many cases the neutral (likely shared with other fixtures) went directly from the light back to the panel, bypassing the switch... running both the hot and neutral through the switch box did not become conventional until fairly recently, to the chagrin of anyone installing switches that require a neutral for their own power, like timers or "smart" switches. The problem with this is that it lengthens the wiring runs. If you have a ceiling fixture with two different switches in a three-way arrangement, say in a hallway in a larger house, you could be adding nearly 100' in additional wire to get the hot to the switches and the runner between them. The cost of that wiring, in the mid-century, was quite substantial. Considering how difficult it is to find an employee to unlock the Romex cage at Lowes these days, I'm not sure that's changed that much. There are different ways of dealing with this. In the UK, the "ring main" served in part to reduce the gauge (and thus cost) of outlet wiring, but we never picked up that particular eccentricity in the US (for good reason). In commercial buildings, it's not unusual for lighting to run on 240v for similar reasons, but 240v is discouraged in US residential wiring. Besides, the mid-century was an age of optimism and ambition in electrical technology, the days of Total Electric Living. Perhaps the technology of the relay, refined by so many innovations of WWII, could offer a solution. 
Switch wiring also had to run through wall cavities, an irritating requirement in single-floor houses where much of the lighting wiring could be contained to the attic. The wiring of four-way and other multi-switch arrangements could become complex and require a lot more wall runs, discouraging builders from providing switches in the most convenient places. What if relays also made multiple switches significantly easier to install and relocate? You probably get the idea. In a typical low-voltage lighting control system, a transformer provides a low voltage like 24VAC, much the same as used by doorbells. The light switches simply toggle the 24VAC control power to the coils of relays. Some (generally older) systems powered the relay continuously, but most used latching relays. In this case, all light switches are momentary, with an "on" side and an "off" side. This could be a paddle that you push up or down (much like a conventional light switch), a bar that you push the left or right sides of, or a pair of push buttons. In most installations, all of the relays were installed together in a single enclosure, usually in the attic where the high-voltage wiring to the actual lights would be fairly short. The 24VAC cabling to the switches was much smaller gauge, and depending on the jurisdiction might not require any sort of license to install. Many systems had enclosures with separate high voltage and low voltage components, or mounted the relays on the outside of an enclosure such that the high voltage wiring was inside and low voltage outside. Both arrangements helped to meet code requirements for isolating high and low voltage systems and provided a margin of safety in the low voltage wiring. That provided additional cost savings as well; low voltage wiring was usually installed without any kind of conduit or sheathed cable. By 1950, relay lighting controls were making common appearances in real estate listings. A feature piece on the "Melody House," a builder's model home, in the Tacoma News Tribune reads thus: "Newest features in the house are the low voltage touch plate and relay system lighting controls, with wide plates instead of snap buttons---operated like the stops of a pipe organ, with the merest flick of a finger." The comparison to a pipe organ is interesting, first in its assumption that many readers were familiar with typical organ stops. Pipe organs were, increasingly, one of the technological marvels of the era: while the concept of the pipe organ is very old, this same era saw electrical control systems (replete with relays!) significantly reduce the cost and complexity of organ consoles. What's more, the tonewheel electric organ had become well-developed and started to find its way into homes. The comparison is also interesting because of its deficiencies. The Touch-Plate system described used wide bars, which you pressed the left or right side of---you could call them momentary SPDT rocker switches if you wanted. There were organs with similar rocker stops but I do not think they were common in 1950. My experience is that such rocker switch stops usually indicate a fully digital control system, where they make momentary action unobtrusive and avoid state synchronization problems. I am far from an expert on organs, though, which is why I haven't yet written about them. If you have a guess at which type of pipe organ console our journalist was familiar with, do let me know.
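To make the scheme concrete, here is a minimal sketch (in Python, purely illustrative; none of these names come from any real product) of the latching-relay arrangement described above: wall switches are momentary and only pulse a relay coil over the 24VAC control wiring, so any number of switch locations can control the same light without any mains-voltage runs between them.

class LatchingRelay:
    """Models a mechanical latching relay: a pulse on either coil changes
    the contact state, and no continuous coil current is needed."""

    def __init__(self):
        self.closed = False  # contact state; this is what switches the 120VAC light

    def pulse_on(self):
        self.closed = True

    def pulse_off(self):
        self.closed = False


class MomentarySwitch:
    """A low-voltage wall switch with separate "on" and "off" positions.
    Any number of these can be wired in parallel to the same relay."""

    def __init__(self, relay):
        self.relay = relay

    def press_on(self):
        self.relay.pulse_on()   # 24VAC pulse to the ON coil

    def press_off(self):
        self.relay.pulse_off()  # 24VAC pulse to the OFF coil


# A hallway light controlled from three locations, with no three- or
# four-way mains wiring between the switch boxes:
hall_relay = LatchingRelay()
switches = [MomentarySwitch(hall_relay) for _ in range(3)]

switches[0].press_on()
print(hall_relay.closed)   # True: the light is on
switches[2].press_off()
print(hall_relay.closed)   # False: any switch can turn it off again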
Touch-Plate seems to have been one of the first manufacturers of these systems, although I can't say for sure that they invented them. Interestingly, Touch-Plate is still around today, but their badly broken WordPress site ("Welcome to the new touch-plate.com" despite it actually being touchplate.com) suggests they may not do much business. After a few pageloads their WordPress plugin WAF blocked me for "exceed[ing] the maximum number of page not found errors per minute for humans." This might be related to my frustration that none of the product images load. It seems that the Touch-Plate company has mostly pivoted to reselling imported LED lighting (touchplateled.com), so I suppose the controls business is withering on the vine. The 1950s saw a proliferation of relay lighting control brands, with GE introducing a particularly popular system with several generations of fixtures. Kyle Switch Plates, who sell replacement switch plates (what else?), list options for Remcon, Sierra, Bryant, Pyramid, Douglas, and Enercon systems in addition to the two brands we have met so far. As someone who pays a little too much attention to light switches, I have personally seen four of these brands, three of them still in use and one apparently abandoned in place. Now, you might be thinking that simply economizing wiring by relocating the switches does not constitute "home automation," but there are other features to consider. For one, low-voltage light control systems made it feasible to install a lot more switches. Houses originally built with them often go a little wild with the n-way switching, every room providing lightswitches at every door. But there is also the possibility of relay logic. From the same article: The necessary switches are found in every room, but in the master bedroom there is a master control panel above the bed, from where the house and yard may be flooded with instant light in case of night emergency. Such "master control panels" were a big attraction for relay lighting, and the finest homes of the 1950s and 1960s often displayed either a grid of buttons near the head of the master bed, or even better, a GE "Master Selector" with a curious system of rotary switches. On later systems, timers often served as auxiliary switches, so you could schedule exterior lights. With a creative installer, "scenes" were even possible by wiring switches to arbitrary sets of relays (this required DC or half-wave rectified control power and diodes to isolate the switches from each other). Many of these relay control systems are still in use today. While they are quite outdated in a certain sense, the design is robust and the simple components mean that it's usually not difficult to find replacement parts when something does fail. The most popular system is the one offered by GE, using their RR series relays (RR3, RR4, etc., to the modern RR9). That said, GE suggests a modernization path to their LightSweep system, which is really a 0-10v analog dimming controller that has the add-on ability to operate relays. The failure modes are mostly what you would expect: low voltage wiring can chafe and short, or the switches can become stuck. This tends to cause the lights to stick on or off, and the continuous current through the relay coil often burns it out. The fix requires finding the stuck switch or short and correcting it, and then replacing the relay. One upside of these systems that persists today is density: the low voltage switches are small, so with most systems you can fit 3 per gang. 
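The "scene" trick mentioned above is easy to express the same way: a single momentary switch wired, through isolating diodes, to the coils of several relays at once. Continuing the illustrative sketch from above, a master panel button might look like this:

class SceneButton:
    """One momentary button wired (through isolating diodes) to the ON
    or OFF coils of several relays at once: a mid-century "scene"."""

    def __init__(self, on_relays=(), off_relays=()):
        self.on_relays = on_relays
        self.off_relays = off_relays

    def press(self):
        for relay in self.on_relays:
            relay.pulse_on()
        for relay in self.off_relays:
            relay.pulse_off()


# The bedside "night emergency" button from the Melody House write-up:
# flood the house and yard with light in one press.
yard, porch, hall = LatchingRelay(), LatchingRelay(), LatchingRelay()
emergency = SceneButton(on_relays=(yard, porch, hall))
emergency.press()
print(all(r.closed for r in (yard, porch, hall)))   # True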
Another is that they still make N-way switching easier. There is arguably a safety benefit, considering the reduction in mains-voltage wire runs. Yet we rarely see such a thing installed in homes newer than around the '80s. I don't know that I can give a definitive explanation of the decline of relay lighting control, but reduced prices for copper wiring were probably a main factor. The relays added a failure point, which might lead to a perception of unreliability, and electricians' declining familiarity with these systems means that installing one could be expensive and frustrating today. What really interests me about relay systems is that they weren't really replaced... the idea just went away. It's not like modern homes are providing a master control panel in the bedroom using some alternative technology. I mean, some do, those with prices in the eight digits, but you'll hardly ever see it. That gets us to the tension between residential lighting and architectural lighting control systems. In higher-end commercial buildings, and in environments like conference rooms and lecture halls, there's a well-established industry building digital lighting control systems. Today, DALI is a common standard for the actual lighting control, but if you look at a range of existing buildings you will find everything from completely proprietary digital distributed dimming to 0-10v analog dimming to central dimmer racks (similar to traditional theatrical lighting). Relay lighting systems were, in a way, a nascent version of residential architectural lighting control. And the architectural lighting control industry continues to evolve. If there is a modern equivalent to relay lighting, it's something like Lutron QSX. That's a proprietary digital lighting (and shade) control system, marketed for both residential and commercial use. QSX offers a wide range of attractive wall controls, tight integration to Lutron's HomeSense home automation platform, and a price tag that'll make your eyes water. Lutron has produced many generations of these systems, and you could make an argument that they trace their heritage back to the relay systems of the 1940s. But they're just priced way beyond the middle-class home. And, well, I suppose that requires an argument based on economics. Prices have gone up. Despite tract construction being a much older idea than people often realize, it seems clear that today's new construction homes have been "value engineered" to significantly lower feature and quality levels than those of the mid-century---but they're a lot bigger. There is a sort of maxim that today's home buyers don't care about anything but square footage, and if you've seen what Pulte or D. R. Horton are putting up... well, I never knew that 3,000 sq ft could come so cheap, and look it too. Modern new-construction homes just don't come with the gizmos that older ones did, especially in the '60s and '70s. Looking at the sales brochure for a new development in my own Albuquerque ("Estates at La Cuentista"), besides 21st century suburbanization (Gated Community! "Easy Access to Paseo del Norte" as if that's a good thing!) most of the advertised features are "big." I'm serious!
If you look at the "More Innovation Built In" section, the "innovations" are a home office (more square footage), storage (more square footage), indoor and outdoor gathering spaces (to be fair, only the indoor ones are square footage), "dedicated learning areas" for kids (more square footage), and a "basement or bigger garage" for a home gym (more square footage). The only thing in the entire innovation section that I would call a "technical" feature is water filtration. You can scroll down for more details, and you get to things like "space for a movie room" and a finished basement described eight different ways. Things were different during the peak of relay lighting in the '60s. A house might only be 1,600 sq ft, but the builder would deck it out with an intercom (including multi-room audio of a primitive sort), burglar alarm, and yes, relay lighting. All of these technologies were a lot newer and people were more excited about them; I bring up Total Electric Living a lot because of an aesthetic obsession but it was a large-scale advertising and partnership campaign by the electrical industry (particularly Westinghouse) that gave builders additional cross-promotion if they included all of these bells and whistles. Remember, that was when people were watching those old videos about the "kitchen of the future." What would a 2025 "Kitchen of the Future" promotional film emphasize? An island bigger than my living room and a nook for every meal, I assume. Features like intercoms and even burglar alarms have become far less common in new construction, and even if they were present I don't think most buyers would use them. But that might seem a little odd, right, given the push towards home automation? Well, built-in home automation options have existed for longer than any of today's consumer solutions, but "built in" is a liability for a technology product. There are practical reasons, in that built-in equipment is harder to replace, but there's also a lamer commercial reason. Consumer technology companies want to sell their products like consumer technology, so they've recontextualized lighting control as "IoT" and "smart" and "AI" rather than something an electrician would hook up. While I was looking into relay lighting control systems, I ran into an interesting example. The Lutron Lu Master Lumi 5. What a name! Lutron loves naming things like this. The Lumi 5 is a 1980s-era product with essentially the same features as a relay system, but architected in a much stranger way. It is, essentially, five three-way switches in a box with remote controls. That means that each of the actual light switches in the house (which could also be dimmers) needs mains-voltage wiring, including the runner, back to the Lumi 5 "interface." Pressing a button on one of the Lutron wall panels toggles the state of the relay in the "interface" cabinet, toggling the light. But, since it's all wired as a three-way switch, toggling the physical switch at the light does the same thing. As is typical when combining n-way switches and dimming, the Lumi 5 has no control over dimmers. You can only dim a light up or down at the actual local control; the Lumi 5 can just toggle the dimmer on and off using the 3-way runner. The architecture also means that you have two fundamentally different types of wall panels in your house: local switches or dimmers wired to each light, and the Lu Master panels with their five buttons for the five circuits, along with "all on" and "all off."
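The wiring trick is easier to see as logic. In a three-way circuit the light is energized whenever the two switch positions differ (or match, depending on how the travelers are landed), so the light state is effectively the XOR of the local switch and the Lumi 5's relay: toggling either one toggles the light, which is exactly the behavior described above. A small illustrative sketch, with invented names:

class ThreeWayCircuit:
    """One Lumi-5-style circuit: a local wall switch and a remote relay
    wired as the two ends of a three-way (SPDT) switch pair."""

    def __init__(self):
        self.local_switch = False   # position of the switch at the light
        self.remote_relay = False   # state of the relay in the "interface"

    @property
    def light_on(self):
        # With three-way wiring the lamp is energized whenever the two
        # switch positions differ, i.e. XOR.
        return self.local_switch != self.remote_relay

    def flip_local(self):
        self.local_switch = not self.local_switch

    def press_panel_button(self):
        # A Lu Master panel button press toggles the relay in the cabinet.
        self.remote_relay = not self.remote_relay


circuit = ThreeWayCircuit()
circuit.press_panel_button()
print(circuit.light_on)   # True: turned on from the panel
circuit.flip_local()
print(circuit.light_on)   # False: turned off again at the light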
The Lumi 5 "interface" uses simple relay logic to implement a few more features. Five mains-voltage-level inputs can be wired to time clocks, so that you can schedule any combination(s) of the circuits to turn on and off. The manual recommends models including one with an astronomical clock for sunrise/sunset. An additional input causes all five circuits to turn on; it's suggested for connection to an auxiliary relay on a burglar alarm to turn all of the lights on should the alarm be triggered. The whole thing is strange and fascinating. It is basically a relay lighting control system, like so many before it, but using a distinctly different wiring convention. I think the main reason for the odd wiring was to accommodate dimmers, an increasingly popular option in the 1980s that relay systems could never really contend with. It doesn't have the cost advantages of relay systems at all; it will definitely be more expensive! But it adds some features over the fancy Lutron switches and dimmers you were going to install anyway. The Lu Master is the transitional stage between relay lighting systems and later architectural lighting controls, and it also straddles the end of relay light control in homes. It gives an idea of where relay light control in homes would have evolved, had the whole technology not been doomed to the niche zone of conference centers and universities. If you think about it, the Lu Master fills the most fundamental roles of home automation in lighting: control over multiple lights in a convenient place, scheduling and triggers, and an emergency function. It only lacks scenes, which I think we can excuse considering that the simple technology it uses does not allow it to adjust dimmers. And all of that with no Node-RED in sight! Maybe that conveys what most frustrates me about the "home automation" industry: it is constantly reinventing the wheel, an oligopoly of tech companies trying to drag people's homes into their "ecosystem." They do so by leveraging the buzzword of the moment (IoT, then voice assistants, then I guess now AI?) to solve a basic set of problems that were pretty well solved at least as early as 1948. That's not to deny that modern home automation platforms have features that old ones don't. They are capable of incredibly sophisticated things! But realistically, most of their users want only very basic functionality: control in convenient places, basic automation, scenes. It wouldn't sting so much if all these whiz-bang general-purpose computers were good at those tasks, but they aren't. For the very most basic tasks, things like turning on and off a group of lights, major tech ecosystems like HomeKit provide a user experience that is significantly worse than the model home of 1950. You could install a Lutron system, and it would solve those fundamental tasks much better... for a much higher price. But it's not like Lutron uses all that money to be an absolute technical powerhouse, a center of innovation at the cutting edge. No, even the latest Lutron products are really very simple, technically. The technical leaders here, Google, Apple, are the companies that can't figure out how to make a damn light switch. The problem with modern home automation platforms is that they are too ambitious. They are trying to apply enormously complex systems to very simple tasks, and thus contaminating the simplest of electrical systems with all the convenience and ease of a Smart TV.
Sometimes that's what it feels like this whole industry is doing: adding complexity while the core decays. From automatic programming to AI coding agents, video terminals to Electron, the scope of the possible expands while the fundamentals become more and more irritating. But back to the real point, I hope you learned about some cool light switches. Check out the Kyle Switch Plates reference and you'll start seeing these in buildings and homes, at least if you live in an area that built up during the era when they were common (1950s to the 1970s).

2025-05-11 air traffic control

Air traffic control has been in the news lately, on account of my country's declining ability to do it. Well, that's a long-term trend, resulting from decades of under-investment, severe capture by our increasingly incompetent defense-industrial complex, no small degree of management incompetence in the FAA, and long-lasting effects of Reagan crushing the PATCO strike. But that's just my opinion, you know, maybe airplanes got too woke. In any case, it's an interesting time to consider how weird parts of air traffic control are. The technical, administrative, and social aspects of ATC all seem two notches more complicated than you would expect. ATC is heavily influenced by its peculiar and often accidental development, a product of necessity that perpetually trails behind the need, and a beneficiary of hand-me-down military practices and technology.

Aviation Radio

In the early days of aviation, there was little need for ATC---there just weren't many planes, and technology didn't allow ground-based controllers to do much of value. There was some use of flags and signal lights to clear aircraft to land, but for the most part ATC had to wait for the development of aviation radio. The impetus for that work came mostly from the First World War. Here we have to note that the history of aviation is very closely intertwined with the history of warfare. Aviation technology has always rapidly advanced during major conflicts, and as we will see, ATC is no exception. By 1913, the US Army Signal Corps was experimenting with the use of radio to communicate with aircraft. This was pretty early in radio technology, and the aircraft radios were huge and awkward to operate, but it was also early in aviation and "huge and awkward to operate" could be similarly applied to the aircraft of the day. Even so, radio had obvious potential in aviation. The first military application for aircraft was reconnaissance. Pilots could fly past the front to find artillery positions and otherwise provide useful information, and then return with maps. Well, even better than returning with a map was providing the information in real-time, and by the end of the war medium-frequency AM radios were well developed for aircraft. Radios in aircraft led naturally to another wartime innovation: ground control. Military personnel on the ground used radio to coordinate the schedules and routes of reconnaissance planes, and later to inform on the positions of fighters and other enemy assets. Without any real way to know where the planes were, this was all pretty primitive, but it set the basic pattern that people on the ground could keep track of aircraft and provide useful information. Post-war, civil aviation rapidly advanced. The early 1920s saw numerous commercial airlines adopting radio, mostly for business purposes like schedule coordination. Once you were in contact with someone on the ground, though, it was only logical to ask about weather and conditions. Many of our modern practices like weather briefings, flight plans, and route clearances originated as more or less formal practices within individual airlines.

Air Mail

The government was not left out of the action. The Post Office operated what may have been the largest commercial aviation operation in the world during the early 1920s, in the form of Air Mail. The Post Office itself did not have any aircraft; all of the flying was contracted out---initially to the Army Air Service, and later to a long list of regional airlines.
Air Mail was considered a high priority by the Post Office and proved very popular with the public. When the transcontinental route began proper operation in 1920, it became possible to get a letter from New York City to San Francisco in just 33 hours by transferring it between airplanes in a nearly non-stop relay race. The Post Office's largesse in contracting the service to private operators provided not only the funding but the very motivation for much of our modern aviation industry. Air travel was not very popular at the time, being loud and uncomfortable, but the mail didn't complain. The many contract mail carriers of the 1920s grew and consolidated into what are now some of the United States' largest companies. For around a decade, the Post Office almost singlehandedly bankrolled civil aviation, and passengers were a side hustle [1]. Air mail's ambition was not only of economic benefit. Air mail routes were often longer and more challenging than commercial passenger routes. Transcontinental service required regular flights through sparsely populated parts of the interior, challenging the navigation technology of the time and making rescue of downed pilots a major concern. Notably, air mail operators did far more nighttime flying than any other commercial aviation in the 1920s. The Post Office became the government's de facto technical leader in civil aviation. Besides the network of beacons and markers built to guide air mail between cities, the Post Office built 17 Air Mail Radio Stations along the transcontinental route. The Air Mail Radio Stations were the company radio system for the entire air mail enterprise, and the closest thing to a nationwide, public air traffic control service to then exist. They did not, however, provide what we would now call control. Their role was mainly to provide pilots with information (including, critically, weather reports) and to keep loose tabs on air mail flights so that a disappearance would be noticed in time to send search and rescue. In 1926, the Air Commerce Act created the Aeronautics Branch of the Department of Commerce. The Aeronautics Branch assumed a number of responsibilities, but one of them was the maintenance of the Air Mail routes. Similarly, the Air Mail Radio Stations became Aeronautics Branch facilities, and took on the new name of Flight Service Stations. No longer just for the contract mail carriers, the Flight Service Stations made up a nationwide network of government-provided services to aviators. They were the first edifices in what we now call the National Airspace System (NAS): a complex combination of physical facilities, technologies, and operating practices that enable safe aviation. In 1935, the first en-route air traffic control center opened, a facility in Newark owned by a group of airlines. The Aeronautics Branch, since renamed the Bureau of Air Commerce, supported the airlines in developing this new concept of en-route control that used radio communications and paperwork to track which aircraft were in which airways. The rising number of commercial aircraft made in-air collisions a bigger problem, so the Newark control center was quickly followed by more facilities built on the same pattern. In 1936, the Bureau of Air Commerce took ownership of these centers, and ATC became a government function alongside the advisory and safety services provided by the flight service stations.
En route center controllers worked off of position reports from pilots via radio, but needed a way to visualize and track aircraft positions and their intended flight paths. Several techniques helped: first, airlines shared their flight planning paperwork with the control centers, establishing "flight plans" that corresponded to each aircraft in the sky. Controllers adopted a work aid called a "flight strip," a small piece of paper with the key information about an aircraft's identity and flight plan that could easily be handed between stations. By arranging the flight strips on display boards full of slots, controllers could visualize the ordering of aircraft in terms of altitude and airway. Second, each center was equipped with a large plotting table map where controllers pushed markers around to correspond to the position reports from aircraft. A small flag on each marker gave the flight number, so it could easily be correlated to a flight strip on one of the boards mounted around the plotting table. This basic concept of air traffic control, of a flight strip and a position marker, is still in use today.

Radar

The Second World War changed aviation more than any other event of history. Among the many advancements were two British inventions of particular significance: first, the jet engine, which would make modern passenger airliners practical. Second, the radar, and more specifically the magnetron. This was a development of such significance that the British government treated it as a secret akin to nuclear weapons; indeed, the UK effectively traded radar technology to the US in exchange for participation in US nuclear weapons research. Radar created radical new possibilities for air defense, and complemented previous air defense development in Britain. During WWI, the organization tasked with defending London from aerial attack had developed a method called "ground-controlled interception" or GCI. Under GCI, ground-based observers identify possible targets and then direct attack aircraft towards them via radio. The advent of radar made GCI tremendously more powerful, allowing a relatively small number of radar-assisted air defense centers to monitor for inbound attack and then direct defenders with real-time vectors. In the first implementation, radar stations reported contacts via telephone to "filter centers" that correlated tracks from separate radars to create a unified view of the airspace---drawn in grease pencil on a preprinted map. Filter center staff took radar and visual reports and updated the map by moving the marks. This consolidated information was then provided to air defense bases, once again by telephone. Later technical developments in the UK made the process more automated. The invention of the "plan position indicator" or PPI, the type of radar scope we are all familiar with today, made the radar far easier to operate and interpret. Radar sets that automatically swept over 360 degrees allowed each radar station to see all activity in its area, rather than just aircraft passing through a defensive line. These new capabilities eliminated the need for much of the manual work: radar stations could see attacking aircraft and defending aircraft on one PPI, and communicated directly with defenders by radio. It became routine for a radar operator to give a pilot navigation vectors by radio, based on real-time observation of the pilot's position and heading. A controller took strategic command of the airspace, effectively steering the aircraft from a top-down view.
The ease and efficiency of this workflow was a significant factor in the end of the Battle of Britain, and its remarkable efficacy was noticed in the US as well. At the same time, changes were afoot in the US. WWII was tremendously disruptive to civil aviation; while aviation technology rapidly advanced due to wartime needs, those same pressing demands led to a slowdown in nonmilitary activity. A heavy volume of military logistics flights and flight training, as well as growing concerns about defending the US from an invasion, meant that ATC was still a priority. A reorganization of the Bureau of Air Commerce replaced it with the Civil Aeronautics Authority, or CAA. The CAA's role greatly expanded as it assumed responsibility for airport control towers and commissioned new en route centers. As WWII came to a close, CAA en route control centers began to adopt GCI techniques. By 1955, the name Air Route Traffic Control Center (ARTCC) had been adopted for en route centers and the first air surveillance radars were installed. In a radar-equipped ARTCC, the map where controllers pushed markers around was replaced with a large tabletop PPI built to a Navy design. The controllers still pushed markers around to track the identities of aircraft, but they moved them based on their corresponding radar "blips" instead of radio position reports.

Air Defense

After WWII, post-war prosperity and wartime technology like the jet engine led to huge growth in commercial aviation. During the 1950s, radar was adopted by more and more ATC facilities (both "terminal" at airports and "en route" at ARTCCs), but there were few major changes in ATC procedure. With more and more planes in the air, tracking flight plans and their corresponding positions became labor-intensive and error-prone. A particular problem was the increasing range and speed of aircraft, and corresponding longer passenger flights, which meant that many aircraft passed from the territory of one ARTCC into another. This required that controllers "hand off" the aircraft, informing the "next" ARTCC of the flight plan and position at which the aircraft would enter their airspace. In 1956, 128 people died in a mid-air collision of two commercial airliners over the Grand Canyon. In 1958, 49 people died when a military fighter struck a commercial airliner over Nevada. These were not the only such incidents in the mid-1950s, and public trust in aviation started to decline. Something had to be done. First, in 1958 the CAA gave way to the Federal Aviation Administration. This was more than just a name change: the FAA's authority was greatly increased compared to the CAA, most notably by granting it authority over military aviation. This is a difficult topic to explain succinctly, so I will only give broad strokes. Prior to 1958, military aviation was completely distinct from civil aviation, with no coordination and often no communication at all between the two. This was, of course, a factor in the 1958 collision. Further, the 1956 collision, while it did not involve the military, did result in part from communications issues between distinct CAA facilities and the airline's own control facilities. After 1958, ATC was completely unified into one organization, the FAA, which assumed the work of the military controllers of the time and some of the role of the airlines.
The military continues to have its own air controllers to this day, and military aircraft continue to include privileges such as (practical but not legal) exemption from transponder requirements, but military flights over the US are still beholden to the same ATC as civil flights. Some exceptions apply, void where prohibited, etc. The FAA's suddenly increased scope only made the practical challenges of ATC more difficult, and commercial aviation numbers continued to rise. As soon as the FAA was formed, it was understood that there needed to be major investments in improving the National Airspace System. While the first couple of years were dominated by the transition, the FAA's second director (Najeeb Halaby) prepared two lengthy reports examining the situation and recommending improvements. One of these, the Beacon report (also called Project Beacon), specifically addressed ATC. The Beacon report's recommendations included massive expansion of radar-based control (called "positive control" because of the controller's access to real-time feedback on aircraft movements) and new control procedures for airways and airports. Even better, for our purposes, it recommended the adoption of general-purpose computers and software to automate ATC functions. Meanwhile, the Cold War was heating up. US air defense, a minor concern in the few short years after WWII, became a higher priority than ever before. The Soviet Union had long-range aircraft capable of reaching the United States, and nuclear weapons meant that only a few such aircraft had to make it to cause massive destruction. The vast size of the United States (and, considering the new unified air defense command between the United States and Canada, all of North America) made this a formidable challenge. During the 1950s, the newly minted Air Force worked closely with MIT's Lincoln Laboratory (an important center of radar research) and IBM to design a computerized, integrated, networked system for GCI. When the Air Force committed to purchasing the system, it was christened the Semi-Automatic Ground Environment, or SAGE. SAGE is a critical juncture in the history of the computer and computer communications, the first system to demonstrate many parts of modern computer technology and, moreover, perhaps the first large-scale computer system of any kind. SAGE is an expansive topic that I will not take on here; I'm sure it will be the focus of a future article but it's a pretty well-known and well-covered topic. I have not so far felt like I had much new to contribute, despite it being the first item on my "list of topics" for the last five years. But one of the things I want to tell you about SAGE, one that is perhaps not so well known, is that SAGE was not used for ATC. SAGE was a purely military system. It was commissioned by the Air Force, and its numerous operating facilities (called "direction centers") were located on Air Force bases along with the interceptor forces they would direct. However, there was obvious overlap between the functionality of SAGE and the needs of ATC. SAGE direction centers continuously received tracks from remote data sites using modems over leased telephone lines, and automatically correlated multiple radar tracks to a single aircraft. Once an operator entered information about an aircraft, SAGE stored that information for retrieval by other radar operators.
When an aircraft with associated data passed from the territory of one direction center to another, the aircraft's position and related information were automatically transmitted to the next direction center by modem. One of the key demands of air defense is the identification of aircraft---any unknown track might be routine commercial activity, or it could be an inbound attack. The air defense command received flight plan data on commercial flights (and more broadly all flights entering North America) from the FAA and entered them into SAGE, allowing radar operators to retrieve "flight strip" data on any aircraft on their scope. Recognizing this interconnection with ATC, as soon as SAGE direction centers were being installed the Air Force started work on an upgrade called SAGE Air Traffic Integration, or SATIN. SATIN would extend SAGE to serve the ATC use case as well, providing SAGE consoles directly in ARTCCs and enhancing SAGE to perform non-military safety functions like conflict warning and forward projection of flight plans for scheduling. Flight strips would be replaced by teletype output, and in general made less necessary by the computer's ability to filter the radar scope. Experimental trial installations were made, and the FAA participated readily in the research efforts. Enhancement of SAGE to meet ATC requirements seemed likely to meet the Beacon report's recommendations and radically improve ARTCC operations, sooner and cheaper than development of an FAA-specific system. As it happened, well, it didn't happen. SATIN became interconnected with another planned SAGE upgrade to the Super Combat Centers (SCC), deep underground combat command centers with greatly enhanced SAGE computer equipment. SATIN and SCC planners were so confident that the last three Air Defense Sectors scheduled for SAGE installation, including my own Albuquerque, were delayed under the assumption that the improved SATIN/SCC equipment should be installed instead of the soon-obsolete original system. SCC cost estimates ballooned, and the program's ambitions were reduced month by month until it was canceled entirely in 1960. Albuquerque never got a SAGE installation, and the Albuquerque air defense sector was eliminated by reorganization later in 1960 anyway.

Flight Service Stations

Remember those Flight Service Stations, the ones that were originally built by the Post Office? One of the oddities of ATC is that they never went away. FSS were transferred to the CAB, to the CAA, and then to the FAA. During the 1930s and 1940s many more were built, expanding coverage across much of the country. Throughout the development of ATC, the FSS remained responsible for non-control functions like weather briefing and flight plan management. Because aircraft operating under instrument flight rules must closely comply with ATC, the involvement of FSS in IFR flights is very limited, and FSS mostly serve VFR traffic. As ATC became common, the FSS gained a new and somewhat odd role: playing go-between for ATC. FSS were more numerous and often located in sparser areas between cities (while ATC facilities tended to be in cities), so especially in the mid-century, pilots were more likely to be able to reach an FSS than ATC. It was, for a time, routine for FSS to relay instructions between pilots and controllers. This is still done today, although improved communications have made the need much less common.
As weather dissemination improved (another topic for a future post), FSS gained access to extensive weather conditions and forecasting information from the Weather Service. This connectivity is bidirectional; during the midcentury FSS not only received weather forecasts by teletype but transmitted pilot reports of weather conditions back to the Weather Service. Today these communications have, of course, been computerized, although the legacy teletype format doggedly persists. There has always been an odd schism between the FSS and ATC: they are operated by different departments, out of different facilities, with different functions and operating practices. In 2005, the FAA cut costs by privatizing the FSS function entirely. Flight service is now operated by Leidos, one of the largest government contractors. All FSS operations have been centralized to one facility that communicates via remote radio sites. While flight service is still available, increasing automation has made the stations far less important, and the general perception is that flight service is in its last years. Last I looked, Leidos was not hiring for flight service and the expectation was that they would never hire again, retiring the service along with its staff. Flight service does maintain one of my favorite internet phenomena, the phone-number domain name: 1800wxbrief.com. One of the odd manifestations of the FSS/ATC schism and the FAA's very partial privatization is that Leidos maintains an online aviation weather portal that is separate from, and competes with, the Weather Service's aviationweather.gov. Since Flight Service traditionally has the responsibility for weather briefings, it is honestly unclear to what extent Leidos vs. the National Weather Service should be investing in aviation weather information services. For its part, the FAA seems to consider aviationweather.gov the official source, while it pays for 1800wxbrief.com. There's also weathercams.faa.gov, which duplicates a very large portion (maybe all?) of the weather information on Leidos's portal and some of the NWS's. It's just one of those things. Or three of those things, rather. Speaking of duplication due to poor planning...

The National Airspace System

Left in the lurch by the Air Force, the FAA launched its own program for ATC automation. While the Air Force was deploying SAGE, the FAA had mostly been waiting, and various ARTCCs had adopted a hodgepodge of methods ranging from one-off computer systems to completely paper-based tracking. By 1960 radar was ubiquitous, but different radar systems were used at different facilities, and correlation between radar contacts and flight plans was completely manual. The FAA needed something better, and with growing congressional support for ATC modernization, they had the money to fund what they called National Airspace System En Route Stage A. Further bolstering historical confusion between SAGE and ATC, the FAA decided on a practical, if ironic, solution: buy their own SAGE. In an upcoming article, we'll learn about the FAA's first fully integrated computerized air traffic control system. While the failed detour through SATIN delayed the development of this system, the nearly decade-long delay between the design of SAGE and the FAA's contract allowed significant technical improvements.
This "New SAGE," while directly based on SAGE at a functional level, used later off-the-shelf computer equipment including the IBM System/360, giving it far more resemblance to our modern world of computing than SAGE with its enormous, bespoke AN/FSQ-7. And we're still dealing with the consequences today!

[1] It also laid the groundwork for the consolidation of the industry, with a 1930 decision that took air mail contracts away from most of the smaller companies and awarded them instead to the precursors of United, TWA, and American Airlines.

2025-05-04 iBeacons

You know sometimes a technology just sort of... comes and goes? Without leaving much of an impression? And then gets lodged in your brain for the next decade? Let's talk about one of those: the iBeacon. I think the reason that iBeacons loom so large in my memory is that the technology was announced at WWDC in 2013. Picture yourself in 2013: Steve Jobs had only died a couple of years ago, Apple was still widely viewed as a visionary leader in consumer technology, and WWDC was still happening. Back then, pretty much anything announced at an Apple event was a Big Deal that got Big Coverage. Even, it turns out, if it was a minor development for a niche application. That's the iBeacon, a specific solution to a specific problem. It's not really that interesting, but the valence of its Apple origin makes it seem cool?

iBeacon Technology

Let's start out with what iBeacon is, as it's so simple as to be underwhelming. Way back in the '00s, a group of vendors developed a sort of "Diet Bluetooth": a wireless protocol that was directly based on Bluetooth but simplified and optimized for low-power, low-data-rate devices. This went through an unfortunate series of names, including the delightful Wibree, but eventually settled on Bluetooth Low Energy (BLE). BLE is not just lower-power, but also easier to implement, so it shows up in all kinds of smart devices today. Back in 2011, it was quite new, and Apple was one of the first vendors to adopt it. BLE is far less connection-oriented than regular Bluetooth; you may have noticed that BLE devices are often used entirely without conventional "pairing." A lot of typical BLE profiles involve just broadcasting some data into the void for any device that cares (and is in short range) to receive, which is pretty similar to ANT+ and unsurprisingly appears in ANT+-like applications of fitness monitors and other sensors. Of course, despite the simpler association model, BLE applications need some way to find devices, so BLE provides an advertising mechanism in which devices transmit their identifying info at regular intervals. And that's all iBeacon really is: a standard for very simple BLE devices that do nothing but transmit advertisements with a unique ID as the payload. Add a type field on the advertising packet to specify that the device is trying to be an iBeacon and you're done. You interact with an iBeacon by receiving its advertisements, so you know that you are near it. Any BLE device with advertisements enabled could be used this way, but iBeacons are built only for this purpose. The applications for iBeacon are pretty much defined by its implementation in iOS; there's not much of a standard even if only for the reason that there's not much to put in a standard. It's all obvious. iOS provides two principal APIs for working with iBeacons: the region monitoring API allows an app to determine if it is near an iBeacon, including registering the region so that the app will be started when the iBeacon enters range. This is useful for apps that want to do something in response to the user being in a specific location. The ranging API allows an app to get a list of all of the nearby iBeacons and a rough range from the device to the iBeacon. iBeacons can actually operate at substantial ranges---up to hundreds of meters for more powerful beacons with external power---so ranging mode can potentially be used as sort of a lightweight local positioning system to estimate the location of the user within a larger space.
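For the curious, the advertisement payload itself is tiny: Apple manufacturer-specific data carrying a 16-byte UUID, two 16-bit integers (the "major" and "minor" discussed below), and a calibration byte giving the expected signal strength at one meter. Here's a minimal, illustrative Python sketch of the receiving side, assuming you already have the manufacturer data bytes from a BLE scan (the scanning itself is left to whatever BLE library you use); the example values are invented:

import struct
import uuid

def parse_ibeacon(mfg_data: bytes):
    """Parse Apple manufacturer-specific data from a BLE advertisement.
    Layout: company ID 0x004C (little-endian), subtype 0x02, length 0x15,
    16-byte UUID, 16-bit major, 16-bit minor, signed TX power at 1 m."""
    if len(mfg_data) < 25 or mfg_data[0:4] != b"\x4c\x00\x02\x15":
        return None  # not an iBeacon frame
    beacon_uuid = uuid.UUID(bytes=mfg_data[4:20])
    major, minor = struct.unpack(">HH", mfg_data[20:24])
    tx_power = struct.unpack("b", mfg_data[24:25])[0]
    return beacon_uuid, major, minor, tx_power

def rough_distance_m(rssi: int, tx_power: int, path_loss_exp: float = 2.0):
    """Very rough range estimate from received signal strength using the
    usual log-distance model; indoors this is noisy at best."""
    return 10 ** ((tx_power - rssi) / (10 * path_loss_exp))

# Example frame; UUID, major, and minor chosen arbitrarily for illustration.
frame = bytes.fromhex("4c000215"
                      "f7826da64fa24e988024bc5b71e0893e"
                      "0001" "000a" "c5")
print(parse_ibeacon(frame))                              # (UUID, 1, 10, -59)
print(round(rough_distance_m(rssi=-67, tx_power=-59), 1), "m")   # ~2.5 m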
iBeacon IDs are in the format of a UUID, followed by a "major" number and a "minor" number. There are different ways that these get used, especially if you are buying cheap iBeacons and not reconfiguring them, but the general idea is roughly that the UUID identifies the operator, the major a deployment, and the minor a beacon within the deployment. In practice this might be less common than just every beacon having its own UUID due to how they're sourced. It would be interesting to survey iBeacon applications to see which they do.

Promoted Applications

So where do you actually use these? Retail! Apple seems to have designed the iBeacon pretty much exclusively for "proximity marketing" applications in the retail environment. It goes something like this: when you're in a store and open that store's app, the app will know what beacons you are nearby and display relevant content. For example, in a grocery store, the grocer's app might offer e-coupons for cosmetics when you are in the cosmetics section. That's, uhh, kind of the whole thing? The imagined universe of applications around the launch of iBeacon was pretty underwhelming to me, even at the time, and it still seems that way. That's presumably why iBeacon had so little success in consumer-facing applications. You might wonder, who actually used iBeacons? Well, Apple did, obviously. During 2013 and into 2014 iBeacons were installed in all US Apple stores, and prompted the Apple Store app to send notifications about upgrade offers and other in-store deals. Unsurprisingly, this Apple Store implementation was considered the flagship deployment. It generated a fair amount of press, including speculation as to whether or not it would prove the concept for other buyers. Around the same time, Apple penned a deal with Major League Baseball that would see iBeacons installed in MLB stadiums. For the 2014 season, MLB Advanced Media, a joint venture of team owners, had installed iBeacon technology in 20 stadiums: "Baseball fans will be able to utilize iBeacon technology within MLB.com At The Ballpark when the award-winning app's 2014 update is released for Opening Day. Complete details on new features being developed by MLBAM for At The Ballpark, including iBeacon capabilities, will be available in March." What's the point? The iBeacons "enable the At The Ballpark app to play specific videos or offer coupons." This exact story repeats for other retail companies that have picked the technology up at various points, including giants like Target and Walmart. The iBeacons are simply a way to target advertising based on location, with better indoor precision and lower power consumption than GPS. Aiding these applications along, Apple integrated iBeacon support into the iOS location framework and further blurred the lines between iBeacon and other positioning services by introducing location-based-advertising features that operated on geofencing alone. Some creative thinkers did develop more complex applications for the iBeacon. One of the early adopters was a company called Exact Editions, which prepared the Apple Newsstand version of a number of major magazines back when "readable on iPad" was thought to be the future of print media. Exact Editions explored a "read for free" feature where partner magazines would be freely accessible to users at partnering locations like coffee shops and book stores.
This does not seem to have been a success, but using the proximity of an iBeacon to unlock some paywalled media is at least a little creative, if probably ill-advised considering the security issues we'll discuss later. The world of applications raises interesting questions about the other half of the mobile ecosystem: how did this all work on Android? iOS has built-in support for iBeacons. An operating system service scans for iBeacons and dispatches notifications to apps as appropriate. On Android, there has never been this type of OS-level support, but Android apps have access to relatively rich low-level Bluetooth functionality and can easily scan for iBeacons themselves. Several popular libraries exist for this purpose, and it's not unusual for them to be used to give ported cross-platform apps more or less equivalent functionality. These apps do need to run in the background if they're to notify the user proactively, but especially back in 2013 Android was far more generous about background work than iOS. iBeacons found expanded success through ShopKick, a retail loyalty platform that installed iBeacons in locations of some major retailers like American Eagle. These powered location-based advertising and offers in the ShopKick app as well as retailer-specific apps, which is kind of the start of a larger, more seamless network, but it doesn't seem to have caught on. Honestly, consumers just don't seem to want location-based advertising that much. Maybe because, when you're standing in an American Eagle, getting ads for products carried in the American Eagle is inane and irritating. iBeacons sort of foresaw Cooler Screens in this regard. To be completely honest, I'm skeptical that anyone ever really believed in the location-based advertising thing. I mean, I don't know, the advertising industry is pretty good at self-deception, but I don't think there were ever any real signs of hyper-local smartphone-based advertising taking off. I think the play was always data collection, and advertising and special offers just provided a convenient cover story.

Real Applications

iBeacons are one of those technologies that feels like a flop from a consumer perspective but has, in actuality, enjoyed surprisingly widespread deployments. The reason, of course, is data mining. To Apple's credit, they took a set of precautions in the design of the iBeacon iOS features that probably felt sufficient in 2013. Despite the fact that a lot of journalists described iBeacons as being used to "notify a user to install an app," that was never actually a capability (a very similar-seeming iOS feature attached to Siri actually used conventional geofencing rather than iBeacons). iBeacons only did anything if the user already had an app installed that either scanned for iBeacons when in the foreground or registered for region notifications. In theory, this limited iBeacons to companies with which consumers already had some kind of relationship. What Apple may not have foreseen, or perhaps simply accepted, is the incredible willingness of your typical consumer brand to sell that relationship to anyone who would pay. iBeacons became, in practice, just another major advancement in pervasive consumer surveillance. The New York Times reported in 2019 that popular applications were including SDKs that reported iBeacon contacts to third-party consumer data brokers. This data became one of several streams that was used to sell consumer location history to advertisers.
It's a little difficult to assign blame and credit for this surveillance economy. Apple, to their credit, kept iBeacon features in iOS relatively locked down, which suggests that they weren't trying to facilitate massive location surveillance. That said, Apple always marketed iBeacon to developers based on exactly this kind of consumer tracking and micro-targeting; they just intended for it to be done under the auspices of a single brand. That the industry would obviously form data exchanges and recruit random apps into reporting everything in your proximity isn't surprising, but maybe Apple failed to foresee it.

They certainly weren't the worst offender. Apple's promotion of iBeacon opened the floodgates for everyone else to do the same thing. During 2014 and 2015, Facebook started offering Bluetooth beacons to businesses that were ostensibly supposed to facilitate in-app special offers (though I'm not sure that those ever really materialized) but were pretty transparently just a location data collection play. Google jumped into the fray in signature Google style, with an offering that was confusing, semi-secret, incoherently marketed, and short-lived. Google's Project Beacon, or Google My Business, also shipped free Bluetooth beacons out to businesses to give Android location services a boost. Google My Business seems to have been the source of a fair amount of confusion even at the time, and we can virtually guarantee that (as reporters speculated at the time) Google was intentionally vague and evasive about the system to avoid negative attention from privacy advocates. In the case of Facebook, well, they don't have the level of opsec that Google does, so things are a little better documented:

Leaked documents show that Facebook worried that users would 'freak out' and spread 'negative memes' about the program. The company recently removed the Facebook Bluetooth beacons section from their website.

The real deployment of iBeacons and closely related third-party iBeacon-like products [1] occurred at massive scale but largely in secret. It became yet another dark project of the advertising-industrial complex, perhaps the most successful yet of a long-running series of retail consumer surveillance systems.

Payments

One interesting thing about iBeacon is how it was compared to NFC. The two really aren't that similar, especially considering the vast difference in usable ranges, but NFC was the first radio technology to be adopted for "location marketing" applications. "Tap your phone to see our menu," kinds of things. Back in 2013, Apple had rather notably not implemented NFC in its products, despite its increasing adoption on Android.

But, there is much more to this story than learning about new iPads and getting a surprise notification that you are eligible for a subsidized iPhone upgrade. What we're seeing is Apple pioneering the way mobile devices can be utilized to make shopping a better experience for consumers. What we're seeing is Apple putting its money where its mouth is when it decided not to support NFC. (MacObserver)

Some commentators viewed iBeacon as Apple's response to NFC, and I think there's more to that than you might think. In early marketing, Apple kept positioning iBeacon for payments. That's a little weird, right, because iBeacons are a purely one-way broadcast system.
Still, part of Apple's flagship iBeacon implementation was a payment system, EasyPay, in Apple's own stores. Here's how the MacObserver writer describes the purchase he made there, using his iPhone and the EasyPay system:

"We started by using the iPhone to scan the product barcode and then we had to enter our Apple ID, pretty much the way we would for any online Apple purchase [using the credit card data on file with one's Apple account]. The one key difference was that this transaction ended with a digital receipt, one that we could show to a clerk if anyone stopped us on the way out."

Apple Wallet only kinda-sorta existed at the time, although Apple was clearly already midway into a project to expand into consumer payments. It says a lot about this point in time in phone-based payments that several reporters talk about iBeacon payments as a feature of iTunes, since Apple was mostly implementing general-purpose billing by bolting it onto iTunes accounts.

It seems like what happened is that Apple committed to developing a pay-by-phone solution, but decided against NFC. To be competitive with other entrants in the pay-by-phone market, they had to come up with some kind of technical solution to interact with retail POS, and iBeacon was their choice. From a modern perspective this seems outright insane; like, Bluetooth broadcasts are obviously not the right way to initiate a payment flow, and besides, there's a whole industry-standard stack dedicated to that purpose... built on NFC.

But remember, this was 2013! EMV was not yet in meaningful use in the US; several major banks and payment networks had just committed to rolling it out in 2012, and every American can tell you that the process was long and torturous. Because of the stringent security standards around EMV, Android devices did not implement EMV until ARM secure enclaves became widely available. EMVCo, the industry body behind EMV, did not have a certification program for smartphones until 2016. Android phones offered several "tap-to-pay" solutions, from Google's frequently rebranded Google Wallet^w^wAndroid Pay^w^wGoogle Wallet to Verizon's embarrassingly rebranded ISIS^wSoftcard and Samsung Pay. All of these initially relied on proprietary NFC protocols with bespoke payment terminal implementations. This was sketchy enough, and few enough phones actually had NFC, that the most successful US pay-by-phone implementations like Walmart's and Starbucks' used barcodes for communication. It would take almost a decade before things really settled down and smartphones all just implemented EMV.

So, in that context, Apple's decision isn't so odd. They must have figured that iBeacon could solve the same "initial handshake" problem as Walmart's QR codes, but more conveniently and using radio hardware that they already included in their phones. iBeacon-based payment flows used the iBeacon only to inform the phone of what payment devices were nearby; everything else happened via interaction with a cloud service or whatever mechanism the payment vendor chose to implement. Apple used their proprietary payments system through what would become your Apple Account; PayPal slapped together an iBeacon-based fast path to PayPal transfers, etc.

I don't think that Apple's iBeacon-based payments solution ever really shipped. It did get some use, most notably by Apple, but these all seem to have been early-stage implementations, and the complete end-to-end SDK that a lot of developers expected never landed.
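To underline how little the beacon itself did in these flows, here is an entirely hypothetical sketch of the pattern described above: the advertisement only tells the app which register it is standing near, and the actual payment is a plain network call to the vendor's cloud. Every name and endpoint here is invented.

```python
import requests  # assumption: the payment vendor exposes a plain HTTPS API

# Hypothetical directory, provisioned by the vendor: beacon identity -> register endpoint
REGISTER_DIRECTORY = {
    (1, 12): "https://pay.example.com/stores/downtown/registers/12",
}

def begin_checkout(major: int, minor: int, cart_id: str, payment_token: str):
    """Use the beacon only to pick an endpoint; everything else happens over the network."""
    endpoint = REGISTER_DIRECTORY.get((major, minor))
    if endpoint is None:
        raise LookupError("not near a known register")
    return requests.post(endpoint, json={"cart": cart_id, "token": payment_token}, timeout=10)
```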
You might remember that this was a very chaotic time in phone-based payments; solutions were coming and going. When Apple Pay was properly announced a year after iBeacons, there was little mention of Bluetooth. By the time in-store Apple Pay became common, Apple had given up and adopted NFC.

Limitations

One of the great weaknesses of iBeacon was the security design, or lack thereof. iBeacon advertisements were sent in plaintext with no authentication of any type. This did, of course, radically simplify implementation, but it also made iBeacon untrustworthy for any important purpose. It is quite trivial, with a device like an Android phone, to "clone" any iBeacon and transmit its identifiers wherever you want. This problem might have killed off the whole location-based-paywall-unlocking concept had market forces not already done so. It also opens the door to a lot of nuisance attacks on iBeacon-based location marketing, which may have limited the depth of iBeacon features in major apps.

iBeacon was also positioned as a sort of local positioning system, but it really wasn't. iBeacon offers no actual time-of-flight measurements, only RSSI-based estimation of range. Even with correct on-site calibration (which can be aided by adjusting a fixed RSSI-range bias value included in some iBeacon advertisements), this type of estimation is very inaccurate, and in my little experiments with a Bluetooth beacon location library I can see swings from 30m to 70m estimated range based only on how I hold my phone. iBeacon positioning has never been accurate enough to do more than assert whether or not a phone is "near" the beacon, and "near" can take on different values depending on the beacon's transmit power.

Developers have long looked towards Bluetooth as a potential local positioning solution, and it's never quite delivered. The industry is now turning towards Ultra-Wideband or UWB technology, which combines a high-rate, high-bandwidth radio signal with a time-of-flight radio ranging protocol to provide very accurate distance measurements. Apple is, once again, a technical leader in this field, and UWB radios have been integrated into the iPhone 11 and later.

Senescence

iBeacon arrived to some fanfare, quietly proliferated in the shadows of the advertising industry, and then faded away. The Wikipedia article on iBeacons hasn't really been updated since support on Windows Phone was relevant. Apple doesn't much talk about iBeacons any more, and their compatriots Facebook and Google both sunset their beacon programs years ago.

Part of the problem is, well, the pervasive surveillance thing. The idea of Bluetooth beacons cooperating with your phone to track your every move proved unpopular with the public, and so progressively tighter privacy restrictions in mobile operating systems and app stores have clamped down on every grocery store app selling location data to whatever broker bids the most. I mean, they still do, but it's gotten harder to use Bluetooth as an aid. Even Android, the platform of "do whatever you want in the background, battery be damned," strongly discourages Bluetooth scanning by non-foreground apps.

Still, the basic technology remains in widespread use. BLE beacons have absolutely proliferated; there are plenty of apps you can use to list nearby beacons, and there almost certainly are nearby beacons. One of my cars has, like, four separate BLE beacons going on all the time, related to a phone-based keyless entry system that I don't think the automaker even supports any more.
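As an aside on the ranging problem mentioned under Limitations: the "estimated range" that beacon libraries report generally comes from a simple log-distance path loss model, using the calibration byte (expected RSSI at one meter) and a guessed path loss exponent. A sketch with made-up numbers shows why the estimates swing so wildly:

```python
def estimate_range_m(rssi: int, measured_power: int, path_loss_exponent: float = 2.0) -> float:
    """Log-distance path loss model. measured_power is the calibration value from the
    beacon (expected RSSI at 1 m); path_loss_exponent is an environmental guess
    (roughly 2.0 in free space, 3-4 indoors), which is exactly the problem."""
    return 10 ** ((measured_power - rssi) / (10 * path_loss_exponent))

# A 10 dB swing in RSSI (easily caused by how you hold the phone) moves the
# estimate by more than 3x:
print(estimate_range_m(-70, measured_power=-59))  # ~3.5 m
print(estimate_range_m(-80, measured_power=-59))  # ~11.2 m
```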
Bluetooth beacons, as a basic primitive, are so useful that they get thrown into all kinds of applications. My earbuds are a BLE beacon, which the (terrible, miserable, no-good) Bose app uses to detect their proximity when they're paired to another device. A lot of smart home devices like light bulbs are beacons. The irony, perhaps, of iBeacon-based location tracking is that it's a victim of its own success. There is so much "background" BLE beacon activity that you scarcely need to add purpose-built beacons to track users, and only privacy measures in mobile operating systems and the beacons themselves (some of which rotate IDs) save us.

Apple is no exception to the widespread use of Bluetooth beacons: iBeacon lives on in virtually every Apple device. If you do try out a Bluetooth beacon scanning app, you'll discover pretty much every Apple product in a 30 meter radius. From MacBooks Pro to AirPods, almost all Apple products transmit iBeacon advertisements to their surroundings. These are used for the initial handshake process of peer-to-peer features like AirDrop, and Find My/AirTag technology seems to be derived from the iBeacon protocol (in the sense that anything can be derived from such a straightforward design). Of course, pretty much all of these applications now randomize identifiers to prevent passive use of device advertisements for long-term tracking.

Here's some good news: iBeacons are readily available in a variety of form factors, and they are very cheap. Lots of libraries exist for working with them. If you've ever wanted some sort of location-based behavior for something like home automation, iBeacons might offer a good solution. They're neat, in an old technology way. Retrotech from the different world of 2013.

It's retro in more ways than one. It's funny, and a bit quaint, to read the contemporary privacy concerns around iBeacon. If only they had known how bad things would get! Bluetooth beacons were the least of our concerns.

[1] Things can be a little confusing here because the iBeacon is such a straightforward concept, and Apple's implementation is so simple. We could define "iBeacon" as including only officially endorsed products from Apple affiliates, or as including any device that behaves the same as official products (e.g. by using the iBeacon BLE advertisement type codes), or as any device that is performing substantially the same function (but using a different advertising format). I usually mean the last of these three, as there isn't really much difference between an iBeacon and ten million other BLE beacons that are doing the same thing with a slightly different identifier format. Facebook and Google's efforts fall into this camp.

2025-04-18 white alice

When we last talked about Troposcatter, it was Pole Vault. Pole Vault was the first troposcatter communications network, on the east coast of Canada. It would not be alone for long. By the time the first Pole Vault stations were complete, work was already underway on a similar network for Alaska: the White Alice Communication System, WACS.

Alaska has long posed a challenge for communications. In the 1860s, Western Union wanted to extend their telegraph network from the United States into Europe. Undersea telegraph cables were still notional (although the technology would be demonstrated shortly after), and it seemed that a route that minimized the ocean crossing would be preferable---of course, that route maximized the length on land, stretching through present-day Alaska and Siberia on each side of the Bering Strait. This task proved more formidable than Western Union had imagined, and the first transatlantic telegraph cable (on a much further south crossing) was completed before the arctic segments of the overland route. The "Western Union Telegraph Expedition" abandoned its work, leaving a telegraph line well into British Columbia that would serve as one of the principal communications assets in the region for decades after.

This ill-fated telegraph line failed to link San Francisco to Moscow, but its aftermath included a much larger impact on Russian interests in North America: the purchase of Alaska in 1867. Shortly after, the US military began its expansion into the new frontier. The Army Signal Corps, mostly to fulfill its function in observing the weather, built and staffed small installations that stretched further and further west. Later, in the 1890s, a gold rush brought a sudden influx of American settlers to Alaska's rugged terrain. The sudden economic importance of the Klondike, and the rather colorful personalities of the prospectors looking to exploit it, created a much larger need for military presence. Fortuitously, many of the forts present had been built by the Signal Corps, which had already started on lines of communication. Construction was difficult, though, and without Alaskan communications as a major priority there was only minimal coverage.

Things changed in 1900, when Congress appropriated a substantial budget to the Washington-Alaska Military Cable and Telegraph System. The Signal Corps set on Alaska like, well, like an army, and extensive telegraph and later telephone lines were built to link the various military outposts. Later renamed the Alaska Communications System, these cables brought the first telecommunication to much of Alaska. The arrival of the telegraph was quite revolutionary for remote towns, which could now receive news in real time that had previously been delayed by as much as a year [1]. Telegraphy was important to civilians as well, something that Congress had anticipated: The original act authorizing the Alaska Communications System dictated that it would carry commercial traffic as well. The military had an unusual role in Alaska, and one aspect of it was telecommunications provider.

In 1925, an outbreak of diphtheria began to claim the lives of children in Nome, a town in far western Alaska on the Seward Peninsula. The daring winter delivery of antidiphtheria serum by dog sled is widely remembered due to its tangential connection to the Iditarod, but there were two sides of the "serum run."
The message from Nome's sole doctor requesting the urgent shipment was transmitted from Nome to the Public Health Service in DC over the Alaska Communications System. It gives us some perspective on the importance of the telegraph in Alaska that the 600 mile route to Nome took five days and many feats of heroism---but at the same time could be crossed instantaneously by telegrams. The Alaska Communications System included some use of radio from the beginning. A pair of HF radio stations specifically handled traffic for Nome, covering a 100-mile stretch too difficult for even the intrepid Signal Corps. While not a totally new technology to the military, radio was quite new to the telegraph business, and the ACS to Nome was probably the first commercial radiotelegraph system on the continent. By the 1930s, the condition of the Alaskan telegraph cables had decayed while demand for telephony had increased. Much of ACS was upgraded and modernized to medium-frequency radiotelephone links. In towns small and large, even in Anchorage itself, the sole telephone connection to the contiguous United States was an ACS telephone installed in the general store. Alaskan communications became an even greater focus of the military with the onset of the Second World War. A few weeks after Pearl Harbor, the Japanese attacked Fort Mears in the Aleutian Islands. Fort Mears had no telecommunications connections, so despite the proximity of other airbases support was slow to come. The lack of a telegraph or telephone line contributed to 43 deaths and focused attention on the ACS. By 1944, the Army Signal Corps had a workforce of 2,000 dedicated to Alaska. WWII brought more than one kind of attention to Alaska. Several Japanese assaults on the Aleutian islands represented the largest threats to American soil outside of Pearl Harbor, showing both Alaska's vulnerability and the strategic importance given to it by its relative proximity to Eurasia. WWII ended but, in 1949, the USSR demonstrated an atomic weapon. A combination of Soviet expansionism and the new specter of nuclear war turned military planners towards air defense. Like the Canadian Maritimes in the East, Alaska covered a huge swath of the airspace through which Soviet bombers might approach the US. Alaska was, once again, a battleground. The early Cold War military buildup of Alaska was particularly heavy on air defense. During the late '40s and early '50s, more than a dozen new radar and control sites were built. The doctrine of ground-controlled interception requires real-time communication between radar centers, stressing the limited number of voice channels available on the ACS. As early as 1948, the Signal Corps had begun experiments to choose an upgrade path. Canadian early-warning radar networks, including the Distant Early Warning Line, were on the drawing board and would require many communications channels in particularly remote parts of Alaska. Initially, point-to-point microwave was used in relatively favorable terrain (where the construction of relay stations about every 50 miles was practical). For the more difficult segments, the Signal Corps found that VHF radio could provide useful communications at ranges over 100 miles. VHF radiotelephones were installed at air defense radar stations, but there was a big problem: the airspace surveillance radar of the 1950s also operated in the VHF band, and caused so much interference with the radiotelephones that they were difficult to use. 
The radar stations were probably the most important users of the network, so VHF would have to be abandoned. In 1954, a military study group was formed to evaluate options for the ACS. That group, in turn, requested a proposal from AT&T. Bell Laboratories had been involved in the design and evaluation of Pole Vault, the first sites of which had been completed two years before, so they naturally positioned troposcatter as the best option.

It is worth mentioning the unusual relationship AT&T had with Alaska, or rather, the lack of one. While the Bell System enjoyed a monopoly on telephony in most of the United States [2], they had never expanded into Alaska. Alaska was only a territory, after all, and a very sparsely populated one at that. The paucity of long-distance leads to or from Alaska (only one connected to Anchorage, for example) limited the potential for integration of Alaska into the broader Bell System anyway. Long-distance telecommunications in Alaska were a military project, and AT&T was involved only as a vendor.

Because of the high cost of troposcatter stations, proven during Pole Vault construction, a hybrid was proposed: microwave stations could be spaced every 50 miles along the road network, while troposcatter would cover the long stretches without roads. In 1955, the Signal Corps awarded Western Electric a contract for the White Alice Communications System. The Corps of Engineers surveyed the locations of 31 sites, verifying each by constructing a temporary antenna tower. The Corps of Engineers led construction of the first 11 sites, and the final 20 were built on contract by Western Electric itself. All sites used radio equipment furnished by Western Electric and were built to Western Electric designs.

Construction was far from straightforward. Difficult conditions delayed completion of the original network until 1959, two years later than intended. A much larger issue, though, was the budget. The original WACS was expected to cost $38 million. By the time the first 31 sites were complete, the bill totaled $113 million---equivalent to over a billion dollars today. Western Electric had underestimated not only the complexity of the sites but the difficulty of their construction. A WECo report read:

On numerous occasions, the men were forced to surrender before the onslaught of cold, wind and snow and were immobilized for days, even weeks. This ordeal of waiting was ofttimes made doubly galling by the knowledge that supplies and parts needed for the job were only a few miles distant but inaccessible because the white wall of winter had become impenetrable.

WACS's initial capability included 31 stations, of which 22 were troposcatter and the remainder only microwave (using Western Electric's TD-2). A few stations were equipped with both troposcatter and microwave, serving as relays between the two carriers.

In 1958, construction started on the Ballistic Missile Early Warning System, or BMEWS. BMEWS was an over-the-horizon radar system intended to provide early warning of a Soviet attack. BMEWS would provide as little as 15 minutes of warning, requiring that alerts reach NORAD in Colorado as quickly as possible. One BMEWS set was installed in Greenland, where the Pole Vault system was expanded to provide communications. Similarly, the BMEWS set at Clear Missile Early Warning Station in central Alaska relied on White Alice.
Planners were concerned about the ability of the Soviet Union to suppress an alert by destroying infrastructure, so two redundant chains of microwave sites were added to White Alice. One stretched from Clear to Ketchikan, where it connected to an undersea cable to Seattle. The other went east, towards Canada, where it met existing telephone cables on the Alaska Highway.

A further expansion of White Alice started the next year, in 1959. Troposcatter sites were extended through the Aleutian Islands in "Project Stretchout" to serve new DEW Line stations. During the 1960s, existing WACS sites were expanded and new antennas were installed at Air Force installations. These were generally microwave links connecting the airbases to existing troposcatter stations. In total, WACS reached 71 sites.

Four large sites served as key switching points with multiple radio links and telephone exchanges. Pedro Dome, for example, had a 15,000 square foot communications building with dormitories, a power plant, and extensive equipment rooms. Support facilities included a vehicle maintenance building, storage warehouse, and extensive fuel tanks. A few WACS sites even had tramways for access between the "lower camp" (where equipment and personnel were housed) and the "upper camp" (where the antennas were located)... although they apparently did not fare well in the Alaskan conditions. While Western Electric had initially planned for six people and 25 kW of power at each station, the final requirements were typically 20 people and 120-180 kW of generator capacity. Some sites stored over half a million gallons of fuel---conditions often meant that resupply was only possible during the summer.

Besides troposcatter and microwave radios, the equipment included tandem telephone exchanges. These are described in a couple of documents as "ATSS-4A," ATSS standing for Alaska Telephone Switching System. Based on the naming and some circumstantial evidence, I believe these were Western Electric 4A crossbar exchanges. They were later incorporated into AUTOVON, but also handled commercial long-distance traffic between Alaskan towns.

With troposcatter come large antennas, and depending on connection lengths, WACS troposcatter antennas ranged from 30' dishes to 120' "billboard" antennas similar to those seen at Pole Vault sites. The larger antennas handled up to 50 kW of transmit power. Some 60' and 120' antennas included their own fuel tanks and steam plants that heated the antennas through winter to minimize snow accumulation.

WACS represented an enormous improvement in Alaskan communications. The entire system was multi-channel, with redundancy in many key parts of the network. Outside of the larger cities, WACS often brought the first usable long-distance telephone service. Even in Anchorage, WACS provided the only multi-channel connection. Despite these achievements, WACS was set for much the same fate as other troposcatter systems: obsolescence after the invention of communications satellites. The experimental satellites Telstar 1 and 2 launched in the early 1960s, and the military began a shift towards satellite communications shortly after. Besides, the formidable cost of WACS had become a political issue. Maintenance of the system overran estimates by just as much as construction, and placing this cost on taxpayers was controversial since much of the traffic carried by the system consisted of regular commercial telephone calls.
Besides, a general reticence to allocate money to WACS had led to the decay of the system. WACS capacity was insufficient for the rapidly increasing long-distance telephone traffic of the '60s, and due to decreased maintenance funding, reliability was beginning to decline.

The retirement of a Cold War communications system is not unusual, but the particular fate of WACS is. It entered a long second life. After acting as the sole long-distance provider for 60 years, the military began its retreat. In 1969, Congress passed the Alaska Communications Disposal Act. It called for complete divestment of the Alaska Communications System and WACS to a private owner determined by a bidding process. Several large independent communications companies bid, but the winner was RCA. Committing to a $28.5 million purchase price followed by $30 million in upgrades, RCA reorganized the Alaska Communications System as RCA Alascom. Transfer of the many ACS assets from the military to RCA took 13 years, involving both outright transfer of property and complex lease agreements on sites colocated with military installations.

RCA's interest in Alaskan communications was closely connected to the coming satellite revolution: RCA had just built the Bartlett Earth Station, the first satellite ground station in Alaska. While Bartlett was originally an ACS asset owned by the Signal Corps, it became just the first of multiple ground stations that RCA would build for Alascom. Several of the new ground stations were colocated with WACS sites, establishing satellite as an alternative to the troposcatter links. Alascom appears to have been the first domestic satellite voice network in commercial use, initially relying on a Canadian communications satellite [3].

In 1974, SATCOM 1 and 2 launched. These were not the first commercial communications satellites, but they represented a significant increase in capacity over previous commercial designs and are sometimes thought of as the true beginning of the satellite communications era. Both were built and owned by RCA, and Alascom took advantage of the new transponders. At the same time, Alascom launched a modernization effort. 22 of the former WACS stations were converted to satellite ground stations, a project that took much of the '70s as Alascom struggled with the same conditions that had made WACS so challenging to begin with. Modernization also included the installation of DMS-10 telephone switches and conversion of some connections to digital.

A series of regulatory and business changes in the 1970s led RCA to step away from the domestic communications industry. In 1979, Alascom was sold to Pacific Power and Light, this time for $200 million and $90 million in debt. PP&L continued on much the same trajectory, expanding the Alascom system to over 200 ground stations and launching the satellite Aurora I---the first of a small series of satellites that gave Alaska the distinction of being the only state with its own satellite communications network. For much of the '70s to the '00s, large parts of Alaska relied on satellite relay for calls between towns.

In a slight twist of irony considering its long lack of interest in the state, AT&T purchased parts of Alascom from PP&L in 1995, forming AT&T Alascom, which has since faded away as an independent brand. Other parts of the former ACS network, generally non-toll (or non-long-distance) operations, were split off into then-PP&L subsidiary CenturyTel.
While CenturyTel has since merged into CenturyLink, the Alaskan assets were first sold to Alaska Communications. Alaska Communications considers itself the successor of the ACS heritage, giving it a claim to over 100 years of communications history. As electronics technology has continued to improve, penetration of microwave relays into inland Alaska has increased. Fewer towns rely on satellite today than in the 1970s, and the half-second latency to geosynchronous orbit is probably not missed. Alaska communications have also become more competitive, with long-distance connectivity available from General Communications (GCI) as well as AT&T and Alaska Communications.

Still, the legacy of Alaska's complex and expensive long-distance infrastructure echoes in our telephone bills. State and federal regulators have allowed for extra fees on telephone service in Alaska and calls into Alaska, both intended to offset the high cost of infrastructure. Alaska is generally the most expensive long-distance calling destination in the United States, even when considering the territories.

But what of White Alice? The history of the Alaska Communications System's transition to private ownership is complex and not especially well documented. While RCA's winning bid following the Alaska Communications Disposal Act set the big picture, the actual details of the transition were established by many individual negotiations spanning over a decade. Depending on the station, WACS troposcatter sites generally conveyed to RCA in 1973 or 1974. Some, colocated with active military installations, were leased rather than included in the sale. RCA generally decommissioned each WACS site once a satellite ground station was ready to replace it, either on-site or nearby. For some WACS sites, this meant the troposcatter equipment was shut down in 1973. Others remained in use later. The Boswell Bay troposcatter station seems to have been the last turned down, in 1985.

The 1980s were decidedly the end of WACS. Alascom's sale to PP&L cemented the plan to shut down all troposcatter operations, and the 1980 Comprehensive Environmental Response, Compensation, and Liability Act led to the establishment of the Formerly Used Defense Sites (FUDS) program within DoD. Under FUDS, the Corps of Engineers surveyed the disused WACS sites and found nearly all had significant contamination by asbestos (used in seemingly every building material in the '50s and '60s) and leaked fuel oil. As a result, most White Alice sites were demolished between 1986 and 1999. The cost of demolition and remediation in such remote locations was sometimes greater than the original construction. No WACS sites remain intact today.

Postscript: A 1988 Corps of Engineers historical inventory of WACS, prepared due to the demolition of many of the stations, mentions that meteor burst communications might replace troposcatter. Meteor burst is a fascinating communications mode, similar in many ways to troposcatter but with the twist that the reflecting surface is not the troposphere but the ionized trail of meteors entering the atmosphere. Meteor burst connections only work when there is a meteor actively vaporizing in the upper atmosphere, but atmospheric entry of small meteors is common enough that meteor burst communications are practical for low-rate packetized communications. For example, meteor burst has been used for large weather and agricultural telemetry systems.
The Alaska Meteor Burst Communications System was implemented in 1977 by several federal agencies, and was used primarily for automated environmental telemetry. Unlike most meteor burst systems, though, it seems to have been used for real-time communications by the BLM and FAA. I can't find much information, but they seem to have built portable teleprinter terminals for this use. Even more interesting, the Air Force's Alaskan Air Command built its own meteor burst network around the same time. This network was entirely for real-time use, and demonstrated the successful transmission of radar track data from radar stations across the state to Elmendorf Air Force Base. Even better, the Air Force experimented with the use of meteor burst for intercept control by fitting aircraft with a small speech synthesizer that translated coded messages into short phrases. The Air Force experimented with several meteor burst systems during the Cold War, anticipating that it might be a survivable communications system in wartime. More details on these will have to fill a future article.

[1] Crews of the Western Union Telegraph Expedition reportedly continued work for a full year after the completion of the transatlantic telegraph cable, because news of it hadn't reached them yet.

[2] Eliding here some complexities like GTE and their relationship to the Bell System.

[3] Perhaps owing to the large size of the country and many geographical challenges to cable laying, Canada has often led North America in satellite communications technology.


More in technology

There's not much point in buying Commodore

Bona fides: Commodore 128DCR on my desk with a second 1571, Ultimate II+-L and a ZoomFloppy, three SX-64s I use for various projects, heaps of spare 128DCRs, breadbox 64s, 16s, Plus/4s and VIC-20s on standby, multiple Commodore collectables (blue-label PET 2001, C64GS, 116, TV Games, 1551, 1570), a couple A500s, an A3000 and an AmigaOS 3.9 QuikPak A4000T with '060 CPU, Picasso IV RTG card and Ethernet. I wrote for COMPUTE!'s Gazette (during the General Media years) and Loadstar. Here's me with Jack Tramiel and his son Leonard from a Computer History Museum event in 2007. It's on my wall.

This is all prompted by a Retro Recipes video (not affiliated) stating that, in answer to a request for a very broad license to distribute under the Commodore name, Commodore Corporation BV instead simply proposed he buy them out, which would obviously transfer the trademark to him outright. Amiga News has a very nice summary.

There was a time when Commodore intellectual property and the Commodore brand had substantial value, and that time probably ended around the mid-2000s. Prior to that point, after Commodore went bankrupt in 1994, a lot of residual affection for the Amiga and the 64/128 still circulated, the AmigaOS still had viability for some applications and there might have been something to learn from the hardware, particularly the odder corners like the PA-RISC Hombre. That's why there was so much turmoil over the corpse, from Escom's abortive buyout to the split of the assets. Today the Commodore name (after many shifts and purchases and reorgs) is held by Commodore Corporation BV, a Netherlands company, which licenses it out. Pretty much the rest of it is split into the hardware patents (now with Acer after their buyout of Gateway 2000) and the remaining IP (Amiga Corporation, effectively Cloanto).

The Commodore brand after the company's demise has had an exceptionally poor track record in the market. Many of us remember the 1999 Commodore 64 Web.it, licensed by Escom, which was a disastrously bad set-top 486 PC sold as an "Internet computer" whose only link to CBM was the Commodore name and a built-in 64 emulator. Reviewers savaged it and they've become collectors' items purely for the lulz. In 2007, Tulip licensee Commodore Gaming tried again with PC gaming rigs sold as the Commodore XX, GS, GX and G (are these computers or MPAA ratings?) and special wraps called C=kins (say it "skins"). I went to the launch party in L.A. — 8-Bit Weapon was there, hi Seth and Michelle! — and I even have one of their T-shirts around someplace. The company subsequently ran out of money and their most consequential legacy was the huge and heavily branded case. More recently, in 2010, another American company called itself Commodore USA LLC and tried developing new keyboard computers, most notably the (first) Commodore 64x. These were otherwise underpowered PCs using mini-ATX motherboards in breadboard-like cases where cooling was an obvious issue. They also tried selling "VICs" (which didn't look like VIC-20s) and "Amigas" (which were Intel i7 systems), and introduced their own Linux-based Commodore OS. Opinions were harsh and the company went under after its CEO died in 2012. Dishonourable mentions include Tulip-Yeahronimo's 2004 MP3 player line, sold as the (inexplicably) e-VIC, m-PET and f-PET, and the PET smartphone, a 2015 otherwise unremarkable Android device with its own collection of on-board emulators. No points for guessing how much of an impact those made.
And none of this is really specific to Commodore, either: look at the shambling corpse of Atari SA, made to dance on decaying strings by the former Infogrames principals. I mean, cryptocurrency and hotels straight out of Blade Runner — really?

The exception to the rule was the 2004 C64DTV, a Tulip-licensed all-in-one direct-to-TV console containing a miniaturized and enhanced Commodore 64 designed by Jeri Ellsworth in a Competition Pro-style joystick. It played many built-in games from flash storage but more importantly could be easily modded into a distinct Commodore computer of its own, complete with keyboard and IEC serial ports, and VICE even emulates it. It sold well enough to go through two additional hardware revisions, and the system turned up in other contemporary DTVs (like the DTV3 in the Hummer DTV game). There are also the 2019 "TheC64" machines, in both mini and full-size varieties (not affiliated), which are pretty much modern direct-to-TV systems in breadbin cases that run built-in games under emulation. The inclusion of USB "Comp Pro" styled joysticks is an obvious secondary homage to the C64DTV. Notably, Retro Games Ltd licensed the Commodore 64 ROMs from Cloanto but didn't license the Commodore trademark, so the name Commodore never appears anywhere on the box or the machine (though you decide if the trade dress is infringing).

The remnant of the 64x was its case moulds, which were bought by My Retro Computer Ltd in the UK after Commodore USA LLC went under, and that's where this story picks up, with My Retro Computer selling an officially licensed new version of the 64x (also not affiliated) after Commodore Corporation BV granted permission in 2022. This new 64x comes in three pre-built configurations or as a bare case. By buying out the Commodore name they would get to sell these without the (frankly exorbitant) fees CC BV was charging and extend the brand to other existing Commodore re-creations like the Mega 65, but the video also has more nebulous aims, such as other retro Commodore products (Jeri Ellsworth herself appears in this video) or something I didn't quite follow about a Commodore charity arcade for children's hospitals, or other very enthusiastically expressed yet moderately unclear goals.

I've been careful not to say there's no point in buying the Commodore trademark — I said there's not much. There is clearly a market for reimplementing classic Commodore hardware; Ellsworth herself proved it with the C64DTV, and current devices like the (also not affiliated with any) Mega 65, Ultimate64 and Kawari VIC-II still sell. But outside of the retro niche, Commodore as a brand name is pretty damn dead. Retro items sell only small numbers in boutique markets. Commodore PCs and Commodore smartphones don't sell because the Commodore name adds nothing now to a PC or handset, and the way we work with modern machines — for better or worse — is worlds different than how we worked with a 1982 home computer. No one expects to interact with, say, a Web page or a smartphone app in the same way we used a BASIC program or a 5.25" floppy. Maybe we should, but we don't.

Furthermore, there's also the very pertinent question of how to steward such a community resource. The effort is clearly earnest, genuine and heartfelt, but that's not enough without governance.
Letting these obviously hobbyist projects become full-fledged members of the extended Commodore family seems reasonable and even appropriate, but then there's the issue of preventing the Shenzhen back-alley cloners from ripping them (and you) off. Plus, even these small products do make some money. What's FRAND in a situation like this? How would you enforce it? Should you enforce it? Does everyone who chips in get some fraction of a vote or some piece of the action? If the idea is only to allow the Commodore name to be applied to projects of sufficient quality and/or community benefit, who decides?

Better to let it rest in peace and stop encouraging these bloodsuckers to drain what life and goodwill remain in the Commodore name. The crap products that came before only benefited the licensor and just made the brand more tawdry. CC BV only gets to do what it does because it's allowed to. TheC64 systems sold without the Commodore trademark because it was obvious what they were and what they did; Mega 65s and Ultimate64s are in the same boat. Commodore enthusiasts like me know what these systems are. We'll buy them on their merits, or not, whether the Commodore name is on the label or not (and they will likely be cheaper if it isn't). CC BV reportedly has been trying to sell off the trademark for a while, which seems to hint that they too recognize the futility. Don't fall into their trap.

RIP Bill Atkinson

As posted by his family (Facebook link), Bill Atkinson passed away on June 5 from pancreatic cancer at the age of 74. The Macintosh would not have been the same without him (QuickDraw, MacPaint, HyperCard, and so much more). Rest in peace.

This robotic tongue drummer bangs out all the ambient hits

If you like to listen to those “deep focus” soundtracks that are all ambient and relaxing, then you’ve heard a tongue drum in action. A tongue drum, or tank drum, is a unique percussion instrument traditionally made from an empty propane cylinder — though purpose-built models are now common. Several tongues are cut into one […]

Lenovo ThinkCentre M900 Tiny: how does it fare as a home server?

My evenings of absent-minded local auction site scrolling [1] paid off: I now own a Lenovo ThinkCentre M900 Tiny. It’s relatively old, being manufactured in 2016 [2], but it’s tiny and has a lot of useful life left in it. It’s also featured in the TinyMiniMicro series by ServeTheHome. I managed to get it for 60 EUR plus about 4 EUR shipping, and it comes with solid specifications:

- CPU: Intel i5-6500T
- RAM: 16GB DDR4
- Storage: 256GB SSD
- Power adapter included

The price is good compared to similar auctions, but was it worth it? Yes, yes it was.

I have been running a ThinkPad T430 as a server for a while now, since October 2024. It served me well in that role and would’ve served me for even longer if I wanted to, but I had an itch for a project that didn’t involve renovating an apartment. [3]

Power usage

One of my main curiosities was around the power usage. Will this machine beat the laptop in terms of efficiency while idling and running normal home server workloads? Yes, yes it does.

After booting into Windows 11 and letting things calm down a bit, the lowest idle power numbers I saw were around 8 W. This concludes the testing on Windows. On Linux (Fedora Server 42), the idle power usage was around 6.5 W to 7 W. After running powertop --auto-tune, I ended up getting that down to 6.1 W - 6.5 W.

This is much lower than the numbers that ServeTheHome got, which were around 11-13 W (120V circuit). My measurements are made in Europe, Estonia, where we have 240V circuits. You may be able to find machines where the power usage is even lower. Louwrentius made an idle power comparison on an HP EliteDesk Mini G3 800 where they measured it at 4 W. That might also be due to other factors in play, or differences in measurement tooling.

During normal home server operation with 5 SATA SSD-s connected (4 of them with USB-SATA adapters), I have observed power consumption being around 11-15 W, with peaks around 40 W. On a pure CPU load with stress -c 8, I saw power consumption being around 32 W. Formatting the internal SATA SSD added 5 W to that figure.

USB storage, are you crazy?

Yes. But hear me out.

Back in 2021, I wrote about USB storage being a very bad idea, especially on BTRFS. I’ve learned a lot over the years, and BTRFS has received continuous improvements as well. In my ThinkPad T430 home server setup, I had two USB-connected SSD-s running in RAID0 for over half a year, and it was completely fine unless you accidentally bumped into the SSD-s.
USB-connected storage is fine under the right circumstances:

- the cables are not damaged
- the cables are not at a weird angle or twisted
  - I actually had issues with this point: my very cool and nice cable management resulted in one disk having connectivity issues, which I fixed by relieving stress on the cables and routing them differently
- the connected PC does not have chronic overheating issues
- the whole setup is out of the reach of cats, dogs, children and clumsy sysadmin cosplayers
- the USB-SATA adapters pass through the device ID and S.M.A.R.T information to the host
  - the device ID part especially is key to avoiding issues with various filesystems (especially ZFS) and storage pool setups
  - the ICY BOX IB-223U3a-B is a good option that I have personally been very happy with, and it’s what I’m using in this server build
  - a lot of adapters (mine included) don’t support running SSD TRIM commands to the drives, which might be a concern
    - has not been an issue for over half a year with those ICY BOX adapters, but it’s something to keep in mind
- you are not using an SBC as the home server
  - even a Raspberry Pi 4 can barely handle one USB-powered SSD
  - not an issue if you use an externally powered drive, or a USB DAS

After a full BTRFS scrub and a few days of running, it seems fine. Plus it looks sick as hell with the identical drives stacked on top. All that’s missing are labels specifying which drive is which, but I’m sure that I’ll get to that someday, hopefully before a drive failure happens. In a way, this type of setup best represents what a novice home server enthusiast may end up with: a tiny, power-efficient PC with a bunch of affordable drives connected.

Less insane storage ideas for a tiny PC

There are alternative options for handling storage on a tiny 1 liter PC, but they have some downsides that I don’t want to be dealing with right now.

A USB DAS allows you to handle many drives with ease, but they are also damn expensive. If you pick wrong, you might also end up with one where the USB-SATA chip craps out under high load, which will momentarily drop all the drives, leaving you with a massive headache to deal with. Cheaper USB-SATA docks are more prone to this, but I cannot confirm or deny if more expensive options have the same issue. Running individual drives sidesteps this issue and moves any potential issues to the host USB controller level. There is also a distinct lack of solutions that are designed around 2.5" drives only. Most of them are designed around massive and power-hungry 3.5" drives. I just want to run my 4 existing SATA SSD-s until they crap out completely. An additional box that does stuff generally adds to the overall power consumption of the setup as well, which I am not a big fan of. Lowering the power consumption of the setup was the whole point!

I can’t rule out testing USB DAS solutions in the future as they do seem handy for adding storage to tiny PC-s and laptops with ease, but for now I prefer going the individually connected drives route, especially because I don’t feel like replacing my existing drives; they still have about 94% SSD health in them after 3-4 years of use, and new drives are expensive.

Or you could go full jank and use that one free NVMe slot in the tiny PC to add more SATA ports or break out to other devices, such as a PCIe HBA, and introduce a lot of clutter to the setup with an additional power supply, cables and drives. Or use 3.5" external hard drives with separate power adapters.
It’s what I actually tried out back in 2021, but I had some major annoyances with the noise.

Miscellaneous notes

Here are some notes on everything else that I’ve noticed about this machine.

The PC is quite efficient as demonstrated by the power consumption numbers, and as a result it runs very cool, idling around 30-35 °C in a ~22-24 °C environment. Under a heavy load, the CPU temperatures creep up to 65-70 °C, which is perfectly acceptable. The fan does come on at higher load and it’s definitely audible, but in my case it runs in a ventilated closet, so I don’t worry about that at all.

The CPU (Intel i5-6500T) is plenty fast for all sorts of home server workloads with its 4 CPU cores and clock speeds of 2.7-2.8 GHz under load.

The UEFI settings offered a few interesting options that I decided to change; the rest are set to default. There is an option to enable an additional C-state for even better power savings. For home server workloads, it was nice to see a setting that allows you to boot the PC without a keyboard being attached, found under the “Keyboardless operation” setting. I guess that in some corporate environments disconnected keyboards are such a common helpdesk issue that it necessitates having this option around.

Closing thoughts

I just like these tiny PC boxes a lot. They are tiny, fast and have a very solid construction, which makes them feel very premium in your hands. They are also perfectly usable, extensible and can be an absolute bargain at the right price. With solid power consumption figures that are only a few watts off of a Raspberry Pi 5, it might make more sense to get a TinyMiniMicro machine for your next home server. I’m definitely very happy with mine.

[1] well, at least it beats doom-scrolling social media. ↩︎

[2] yeah, I don’t like being reminded of being old, too. ↩︎

[3] there are a lot of similarities between construction/renovation work and software development, but that’s a story for another time. ↩︎
