

Last week, someone leaked a spreadsheet of SoundThinking sensors to Wired. You are probably asking "What is SoundThinking?" because the company rebranded last year. They used to be called ShotSpotter, and their outdoor acoustic gunfire detection system still goes by the ShotSpotter name. ShotSpotter has attracted a lot of press and plenty of criticism for the gunfire detection service they provide to many law enforcement agencies in the US. The system involves installing acoustic sensors throughout a city, which use some sort of signature matching to detect gunfire and then use time of flight to determine the likely source. One of the principal topics of criticism is the immense secrecy with which they operate: ShotSpotter protects information on the location of its sensors as if it were a state secret, and does not disclose it even to the law enforcement agencies that are its customers. This secrecy attracts accusations that ShotSpotter's claims of efficacy cannot be independently...
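The passage describes the technique only in broad strokes; ShotSpotter's actual processing isn't public. As a minimal sketch of the "time of flight" idea (multilateration from arrival times at known sensor positions, with a made-up sensor layout and an assumed constant speed of sound), something like this recovers a source location:

    # Hypothetical multilateration sketch -- not ShotSpotter's algorithm.
    # Estimate a sound source position from arrival times at known sensors.
    import numpy as np
    from scipy.optimize import least_squares

    SPEED_OF_SOUND = 343.0  # m/s, assumed constant

    sensors = np.array([[0.0, 0.0], [800.0, 50.0], [400.0, 900.0], [-300.0, 600.0]])
    true_source = np.array([250.0, 400.0])
    emission_time = 5.0  # unknown in practice
    arrivals = emission_time + np.linalg.norm(sensors - true_source, axis=1) / SPEED_OF_SOUND

    def residuals(params):
        # params = [x, y, t_emit]; compare predicted and observed arrival times
        xy, t_emit = params[:2], params[2]
        return t_emit + np.linalg.norm(sensors - xy, axis=1) / SPEED_OF_SOUND - arrivals

    guess = [*sensors.mean(axis=0), arrivals.min()]
    fit = least_squares(residuals, guess)
    print(fit.x[:2])  # close to the true source at (250, 400)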


More from computers are bad

2025-04-06 Airfone

We've talked before about carphones, and certainly one of the only ways to make phones even more interesting is to put them in modes of transportation. Installing telephones in cars made a lot of sense when radiotelephones were big and required a lot of power; and they faded away as cellphones became small enough that you could have a carphone even outside of your car. There is one mode of transportation where the personal cellphone is pretty useless, though: air travel. Most readers are probably well aware that the use of cellular networks while aboard an airliner is prohibited by FCC regulations. There are a lot of urban legends and popular misconceptions about this rule, and fully explaining it would probably require its own article. The short version is that it has to do with the way cellular devices are certified and cellular networks are planned. The technical problems are not impossible to overcome, but honestly, there hasn't been a lot of pressure to make changes. One line of argument that used to make an appearance in cellphones-on-airplanes discourse is the idea that airlines or the telecom industry supported the cellphone ban because it created a captive market for in-flight telephone services. Wait, in-flight telephone services? That theory has never had much to back it up, but with the benefit of hindsight we can soundly rule it out: not only has the rule persisted well past the decline and disappearance of in-flight telephones, in-flight telephones were never commercially successful to begin with.

Let's start with John Goeken. A 1984 Washington Post article tells us that "Goeken is what is called, predictably enough, an 'idea man.'" Being the "idea person" must not have had quite the same connotations back then; it was a good time for Goeken. In the 1960s, conversations with customers at his two-way radio shop near Chicago gave him an idea for a repeater network to allow truckers to reach their company offices via CB radio. This was the first falling domino in a series that led to the founding of MCI and the end of AT&T's long-distance monopoly. Goeken seems to have been the type who grew bored with success, and he left MCI to take on a series of new ventures. These included an emergency medicine messaging service, electrically illuminated high-viz clothing, and a system called the Mercury Network that built much of the inertia behind the surprisingly advanced computerization of florists [1]. "Goeken's ideas have a way of turning into dollars, millions of them," the Washington Post continued. That was certainly true of MCI, but every ideas guy had their misses. One of the impressive things about Goeken was his ability to execute with speed and determination, though, so even his failures left their mark. This was especially true of one of his ideas that, in the abstract, seemed so solid: what if there were payphones on commercial flights?

Goeken's experience with MCI and two-way radios proved valuable, and starting in the mid-1970s he developed prototype air-ground radiotelephones. In its first iteration, "Airfone" consisted of a base unit installed on an aircraft bulkhead that accepted a credit card and released a cordless phone. When the phone was returned to the base station, the credit card was returned to the customer. This equipment was simple enough, but it would require an extensive ground network to connect callers to the telephone system.
The infrastructure part of the scheme fell into place when long-distance communications giant Western Union signed on with Goeken Communications to launch a 50/50 joint venture under the name Airfone, Inc. Airfone was not the first to attempt air-ground telephony---AT&T had pursued the same concept in the 1970s, but abandoned it after resistance from the FCC (unconvinced the need was great enough to justify frequency allocations) and the airline industry (which had formed a pact, blessed by the government, that prohibited the installation of telephones on aircraft until such time as a mature technology was available to all airlines). Goeken's hard-headed attitude, exemplified in the six-year legal battle he fought against AT&T to create MCI, must have helped to defeat this resistance. Goeken brought technical advances, as well. By 1980, there actually was an air-ground radiotelephone service in general use. The "General Aviation Air-Ground Radiotelephone Service" allocated 12 channels (of duplex pairs) for radiotelephony from general aviation aircraft to the ground, and a company called Wulfsberg had found great success selling equipment for this service under the FliteFone name. Wulfsberg FliteFones were common equipment on business aircraft, where they let executives shout "buy" and "sell" from the air. Goeken referred to this service as evidence of the concept's appeal, but it was inherently limited by the 12 allocated channels. General Aviation Air-Ground Radiotelephone Service, which I will call AGRAS (this is confusing in a way I will discuss shortly), operated at about 450MHz. This UHF band is decidedly line-of-sight, but airplanes are very high up and thus can see a very long ways. The reception radius of an AGRAS transmission, used by the FCC for planning purposes, was 220 miles. This required assigning specific channels to specific cities, and there the limits became quite severe. Albuquerque had exactly one AGRAS channel available. New York City got three. Miami, a busy aviation area but no doubt benefiting from its relative geographical isolation, scored a record-setting four AGRAS channels. That meant AGRAS could only handle four simultaneous calls within a large region... if you were lucky enough for that to be the Miami region; otherwise capacity was even more limited. Back in the 1970s, AT&T had figured that in-flight telephones would be very popular. In a somewhat hand-wavy economic analysis, they figured that about a million people flew in the air on a given day, and about a third of them would want to make telephone calls. That's over 300,000 calls a day, clearly more than the limited AGRAS channels could handle... leading to the FCC's objection that a great deal of spectrum would have to be allocated to make in-flight telephony work. Goeken had a better idea: single-sideband. SSB is a radio modulation technique that allows a radio transmission to fit within a very narrow bandwidth (basically by suppressing half of the signal envelope), at the cost of a somewhat more fiddly tuning process for reception. SSB was mostly used down in the HF bands, where the low frequencies meant that bandwidth was acutely limited. Up in the UHF world, bandwidth seemed so plentiful that there was little need for careful modulation techniques... until Goeken found himself asking the FCC for 10 blocks of 29 channels each, a lavish request that wouldn't really fit anywhere in the popular UHF spectrum. The use of UHF SSB, pioneered by Airfone, allowed far more efficient use of the allocation. 
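That 220-mile planning radius is, incidentally, about what simple line-of-sight geometry predicts for an aircraft at cruising altitude. A quick back-of-the-envelope check (my arithmetic, not the article's):

    # Back-of-the-envelope radio horizon for an aircraft at cruising altitude.
    import math

    def horizon_km(height_m: float, k: float = 1.0) -> float:
        # Distance to the horizon for an antenna height_m above a smooth Earth.
        # k = 1.0 is the geometric horizon; k = 4/3 roughly accounts for
        # standard atmospheric refraction at VHF/UHF.
        effective_radius_km = 6371.0 * k
        return math.sqrt(2.0 * effective_radius_km * (height_m / 1000.0))

    cruise_altitude_m = 35_000 * 0.3048   # a typical airliner cruising altitude
    d = horizon_km(cruise_altitude_m)
    print(f"{d:.0f} km, about {d / 1.609:.0f} miles")   # ~370 km, ~230 miles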
In 1983, the FCC held hearings on Airfone's request for an experimental license to operate their SSB air-ground radiotelephone system in two allocations (separate air-ground and ground-air ranges) around 850MHz and 895MHz. The total spectrum allocated was about 1.5MHz in each of the two directions. The FCC assented and issued the experimental license in 1984, and Airfone was in business. Airfone initially planned 52 ground stations for the system, although I'm not sure how many were ultimately built---certainly 37 were in progress in 1984, at a cost of about $50 million. By 1987, the network had reportedly grown to 68. Airfone launched on six national airlines (a true sign of how much airline consolidation has happened in recent decades---there were six national airlines?), typically with four cordless payphones on a 727 or similar aircraft. The airlines received a commission on the calling rates, and Airfone installed the equipment at their own expense. Still, it was expected to be profitable... Airfone projected that 20-30% of passengers would have calls to make.

I wish I could share more detail on these ground stations, in part because I assume there was at least some reuse of existing Western Union facilities (WU operated a microwave network at the time and had even dabbled in cellular service in the 1980s). I can't find much info, though. The antennas for the 800MHz band would have been quite small, but the 1980s multiplexing and control equipment probably took a fair share of floorspace. Airfone was off to a strong start, at least in terms of installation base and press coverage. I can't say now how many users it actually had, but things looked good enough that in 1986 Western Union sold their share of the company to GTE. Within a couple of years, Goeken sold his share to GTE as well, reportedly as a result of disagreements with GTE's business strategy.

Airfone's SSB innovation was actually quite significant. At the same time, in the 1980s, a competitor called Skytel was trying to get a similar idea off the ground with the existing AGRAS allocation. It doesn't seem to have gone anywhere; I don't think the FCC ever approved it. Despite being an obvious concept, Airfone pretty much launched as a monopoly, operating under an experimental license that named them alone. Unsurprisingly there was some upset over this apparent show of favoritism by the FCC, including from AT&T, which vigorously opposed the experimental license. As it happened, the situation would be resolved by going the other way: in 1990, the FCC established the "commercial aviation air-ground service" which normalized the 800 MHz spectrum and made licenses available to other operators. That was six years after Airfone started their build-out, though, giving them a head start that severely limited competition. Still, AT&T was back. AT&T introduced a competing service called AirOne. AirOne was never as widely installed as Airfone but did score some customers including Southwest Airlines, which only briefly installed AirOne handsets on their fleet. "Only briefly" describes most aspects of AirOne, but we'll get to that in a moment. The suddenly competitive market probably gave GTE Airfone reason to innovate, and besides, a lot had changed in communications technology since Airfone was designed. One of Airfone's biggest limitations was its lack of true roaming: an Airfone call could only last as long as the aircraft was within range of the same ground station.
Airfone called this "30 minutes," but you can imagine that people sometimes started their call near the end of this window, and in practice the problem was reportedly much worse. Dropped calls were common, adding insult to the injury that Airfone was decidedly expensive. GTE moved towards digital technology and automation. 1991 saw the launch of Airfone GenStar, which used QAM digital modulation to achieve better call quality and tighter utilization within the existing bandwidth. Further, a new computerized network allowed calls to be handed off from one ground station to another. Capitalizing on the new capacity and reliability, the aircraft equipment was upgraded as well. The payphone-like cordless stations were gone, replaced by handsets installed in seatbacks. First class cabins often got a dedicated handset for every seat, while economy might have one handset on each side of a row. The new handsets offered RJ11 jacks, allowing the use of laptop modems while in-flight. Truly, it was the future.

During the 1990s, satellites were added to the Airfone network as well, improving coverage generally and making telephone calls possible on overseas flights. Of course, the rise of satellite communications also sowed the seeds of Airfone's demise. A company called Aircell, which started out using the cellular network to connect calls to aircraft, rebranded to Gogo and pivoted to satellite-based telephone services. By the late '90s, they were taking market share from Airfone, a trend that would only continue. Besides, for all of its fanfare, Airfone was not exactly a smash hit. Rates were very high, $5 a minute in the late '90s, giving Airfone a reputation as a ripoff that must have cut a great deal into that "20-30% of fliers" they hoped to serve. With the rise of cellphones, many preferred to wait until the aircraft was on the ground to use their own cellphone at a much lower rate. GTE does not seem to have released much in the way of numbers for Airfone, but it probably wasn't making them rich.

Goeken, returning to the industry, inadvertently proved this point. He aggressively lobbied the FCC to issue competitive licenses, and ultimately succeeded. His second company in the space, In-Flight Phone Inc., became one of the new competitors to his old company. In-Flight Phone did not last for long. Neither did AT&T AirOne. A 2005 FCC ruling paints a grim picture:

    Current 800 MHz Air-Ground Radiotelephone Service rules contemplate six competing licensees providing voice and low-speed data services. Six entities were originally licensed under these rules, which required all systems to conform to detailed technical specifications to enable shared use of the air-ground channels. Only three of the six licensees built systems and provided service, and two of those failed for business reasons.

In 2002, AT&T pulled out, and Airfone was the only in-flight phone left. By then, GTE had become Verizon, and GTE Airfone was Verizon Airfone. Far from a third of passengers, the CEO of Airfone admitted in an interview that a typical flight only saw 2-3 phone calls. Considering the minimum five-figure capital investment in each aircraft, it's hard to imagine that Airfone was profitable---even at $5 a minute. Airfone more or less faded into obscurity, but not without a detour into the press via the events of 9/11. Flight 93, which crashed in Pennsylvania, was equipped with Airfone and passengers made numerous calls.
Many of the events on board this aircraft were reconstructed with the assistance of Airfone records, and Claircom (the name of the operator of AT&T AirOne, which never seems to have been well marketed) also produced records related to other aircraft involved in the attacks. Most notably, flight 93 passenger Todd Beamer had a series of lengthy calls with Airfone operator Lisa Jefferson, through which he relayed many of the events taking place on the plane in real time. During these calls, Beamer seems to have coordinated the effort by passengers to retake control of the plane. The significance of Airfone and Claircom records to 9/11 investigations is such that 9/11 conspiracy theories may be one of the most enduring legacies of Claircom especially. In an odd acknowledgment of their aggressive pricing, Airfone decided not to bill for any calls made on 9/11, and temporarily introduced steep discounts (to $0.99 a minute) in the weeks after. This rather meager show of generosity did little to reverse the company's fortunes, though, and it was already well into a backslide.

In 2006, the FCC auctioned the majority of Airfone's spectrum to new users. The poor utilization of Airfone was a factor in the decision, as well as Airfone's relative lack of innovation compared to newer cellular and satellite systems. In fact, a large portion of the bandwidth was purchased by Gogo, who years later would use it to deliver in-flight WiFi. Another portion went to a subsidiary of JetBlue that provided in-flight television. Verizon announced the end of Airfone in 2006, pending an acquisition by JetBlue, and while the acquisition did complete, JetBlue does not seem to have continued Airfone's passenger airline service. A few years later, Gogo bought out JetBlue's communications branch, making them the new monopoly in 800MHz air ground radiotelephony. Gogo only offered telephone service for general aviation aircraft; passenger aircraft telephones had gone the way of the carphone.

It's interesting to contrast the fate of Airfone to its sibling, AGRAS. Depending on who you ask, AGRAS refers to the radio service or to the Air Ground Radiotelephone Automated Service operated by Mid-America Computer Corporation. What an incredible set of names. This was a situation a bit like ARINC, the semi-private company that for some time held a monopoly on aviation radio services. MACC had a practical monopoly on general aviation telephone service throughout the US, by operating the billing system for calls. MACC still exists today as a vendor of telecom billing software and this always seems to have been their focus---while I'm not sure, I don't believe that MACC ever operated ground stations, instead distributing rate payments to private companies that operated a handful of ground stations each. Unfortunately the history of this service is quite obscure and I'm not sure how MACC came to operate the system, but I couldn't resist the urge to mention the Mid-America Computer Corporation. AGRAS probably didn't make anyone rich, but it seems to have been generally successful. Wulfsberg FliteFones operating on the AGRAS network gave way to Gogo's business aviation phone service, itself a direct descendant of Airfone technology. The former AGRAS allocation at 450MHz somehow came under the control of a company called AURA Network Systems, which for some years has used a temporary FCC waiver of AGRAS rules to operate data services.
This year, the FCC began rulemaking to formally reallocate the 450MHz air ground allocation to data services for Advanced Air Mobility, a catch-all term for UAS and air taxi services that everyone expects to radically change the airspace system in coming years. New uses of the band will include command and control for long-range UAS, clearance and collision avoidance for air taxis, and ground and air-based "see and avoid" communications for UAS. This pattern, of issuing a temporary authority to one company and later performing rulemaking to allow other companies to enter, is not unusual for the FCC but does make an interesting recurring theme in aviation radio. It's typical for no real competition to occur, the incumbent provider having been given such a big advantage.

When reading about these legacy services, it's always interesting to look at the licenses. ULS has only nine licenses on record for the original 800 MHz air ground service, all expired and originally issued to Airfone (under both GTE and Verizon names), Claircom (operating company for AT&T AirOne), and Skyway Aircraft---this one an oddity, a Florida-based company that seems to have planned to introduce in-flight WiFi but not gotten all the way there. Later rulemaking to open up the 800MHz allocation to more users created a technically separate radio service with two active licenses, both held by AC BidCo. This is an intriguing mystery until you discover that AC BidCo is obviously a front company for Gogo, something they make no effort to hide---the legalities of FCC bidding processes are such that it's very common to use shell companies to hold FCC licenses, and we could speculate that AC BidCo is the Aircraft Communications Bidding Company, created by Gogo for the purpose of the 2006-2008 auctions. These two licenses are active for the former Airfone band, and Gogo reportedly continues to use some of the original Airfone ground stations. Gogo's air-ground network, which operates at 800MHz as well as in a 3GHz band allocated specifically to Gogo, was originally based on CDMA cellular technology. The ground stations were essentially cellular stations pointed upwards. It's not clear to me if this CDMA-derived system is still in use, but Gogo relies much more heavily on their Ku-band satellite network today.

The 450MHz licenses are fascinating. AURA is the only company to hold current licenses, but the 246 expired and cancelled licenses reveal the scale of the AGRAS business. Airground of Idaho, Inc., until 1999 held a license for an AGRAS ground station on Brundage Mountain near McCall, Idaho. The Arlington Telephone Company, until a 2004 cancellation, held a license for an AGRAS ground station atop their small telephone exchange in Arlington, Nebraska. AGRAS ground stations seem to have been a cottage industry, with multiple licenses to small rural telephone companies and even sole proprietorships. Some of the ground stations appear to have been the roofs of strip mall two-way radio installers. In another life, maybe I would be putting a 450MHz antenna on my roof to make a few dollars. Still, there were incumbents: numerous licenses belonged to SkyTel, which after the decline of AGRAS seems to have refocused on paging and, then, gone the same direction as most paging companies: an eternal twilight as American Messaging ("The Dependable Choice"), promoting innovation in the form of longer-range restaurant coaster pagers. In another life, I'd probably be doing that too.
[1] This is probably a topic for a future article, but the Mercury Network was a computerized system that Goeken built for a company called Florist's Telegraph Delivery (FTD). It was an evolution of FTD's telegraph system that allowed a florist in one city to place an order to be delivered by a florist in another city, thus enabling the long-distance gifting of flowers. There were multiple such networks and they had an enduring influence on the florist industry and broader business telecommunications.

2025-03-10 troposcatter

I have a rough list of topics for future articles, a scratchpad of two-word ideas that I sometimes struggle to interpret. Some items have been on that list for years now. Sometimes, ideas languish because I'm not really interested in them enough to devote the time. Others have the opposite problem: chapters of communications history with which I'm so fascinated that I can't decide where to start and end. They seem almost too big to take on.

One of these stories starts in another vast frontier: northeastern Canada. It was a time, rather unlike our own, of relative unity between Canada and the United States. Both countries had spent the later part of World War II planning around the possibility of an Axis attack on North America, and a ragtag set of radar stations had been built to detect inbound bombers. The US had built a series of stations along the border, and the Canadians had built a few north of Ontario and Quebec to extend coverage north of those population centers. Then the war ended and, as with so many WWII projects, construction stopped. Just a few years later, the USSR demonstrated a nuclear weapon and the Cold War was on. As with so many WWII projects, freshly anxious planners declared the post-war period over and blew the dust off of North American air defense plans. In 1950, US and Canadian defense leaders developed a new plan to consolidate and improve the scattershot radar early warning plan. This agreement would become the Pinetree Line, the first of three trans-Canadian radar fences jointly constructed and operated by the two nations. For the duration of the Cold War, and even to the present day, these radar installations formed the backbone of North American early warning and the locus of extensive military cooperation. The joint defense agreement between the US and Canada, solidified by the Manhattan Project's dependence on Canadian nuclear industry, grew into the 1958 establishment of the North American Air Defense Command (NORAD) as a binational joint military organization.

This joint effort had to rise to many challenges. Radar had earned its place as a revolutionary military technology during the Second World War, but despite the many radar systems that had been fielded, engineers' theoretical understanding of radar and RF propagation was pretty weak. I have written here before about over-the-horizon radar, the pursuit of which significantly improved our scientific understanding of radio propagation in the atmosphere... often by experiment, rather than model. A similar progression in RF physics would also benefit radar early warning in another way: communications.

One of the bigger problems with the Pinetree Line plan was the remote location of the stations. You might find that surprising; the later Mid-Canada and DEW lines were much further north and more remote. The Pinetree Line already involved stations in the far reaches of the maritime provinces, though, and to provide suitable warning to Quebec and the Great Lakes region stations were built well north of the population centers. Construction and operations would rely on aviation, but an important part of an early warning system is the ability to deliver the warning. Besides, ground-controlled interception had become the main doctrine in air defense, and it required not just an alert but real-time updates from radar stations for the most effective response.
Each site on the Pinetree Line would require a reliable real-time communications capability, and as the sites were built in the 1950s, some were a very long distance from telephone lines. Canada had only gained a transcontinental telephone line in 1932, seventeen years behind the United States (which by then had three different transcontinental routes and a fourth in progress), a delay owing mostly to the formidable obstacle of the Canadian Rockies. The leaders in Canadian long-distance communications were Bell Canada and the two railways (Canadian Pacific and Canadian National), and in many cases contracts had been let to these companies to extend telephone service to radar stations. The service was very expensive, though, and the construction of telephone cables in the maritimes was effectively ruled out due to the huge distances involved and uncertainty around the technical feasibility of underwater cables to Newfoundland due to the difficult conditions and extreme tides in the Gulf of St. Lawrence. The RCAF had faced a similar problem when constructing its piecemeal radar stations in Ontario and Quebec in the 1940s, and had addressed them by applying the nascent technology of point-to-point microwave relays. This system, called ADCOM, was built and owned by RCAF to stretch 1,400 miles between a series of radar stations and other military installations. It worked, but the construction project had run far over budget (and major upgrades performed soon after blew the budget even further), and the Canadian telecom industry had vocally opposed it on the principle that purpose-built military communications systems took government investment away from public telephone infrastructure that could also serve non-military needs. These pros and cons of ADCOM must have weighed on Pinetree Line planners when they chose to build a system directly based on ADCOM, but to contract its construction and operation to Bell Canada [1]. This was, it turned out, the sort of compromise that made no one happy: the Canadian military's communications research establishment was reluctant to cede its technology to Bell Canada, while Bell Canada objected to deploying the military's system rather than one of the commercial technologies then in use across the Bell System. The distinct lack of enthusiasm on the part of both parties involved was a bad omen for the future of this Pinetree Line communications system, but as it would happen, the whole plan was overcome by events. One of the great struggles of large communications projects in that era, and even today, is the rapid rate of technological progress. One of ADCOM's faults was that the immense progress Bell Labs and Western Electric made in microwave equipment during the late '40s meant that it was obsolete as soon as it went into service. This mistake would not be repeated, as ADCOM's maritimes successor was obsoleted before it even broke ground. A promising new radio technology offered a much lower cost solution to these long, remote spans. At the onset of the Second World War, the accepted theory of radio propagation held that HF signals could pass the horizon via ground wave propagation, curving to follow the surface of the Earth, while VHF and UHF signals could not. This meant that the higher-frequency bands, where wideband signals were feasible, were limited to line-of-sight or at least near-line-of-sight links... not more than 50 miles with ideal terrain, often less. We can forgive the misconception, because this still holds true today, as a rule of thumb. 
The catch is in the exceptions, the nuances, that during the war were already becoming a headache to RF engineers. First, military radar operators observed mysterious contacts well beyond the theoretical line-of-sight range of their VHF radar sets. These might have been dismissed as faults in the equipment (or the operator), but reports stacked up as more long-range radar systems were fielded. After the war, relaxed restrictions and a booming economy allowed radio to proliferate. UHF television stations, separated by hundreds of miles, unexpectedly interfered with each other. AT&T, well into deployment of a transcontinental microwave network, had to adjust its frequency planning after it was found that microwave stations sometimes received interfering signals from other stations in the chain... stations well over the horizon. This was the accidental discovery of tropospheric scattering.

The Earth's atmosphere is divided into five layers. We live in the troposphere, the lowest and thinnest of the layers, above which lies the stratosphere. Roughly speaking, the difference between these layers is that the troposphere becomes colder with height (due to increasing distance from the warm surface), while the stratosphere becomes warmer with height (due to decreasing shielding from the sun) [2]. In between is a local minimum of temperature, called the tropopause. The density gradients around the tropopause create a mirror effect, like the reflections you see when looking at an air-water boundary. The extensive turbulence and, well, weather present in the troposphere also refract signals on their way up and down, making the true course of radio signals reflecting off of the tropopause difficult to predict or analyze. Because of this turbulence, the effect has come to be known as scattering: radio signals sent upwards, towards the tropopause, will be scattered back downwards across a wide area. This effect is noticeable only at high frequencies, so it remained unknown until the widespread use of UHF and microwave, and was still only partially understood in the early 1950s.

The loci of radar technology at the time were Bell Laboratories and the MIT Lincoln Laboratory, and they both studied this effect for possible applications. Presaging one of the repeated problems of early warning radar systems, by the time Pinetree Line construction began in 1951 the Lincoln Laboratory was already writing proposals for systems that would obsolete it. In fact, construction would begin on both of the Pinetree Line's northern replacements before the Pinetree Line itself was completed. Between rapid technological development and military planners in a sort of panic mode, the early 1950s were a very chaotic time. Underscoring the ever-changing nature of early warning was the timeline of Pinetree Line communications: as the Pinetree Line microwave network was in planning, the Lincoln Laboratory was experimenting with troposcatter communications. By the time the first stations in Newfoundland completed construction, Bell Laboratories had developed an experimental troposcatter communications system. This new means of long-range communications would not be ready in time for the first Pinetree Line stations, so parts of the original ADCOM-based microwave network would have to be built. Still, troposcatter promised to complete the rest of the network at significantly reduced cost.
The US Air Force, wary of ADCOM's high costs and more detached from Canadian military politics, aggressively lobbied for the adoption of troposcatter communications for the longest and most challenging Pinetree Line links. Bell Laboratories, long a close collaborator with the Air Force, was well aware of troposcatter's potential for early warning radar. Bell Canada and Bell Laboratories agreed to evaluate the system under field conditions, and in 1952 experimental sites were installed in Newfoundland. These tests found reliable performance over 150 miles, far longer than achievable by microwave and---rather conveniently---about the distance between Pinetree Line radar stations. These results suggested that the Pinetree Line could go without an expensive communications network in the traditional sense, instead using troposcatter to link the radar stations directly to each other.

Consider a comparison laid out by the Air Force: one of the most complex communications requirements for the Pinetree Line was a string of stations running not east-west like the "main" line, but north-south from St. John's, Newfoundland to Frobisher Bay, Nunavut. These stations were critical for detection of Soviet bombers approaching over the pole from the northwest, otherwise a difficult gap in radar coverage until the introduction of radar sites in Greenland. But the stations covered a span of over 1,000 miles, most of it in formidably rugged and remote arctic coastal terrain. The proposed microwave system would require 50 relay stations, almost all of which would be completely new construction. Each relay's construction would have to be preceded by the construction of a harbor or airfield for access, and then establishment of a power plant, to say nothing of the ongoing logistics of transporting fuel and personnel for maintenance. The proposed troposcatter system, on the other hand, required only ten relays. All ten would be colocated with radar stations, and could share infrastructure and logistical considerations.

Despite the clear advantages of troposcatter and its selection by the USAF, the Canadian establishment remained skeptical. One cannot entirely blame them, considering that troposcatter communications had only just been demonstrated in the last year. Still, the USAF was footing most of the bill for the overall system (and paying entirely for the communications aspect, depending on how you break down the accounting) and had considerable sway. In 1954, well into construction of the radar stations (several had already been commissioned), the Bell Canada contract for communications was amended to add troposcatter relay in addition to the original microwave scheme. Despite the weaselly contracting, the writing was on the wall and progress on microwave relay stations almost stopped. By the latter part of 1954, the microwave network was abandoned entirely. Bell Canada moved at incredible speed to complete the world's first troposcatter long-distance route, code named Pole Vault.

One of the major downsides of troposcatter communications is its inefficiency. Only a very small portion of the RF energy reaching the tropopause is reflected, and of that, only a small portion is reflected in the right direction. Path loss from transmitter to receiver for long links is over -200 dB, compared to, say, -130 dB for a microwave link. That difference looks smaller than it is; dB is a logarithmic comparison and the decrease from -130 dB to -200 dB is a factor of ten million. The solution is to go big.
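To make the "factor of ten million" concrete: decibels are a logarithmic expression of a power ratio, so the 70 dB gap between those two path-loss figures converts back to 10^(70/10). A quick check (my arithmetic, not the author's):

    # Convert the dB path-loss figures quoted above back to a linear power ratio.
    def db_to_power_ratio(db: float) -> float:
        return 10.0 ** (db / 10.0)

    microwave_db = -130.0
    troposcatter_db = -200.0
    factor = db_to_power_ratio(microwave_db) / db_to_power_ratio(troposcatter_db)
    print(f"{factor:,.0f}")   # 10,000,000 -- the factor of ten million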
Pole Vault's antennas were manufactured as a rush order by D. S. Kennedy Co. of Massachusetts. Thirty-six were required, generally four per site for transmit and receive in each direction. Each antenna was a 60' aluminum parabolic dish held up on edge by truss legs. Because of the extreme weather at the coastal radar sites, the antennas were specified to operate in a 120 knot wind---or a 100 knot wind with an inch of ice buildup. These were operating requirements, so the antenna had not only to survive these winds, but to keep flexing and movements small enough to not adversely impact performance. The design of the antennas was not trivial; even after analysis by both Kennedy Co. and Bell Canada, after installation some of the rear struts supporting the antennas buckled. All high-wind locations received redesigned struts.

To drive the antennas, Radio Engineering Laboratories of Long Island furnished radio sets with 10 kW of transmit power. Both of these were established companies, especially for military systems, but were still small compared to Bell System juggernauts like Western Electric. They had built the equipment for the experimental sites, though, and the timeline for construction of Pole Vault was so short that planners did not feel there was time to contract larger manufacturers. This turn of events made Kennedy Co. and REL the leading experts in troposcatter equipment, which became their key business in the following decade.

The target of the contract, signed in January of 1954, was to have Pole Vault operational by the end of that same year. Winter conditions, and indeed spring and fall conditions, are not conducive to construction on the arctic coast. All of the equipment for Pole Vault had to be manufactured in the first half of the year, and as weather improved and ice cleared in the mid-summer, everything was shipped north and installation work began. Both militaries had turned down involvement in the complex and time-consuming logistics of the project, so Bell Canada chartered ships and aircraft and managed an incredibly complex schedule. To deliver equipment to sites as early as possible, the icebreaker CCGS D'Iberville was chartered. C-119 and DC-3 aircraft served alongside numerous small boats and airplanes. All told, it took about seven months to manufacture and deliver equipment to the Pole Vault sites, and six months to complete construction. Construction workers, representing four or five different contractors at each site and reaching about 120 workers per site during peak activity, had to live in construction camps that could still be located miles from the station. Grounded ships, fires, frostbite, and of course poor morale led to complications and delays. At one site, Saglek, project engineers recorded a full 24-hour day with winds continuously above 75 miles per hour, and then weeks later, a gust of 135 mph was observed. Repairs had to be made to the antennas and buildings before they were even completed.

In a remarkable feat of engineering and construction, the Pole Vault system was completed and commissioned more or less on schedule: amended into the contract in January of 1954, commissioning tests of the six initial stations were successfully completed in February of 1955. Four additional stations were built to complete the chain, and Pole Vault was declared fully operational in December of 1956 at a cost of $24.6 million (about $290 million today).
Pole Vault operated at various frequencies between 650 and 800 MHz, the wide range allowing for minimal frequency reuse---interference was fairly severe, since each station's signal scattered and could be received by stations further down the line in ideal (or as the case may be, less than ideal) conditions. Frequency division multiplexing equipment, produced by Northern Electric (Nortel) based on microwave carrier systems, offered up to 36 analog voice circuits. The carrier systems were modular, and some links initially supported only 12 circuits, while later operational requirements led to an upgrade to 70 circuits.

Over the following decades, the North Atlantic remained a critical challenge for North American air defense. It also became the primary communications barrier between the US and Canada and European NATO allies. Because Pole Vault provided connections across such a difficult span, several later military communications systems relied on Pole Vault as a backhaul connection.

An inventory of the Saglek site, typical of the system, gives an idea of the scope of each of the nine primary stations. This is taken from "Special Contract," a history by former Bell Canada engineer A. G. Lester:

(1) Four parabolic antennas, 60 feet in diameter, each mounted on seven mass concrete footings.
(2) An equipment building 62 by 32 feet to house electronic equipment, plus a small (10 by 10 feet) diversity building.
(3) A diesel building 54 by 36 feet, to house three 125 KVA (kilovolt amperes) diesel driven generators.
(4) Two 2500 gallon fuel storage tanks.
(5) Raceways to carry waveguide and cables.
(6) Enclosed corridors interconnecting buildings, total length in this case 520 feet.

Since the Pole Vault stations were colocated with radar facilities, barracks and other support facilities for the crews were already provided for. Of course, you can imagine that the overall construction effort at each site was much larger, including the radar systems as well as cantonment for personnel.

Pole Vault would become a key communications system in the maritime provinces, remaining in service until 1975. Its reliable performance in such a challenging environment was a powerful proof of concept for troposcatter, a communications technique first imagined only a handful of years earlier. Even as Pole Vault reached its full operating capability in late 1956, other troposcatter systems were under construction. Much the same, and not unrelated, other radar early warning systems were under construction as well. The Pinetree Line, for all of its historical interest and its many firsts, ended as a footnote in the history of North American air defense. More sophisticated radar fences were already under design by the time Pinetree Line construction started, leaving some Pinetree stations to operate for just four years. It is a testament to Pole Vault that it outlived much of the radar system it was designed to support, becoming an integral part of not one, or even two, but at least three later radar early warning programs. Moreover, Pole Vault became a template for troposcatter systems elsewhere in Canada, in Europe, and in the United States. But we'll have to talk about those later.

[1] Alexander Graham Bell was Scottish-Canadian-American, and lived for some time in rural Ontario and later Montreal. As a result, Bell Canada is barely younger than its counterpart in the United States and the early history of the two is more one of parallel development than the establishment of a foreign subsidiary.
Bell's personal habit of traveling back and forth between Montreal and Boston makes the early interplay of the two companies a bit confusing. In 1956, the TAT-1 telephone cable would conquer the Atlantic Ocean to link the US to Scotland via Canada, incidentally making a charming gesture to Bell's personal journey.

[2] If you have studied weather a bit, you might recognize these as positive and negative lapse rates. The positive lapse rate in the troposphere is a major driver of the various phenomena we call "weather," and the tropopause forms a natural boundary that keeps most weather within the troposphere.

2025-03-01 the cold glow of tritium

I have been slowly working on a book. Don't get too excited, it is on a very niche topic and I will probably eventually barely finish it and then post it here. But in the mean time, I will recount some stories which are related, but don't quite fit in. Today, we'll learn a bit about the self-illumination industry. At the turn of the 20th century, it was discovered that the newfangled element radium could be combined with a phosphor to create a paint that glowed. This was pretty much as cool as it sounds, and commercial radioluminescent paints like Undark went through periods of mass popularity. The most significant application, though, was in the military: radioluminescent paints were applied first to aircraft instruments and later to watches and gunsights. The low light output of radioluminescent paints had a tactical advantage (being very difficult to see from a distance), while the self-powering nature of radioisotopes made them very reliable. The First World War was thus the "killer app" for radioluminescence. Military demand for self-illuminating devices fed a "radium rush" that built mines, processing plants, and manufacturing operations across the country. It also fed, in a sense much too literal, the tragedy of the "Radium Girls." Several self-luminous dial manufacturers knowingly subjected their women painters to shockingly irresponsible conditions, leading inevitably to radium poisoning that disfigured, debilitated, and ultimately killed. Today, this is a fairly well-known story, a cautionary tale about the nuclear excess and labor exploitation of the 1920s. That the situation persisted into the 1940s is often omitted, perhaps too inconvenient to the narrative that a series of lawsuits, and what was essentially the invention of occupational medicine, headed off the problem in the late 1920s. What did happen after the Radium Girls? What was the fate of the luminous radium industry? A significant lull in military demand after WWI was hard on the radium business, to say nothing of a series of costly settlements to radium painters despite aggressive efforts to avoid liability. At the same time, significant radium reserves were discovered overseas, triggering a price collapse that closed most of the mines. The two largest manufacturers of radium dials, Radium Dial Company (part of Standard Chemical who owned most radium mines) and US Radium Corporation (USRC), both went through lean times. Fortunately, for them, the advent of the Second World War reignited demand for radioluminescence. The story of Radium Dial and USRC doesn't end in the 1920s---of course it doesn't, luminous paints having had a major 1970s second wind. Both companies survived, in various forms, into the current century. In this article, I will focus on the post-WWII story of radioactive self-illumination and the legacy that we live with today. During its 1920s financial difficulties, the USRC closed the Orange, New Jersey plant famously associated with Radium Girls and opened a new facility in Brooklyn. In 1948, perhaps looking to manage expenses during yet another post-war slump, USRC relocated again to Bloomsburg, Pennsylvania. The Bloomsburg facility, originally a toy factory, operated through a series of generational shifts in self-illuminating technology. The use of radium, with some occasional polonium, for radioluminescence declined in the 1950s and ended entirely in the 1970s. The alpha radiation emitted by those elements is very effective in exciting phosphors but so energetic that it damages them. 
A longer overall lifespan, and somewhat better safety properties, could be obtained by the use of a beta emitter like strontium or tritium. While strontium was widely used in military applications, civilian products shifted towards tritium, which offered an attractive balance of price and half life. USRC handled almost a dozen radioisotopes in Bloomsburg, many of them due to diversified operations during the 1950s that included calibration sources, ionizers, and luminous products built to various specific military requirements. The construction of a metal plating plant enabled further diversification, including foil sources used in research, but eventually became an opportunity for vertical integration. By 1968, USRC had consolidated to only tritium products, with an emphasis on clocks and watches.

Radioluminescent clocks were a huge hit, in part because of their practicality, but fashion was definitely a factor. Millions of radioluminescent clocks were sold during the '60s and '70s, many of them by Westclox. Westclox started out as a typical clock company (the United Clock Company in 1885), but joined the atomic age through a long-lived partnership with the Radium Dial Company. The two companies were so close that they became physically close as well: Radium Dial's occupational health tragedy played out in Ottawa, Illinois, a town Radium Dial had chosen as its headquarters due to its proximity to Westclox in nearby Peru [1]. Westclox sold clocks with radioluminescent dials from the 1920s to probably the 1970s, but one of the interesting things about this corner of atomic history is just how poorly documented it is. Westclox may have switched from radium to tritium at some point, and definitely abandoned radioisotopes entirely at some point. Clock and watch collectors, a rather avid bunch, struggle to tell when. Many consumer radioisotopes are like this: it's surprisingly hard to know if they even are radioactive.

Now, the Radium Dial Company itself folded entirely under a series of radium poisoning lawsuits in the 1930s. Simply being found guilty of one of the most malevolent labor abuses of the era would not stop free enterprise, though, and Radium Dial's president founded a legally distinct company called Luminous Processes just down the street. Luminous Processes is particularly notable for having continued the production of radium-based clock faces until 1978, making them the last manufacturer of commercial radioluminescent radium products. This also presents compelling circumstantial evidence that Westclox continued to use radium paint until sometime around 1978, which lines up with the general impressions of luminous dial collectors.

While the late '70s were the end of Radium Dial, USRC was just beginning its corporate transformation. From 1980 to 1982, a confusing series of spinoffs and mergers led to USR Industries, parent company of Metreal, parent company of Safety Light Corporation, which manufactured products to be marketed and distributed by Isolite. All of these companies were ultimately part of USR Industries, the former USRC, but the org chart sure did get more complex. The Nuclear Regulatory Commission expressed some irritation in their observation, decades later, that they weren't told about any of this restructuring until they noticed it on their own. Safety Light, as expressed by the name, focused on a new application for tritium radioluminescence: safety signage, mostly self-powered illuminated exit signs and evacuation signage for aircraft.
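That "attractive balance of price and half life" is easy to put numbers on, at least on the half-life side. As a rough illustration (my own, not from the article; half-life figures are standard reference values), here is how much of the original activity of each isotope remains fifty years after a dial was painted or a vial was filled:

    # Rough decay comparison; half-lives (in years) are standard reference values.
    half_lives = {"radium-226": 1600.0, "strontium-90": 28.8, "tritium (H-3)": 12.3}

    def fraction_remaining(half_life_years: float, elapsed_years: float) -> float:
        # Exponential decay: N/N0 = 2^(-t / t_half)
        return 2.0 ** (-elapsed_years / half_life_years)

    for isotope, t_half in half_lives.items():
        print(f"{isotope}: {fraction_remaining(t_half, 50.0):.1%} left after 50 years")
    # tritium: ~6%, strontium-90: ~30%, radium-226: ~98% -- radium is
    # effectively undiminished on human timescales, which is why it is now a
    # disposal liability rather than a product.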
Safety Light continued to manufacture tritium exit signs until 2007, when they shut down following some tough interactions with the NRC and the EPA. They had been, in the fashion typical of early nuclear industry, disposing of their waste by putting it in a hole in the ground. They had persisted in doing this much longer than was socially acceptable, and ultimately seem to have been bankrupted by their environmental obligations... obligations which then had to be assumed by the Superfund program. The specific form of illumination used in these exit signs, and by far the most common type of radioluminescence today, is the Gaseous Tritium Light Source or GTLS. GTLS are small glass tubes or vials, usually made with borosilicate glass, containing tritium gas and an internal coating of phosphor. GTLS are simple, robust, and due to the very small amount of tritium required, fairly inexpensive. They can be made large enough to illuminate a letter in an exit sign, or small enough to be embedded into a watch hand. Major applications include watch faces, gun sights, and the keychains of "EDC" enthusiasts. Plenty of GTLS manufacturers have come and gone over the years. In the UK, defense contractor Saunders-Roe got into the GTLS business during WWII. Their GTLS product line moved to Brandhurst Inc., which had a major American subsidiary. It is an interesting observation that the US always seems to have been the biggest market for GTLS, but their manufacture has increasingly shifted overseas. Brandhurst is no longer even British, having gone the way of so much of the nuclear world by becoming Canadian. A merger with Canadian company SRB created SRB Technologies in Pembroke, Ontario, which continues to manufacture GTLS today. Other Canadian GTLS manufacturers have not fared as well. Shield Source Inc., of Peterborough, Ontario, began filling GTLS vials in 1987. I can't find a whole lot of information on Shield Source's early days, but they seem to have mostly made tubes for exit signs, and perhaps some other self-powered signage. In 2012, the Canadian Nuclear Safety Commission (CNSC) detected a discrepancy in Shield Source's tritium emissions monitoring. I am not sure of the exact details, because CNSC seems to make less information public in general than the US NRC [2]. Here's what appears to have happened: tritium is a gas, which makes it tricky to safely handle. Fortunately, the activity of tritium is relatively low and its half life is relatively short. This means that it's acceptable to manage everyday leakage (for example when connecting and disconnecting things) in a tritium workspace by ventilating it to a stack, releasing it to the atmosphere for dilution and decay. The license of a tritium facility will specify a limit for how much radioactivity can be released this way, and monitoring systems (usually several layers of monitoring systems) have to be used to ensure that the permit limit is not exceeded. In the case of Shield Source, some kind of configuration error with the tritium ventilation monitoring system combined with a failure to adequately test and audit it. The CNSC discovered that during 2010 and 2011, the facility had undercounted their tritium emissions, and in fact exceeded the limits of their license. Air samplers located around the facility, some of which were also validated by an independent laboratory, did not detect tritium in excess of the environmental limits. 
This suggests that the excess releases probably did not have an adverse impact on human health or the environment. Still, exceeding license terms and then failing to report and correct the problem for two years is a very serious failure by a licensee. In 2012, when the problem was discovered, CNSC ordered Shield Source's license modified to prohibit actual tritium handling. This can seem like an odd maneuver but something similar can happen in the US. Just having radioisotope-contaminated equipment, storing test sources, and managing radioactive waste requires a license. By modifying Shield Source's license to prohibit tritium vial filling, the CNSC effectively shut the plant down while allowing Shield Source to continue their radiological protection and waste management functions. This is the same reason that long-defunct radiological facilities often still hold licenses from NRC in the US: they retain the licenses to allow them to store and process waste and contaminated materials during decommissioning. In the case of Shield Source, while the violation was serious, CNSC does not seem to have anticipated a permanent shutdown. The terms agreed in 2012 were that Shield Source could regain a license to manufacture GTLS if it produced for CNSC a satisfactory report on the root cause of the failure and actions taken to prevent a recurrence. Shield Source did produce such a report, and CNSC seems to have mostly accepted it with some comments requesting further work (the actual report does not appear to be public). Still, in early 2013, Shield Source informed CNSC that it did not intend to resume manufacturing. The license was converted to a one-year license to facilitate decommissioning. Tritium filling and ventilation equipment, which had been contaminated by long-term exposure to tritium, was "packaged" and disposed. This typically consists of breaking things down into parts small enough to fit into 55-gallon drums, "overpacking" those drums into 65-gallon drums for extra protection, and then coordinating with transportation authorities to ship the materials in a suitable way to a facility licensed to dispose of them. This is mostly done by burying them in the ground in an area where the geology makes groundwater interaction exceedingly unlikely, like a certain landfill on the Texas-New Mexico border near Eunice. Keep in mind that tritium's short half life means this is not a long-term geological repository situation; the waste needs to be safely contained for only, say, fifty years to get down to levels not much different from background. I don't know where the Shield Source waste went, CNSC only says it went to a licensed facility. Once the contaminated equipment was removed, drywall and ceiling and floor finishes were removed in the tritium handling area and everything left was thoroughly cleaned. A survey confirmed that remaining tritium contamination was below CNSC-determined limits (for example, in-air concentrations that would lead to a dose of less than 0.01 mSv/year for 9-5 occupational exposure). At that point, the Shield Source building was released to the landlord they had leased it from, presumably to be occupied by some other company. Fortunately tritium cleanup isn't all that complex. You might wonder why Shield Source abruptly closed down. I assume there was some back-and-forth with CNSC before they decided to throw in the towel, but it is kind of odd that they folded entirely during the response to an incident that CNSC seems to have fully expected them to survive. 
I suspect that a full year of lost revenue was just too much for Shield Source: by 2012, when all of this was playing out, the radioluminescence market had seriously declined. There are a lot of reasons. For one, the regulatory approach to tritium has become more and more strict over time. Radium is entirely prohibited in consumer goods, and the limit on tritium activity is very low. Even self-illuminating exit signs now require NRC oversight in the US, as discussed shortly. Besides, public sentiment has increasingly turned against the Friendly Atom in consumer contexts, and you can imagine that people are especially sensitive to the use of tritium in the classic institutional settings for self-powered exit signs: schools and healthcare facilities. At the same time, alternatives have emerged. Non-radioactive luminescent materials, the kinds of things we tend to call "glow in the dark," have greatly improved since WWII. Strontium aluminate is a typical choice today---the inclusion of strontium might suggest otherwise, but strontium aluminate uses a stable natural isotope of strontium, Sr-88, and is not radioactive. Strontium aluminate has mostly displaced radioluminescence in safety applications, and for example the FAA has long allowed it for safety signage and path illumination on aircraft. Keep in mind that these luminescent materials are not self-powered. They must be "charged" by exposure to light. Minor adaptations are required, for example a requirement that the cabin lights in airliners be turned on for a certain period of time before takeoff, but in practice these limitations are considered preferable to the complexity and risks involved in the use of radioisotopes. You are probably already thinking that improving electronics has also made radioluminescence less relevant. Compact, cool-running, energy-efficient LEDs in a wide variety of packages and form factors mean that a lot of traditional applications of radioluminescence are now simply electric. Here's just a small example: in the early days of LCD digital watches, it was not unusual for higher-end models to use a radioluminescent source as a backlight. Today that's just nonsensical: a digital watch needs a power source anyway, and even in the cheapest Casios a single LED offers a reasonable alternative. Radioluminescent digital watches were very short-lived. Now that we've learned about a few historic radioluminescent manufacturers, you might have a couple of questions. Where were the radioisotopes actually sourced? And why does Ontario come up twice? These are related. From the 1910s to the 1950s, radioluminescent products mostly used radium sourced from Standard Chemical, who extracted it from mines in the Southwest. The domestic radium mining industry collapsed by 1955 due to a combination of factors: declining demand after WWII, cheaper radium imported from Brazil, and a broadly changing attitude towards radium that led the NRC to note in the '90s that we might never again find the need to extract it. Radium has a very long half-life that makes it considerably more difficult to manage than strontium or tritium. Today, you could say that the price of radium has gone negative, in that you are far more likely to pay an environmental management company to take it away (at rather high prices) than to buy more. But what about tritium? Tritium is not really naturally occurring; there technically is some natural tritium, but it's at extremely low concentrations and very hard to get at. 
But, as it happens, irradiating water produces a bit of tritium, and nuclear reactors incidentally irradiate a lot of water. With suitable modifications, the tritium produced as a byproduct of civilian reactors can be concentrated and sold. Ontario Hydro has long had facilities to perform this extraction, and recently built a new plant at the Darlington Nuclear Station that processes heavy water shipped from CANDU reactors throughout Ontario. The primary purpose of this plant is to reduce environmental exposure from the release of "tritiated" heavy water; it produces more tritium than can reasonably be sold, so much of it is stored for decay. The result is that tritium is fairly abundant and cheap in Ontario. Besides SRB Technologies, which packages tritium from Ontario Hydro into GTLS, another major manufacturer of GTLS is the Swiss company mb-microtec. mb-microtec is the parent of watch brand Traser and GTLS brand Trigalight, and seems to be one of the largest sources of consumer GTLS overall. Many of the tritium keychains you can buy, for example, use tritium vials manufactured by mb-microtec. NRC documents suggest that mb-microtec contracts a lot of their finished product manufacturing to a company in Hong Kong, and that some of the finished products you see using their GTLS (like watches and fobs) are in fact white-labeled from that plant, but the documents unfortunately don't make the original source of the tritium clear. mb-microtec has the distinction of operating the only recycling plant for tritium gas, and press releases surrounding the new recycling operation say they purchase the rest of their tritium supply. I assume that comes from the civilian nuclear power industry in Switzerland, which has several major reactors operating. A number of other manufacturers produce GTLS primarily for military applications, with some safety signage side business. And then there is, of course, the nuclear weapons program, which consumes the largest volume of tritium in the US. The US's tritium production facility for much of the Cold War actually shut down in 1988, one of the factors in most GTLS manufacturers being overseas. In the interim period, the sole domestic tritium supply was recycling of tritium in dismantled weapons and other surplus equipment. Since tritium has such a short half-life, this situation cannot persist indefinitely, and tritium production was resumed in 2004 at the Tennessee Valley Authority's Watts Bar nuclear generating station. Tritium extracted from that plant is currently used solely by the Department of Energy, primarily for the weapons program. Finally, let's discuss the modern state of radioluminescence. GTLS, based on tritium, are the only type of radioluminescence available to consumers. All importation and distribution of GTLS requires an NRC license, although companies that only distribute products that have been manufactured and tested by another licensee fall under a license exemption category that still requires NRC reporting but greatly simplifies the process. Consumers that purchase these items have no obligations to the NRC. Major categories of devices under these rules include smoke detectors, detection instruments and small calibration sources, and self-luminous products using tritium, krypton, or promethium. You might wonder, "how big of a device can I buy under these rules?" The answer to that question is a bit complicated, so let me explain my understanding of the rules using a specific example. 
Let's say you buy a GTLS keychain from massdrop or wherever people get EDC baubles these days [3]. The business you ordered it from almost certainly did not make it, and is acting as an NRC exempt distributor of a product. In NRC terms, your purchase of the product is not the "initial sale or distribution"; that already happened when the company you got it from ordered it from their supplier. Their supplier, or possibly someone further up in the chain, does need to hold a license: an NRC specific license is required to manufacture, process, produce, or initially transfer or sell tritium products. This is the reason that overseas companies like SRB and mb-microtec hold NRC licenses; this is the only way for consumers to legally receive their products. It is important to note the word "specific" in "NRC specific license." These licenses are very specific; the NRC approves each individual product including the design of the containment and labeling. When a license is issued, the individual products are added to a registry maintained by the NRC. When evaluating license applications, the NRC considers a set of safety objectives rather than specific criteria. For example, and if you want to read along we're in 10 CFR 32.23: In normal use and disposal of a single exempt unit, it is unlikely that the external radiation dose in any one year, or the dose commitment resulting from the intake of radioactive material in any one year, to a suitable sample of the group of individuals expected to be most highly exposed to radiation or radioactive material from the product will exceed the dose to the appropriate organ as specified in Column I of the table in § 32.24 of this part. So the rules are a bit soft, in that a licensee can argue back and forth with the NRC over means of calculating dose risk and so on. It is, ultimately, the NRC's discretion as to whether or not a device complies. It's surprisingly hard to track down original licensing paperwork for these products because of how frequently they are rebranded, and resellers never seem to provide detailed specifications. I suspect this is intentional, as I've found some cases of NRC applications that request trade secret confidentiality on details. Still, from the license paperwork I've found with hard numbers, it seems like manufacturers keep the total activity of GTLS products (e.g. a single GTLS sold alone, or the total of the GTLS in a watch) under 25 millicuries. There do exist larger devices, of which exit signs are the largest category. Self-powered exit signs are also manufactured under NRC specific licenses, but their activity and resulting risk is too high to qualify for exemption at the distribution and use stage. Instead, all users of self-powered safety signs hold them under a general license issued by the NRC (a general license meaning that it is implicitly issued to all such users). The general license is found in 10 CFR 31. Owners of tritium exit signs are required to designate a person to track and maintain the signs, to inform the NRC of that person's contact information and of any changes in that person, and to inform the NRC of any lost, stolen, or damaged signs. General licensees are not allowed to sell or otherwise transfer tritium signs, unless the signs remain in the same location (e.g. when a building is sold), in which case they must notify the NRC and disclose NRC requirements to the transferee. 
When tritium exit signs reach the end of their lifespan, they must be disposed of by transfer to an NRC license holder who can recycle them. The general licensee has to notify the NRC of that transfer. Overall, the intent of the general license regulations is to ensure that they are properly disposed of: reporting transfers and events to the NRC, along with serial numbers, allows the NRC to audit for signs that have "disappeared." Missing tritium exit signs are a common source of NRC event reports. It should also be said that, partly for these reasons, tritium exit signs are pretty expensive. Roughly $300 for a new one, and $150 to dispose of an old one. Other radioluminescent devices you will find are mostly antiques. Radium dials are reasonably common, anything with a luminescent dial made before, say, 1960 is probably radium, and specifically Westclox products to 1978 likely use radium. The half-life of radium-226 is 1,600 years, so these radium dials have the distinction of often still working, although the paints have usually held up more poorly than the isotopes they contain. These items should be handled with caution, since the failure of the paint creates the possibility of inhaling or ingesting radium. They also emit radon as a decay product, which becomes hazardous in confined spaces, so radium dials should be stored in a well-ventilated environment. Strontium-90 has a half-life of 29 years, and tritium 12 years, so vintage radioluminescent products using either have usually decayed to the extent that they no longer shine brightly or even at all. The phosphors used for these products will usually still fluoresce brightly under UV light and might even photoluminesce for a time after light exposure, but they will no longer stay lit in a dark environment. Fortunately, the decay that makes them not work also makes them much safer to handle. Tritium decays to helium-3 which is quite safe, strontium-90 to yttrium-90 which quickly decays to zirconium-90. Zirconium-90 is stable and only about as toxic as any other heavy metal. You can see why these radioisotopes are now much preferred over radium. And that's the modern story of radioluminescence. Sometime soon, probably tomorrow, I will be sending out my supporter's newsletter, EYES ONLY, with some more detail on environmental remediation at historic processing facilities for radioluminescent products. You can learn a bit more about how US Radium was putting their waste in a hole in the ground, and also into a river, and sort of wherever else. You know Radium Dial Company was up to similar abuses. [1] The assertion that Ottawa is conveniently close to Peru is one of those oddities of naming places after bigger, more famous places. [2] CNSC's whole final report on Shield Source is only 25 pages. A similar decommissioning process in the US would produce thousands of pages of public record typically culminating in EPA Five Year Reviews which would be, themselves, perhaps a hundred pages depending on the amount of post-closure monitoring. I'm not familiar with the actual law but it seems like most of the difference is that CNSC does not normally publish technical documentation or original data (although one document does suggest that original data is available on request). It's an interesting difference... the 25-page report, really only 20 pages after front matter, is a lot more approachable for the public than a 400 page set of close-out reports. 
Much of the standard documentation in the US comes from NEPA requirements, and NEPA is infamous in some circles for requiring exhaustive reports that don't necessarily do anything useful. But from my perspective it is weird for the formal, published documentation on closure of a radiological site to not include hydrology discussion, demographics, maps, and fifty pages of data tables as appendices. Ideally a bunch of one-sentence acceptance emails stapled to the end for good measure. When it comes to describing the actual problem, CNSC only gives you a couple of paragraphs of background. [3] Really channeling Guy Debord with my contempt for keychains here. During the writing of this article, I bought myself a tritium EDC bauble, so we're all in the mud together.
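One small addendum on the decay arithmetic that keeps coming up above. The "how long until this stuff stops mattering" question, whether for the roughly fifty-year containment horizon on tritium waste or for the vintage tritium and strontium-90 dials that have gone dark, is just exponential decay, and you can sanity-check it in a couple of lines of Python. The half-lives are roughly the ones cited in the article (I've used 12.3 years for tritium rather than the rounded 12); the snippet itself, including the names in it, is my own back-of-envelope illustration, not anything out of a CNSC or NRC document.

    # Rough decay arithmetic: what fraction of the original activity remains
    # after a given number of years? N(t) = N0 * 2^(-t / half_life)
    half_lives_years = {
        "tritium (H-3)": 12.3,
        "strontium-90": 29.0,
        "radium-226": 1600.0,
    }

    def remaining_fraction(half_life_years, elapsed_years):
        # Each elapsed half-life cuts the remaining activity in half.
        return 0.5 ** (elapsed_years / half_life_years)

    for isotope, half_life in half_lives_years.items():
        for years in (15, 50, 100):
            frac = remaining_fraction(half_life, years)
            print(f"{isotope}: {frac:.1%} of original activity after {years} years")

The numbers line up with the story above: fifty years takes tritium down to around six percent of its original activity, a century takes strontium-90 to under ten percent, and the same century barely dents radium-226, which is why radium dials often still work while their tritium and strontium contemporaries have faded (give or take the phosphor degradation the article mentions).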

2025-02-17 of psychics and securities

September 6th, 1996. Eddie Murray, of the Baltimore Orioles, is at bat. He has hit 20 home runs this season; 499 in his career. Anticipation for the 500th had been building for the last week. It would make Murray only the third player to reach 500 home runs and 3000 hits. His career RBI total would land him in the top ten in the history of the sport; his 500th home run was a statistical inevitability. Less foreseeable was the ball's glancing path through one of the most famous stories of the telephone business. Statistics only tell you what might happen. Michael Lasky had made a career, a very lucrative one, of telling people what would happen. Lasky would have that ball. As usual, he made it happen by advertising. Clearing right field, the ball landed in the hands of Dan Jones, a salesman from Towson, Maryland. Despite his vocation, he didn't immediately view his spectacular catch in financial terms. He told a newspaper reporter that he looked forward to meeting Murray, getting some signatures, some memorabilia. Instead, he got offers. At least three parties inquired about purchasing the ball, but the biggest offer came far from discreetly: an ad in the Baltimore Sun offering half a million dollars to whoever had it. Well, the offer was actually for a $25,000 annuity for 20 years, with a notional cash value of half a million but a time-adjusted value of $300,000 or less. I couldn't tell for sure, but given events that would follow, it seems unlikely that Jones ever received more than a few of the payments anyway. Still, the half a million made headlines, and NPV or not the sale price still set the record for a public sale of sports memorabilia. Lasky handled his new purchase with his signature sense of showmanship. He held a vote, a telephone vote: two 1-900 numbers, charging $0.95 a call, allowed the public to weigh in on whether he should donate the ball to the Babe Ruth Birthplace museum or display it in the swanky waterfront hotel he part-owned. The proceeds went to charity, and after the museum won the poll, the ball did too. The whole thing was a bit of a publicity stunt; Lasky thrived on unsubtle displays, and he could part with the money. His 1-900 numbers were bringing in over $100 million a year. Lasky's biography is obscure. Born 1942 in Brooklyn, he moved to Baltimore in the 1960s for some reason connected to a conspicuous family business: a blood bank. Perhaps the blood bank was a grift; it's hard to say now, but Lasky certainly had a unique eye for business. He was fond of horse racing, or really, of trackside betting. His father, a postal worker, had a proprietary theory of mathematics that he applied to predicting the outcomes of races. This art, or science, or sham, is called handicapping, and it became Lasky's first real success. Under the pseudonym Mike Warren, he published the Baltimore Bulletin, a handicapping newsletter advertising sure bets at all the region's racetracks. Well, there were some little details of this business, some conflicts of interest, a little infringement on the trademark of the Preakness. The details are neither clear nor important, but he had some trouble with the racing commissions in at least three states. He probably wouldn't have tangled with them at all if he weren't stubbornly trying to hold down a license to breed racehorses while also running a betting cartel, but Lasky was always driven more by passion than reason. Besides, he had other things going. Predicting the future in print was sort of an industry vertical, and he diversified. 
His mail-order astrology operation did well, before the Postal Service shut it down. He ran some sort of sports pager service, probably tied to betting, and I don't know what came of that. Perhaps on the back of a new year's resolution, he ran a health club, although it collapsed in 1985 with a bankruptcy case that revealed some, well, questionable practices. Strange that a health club just weeks away from bankruptcy would sell so many multi-year memberships, paid up front. And where did that money go, anyway? No matter, Lasky was onto the next thing. During the 1980s, changes had occurred that would grow Lasky's future-predicting portfolio into a staple of American media. First, in 1984, a Reagan-era FCC voted to end most regulation of television advertising. Gone was the limit of 16 minutes per hour of paid programming. An advertiser could now book entire half-hour schedule slots. Second, during the early '80s AT&T standardized and promoted a new model in telephone billing. The premium-rate number, often called a "1-900 number" after the NPA assigned for their use, incurred a callee-determined per-minute toll that the telco collected and paid on to the callee. It's a bit like a nascent version of "Web 3.0": telephone microtransactions, an innovative new way to pay for information services. It seems like a fair assumption that handicapping brought Lasky to the 1-900 racket, and he certainly did offer betting tip lines. But he had learned a thing or two from the astrology business, even if it ran afoul of Big Postal. Handicapping involved a surprising amount of work, and its marketing centered around the supposedly unique insight of the handicapper. Fixed recordings of advice could only keep people on a telephone line for so long, anyway. Astrology, though, involved even fewer facts, and even more opportunity to ramble. Best of all, there was an established industry of small-time psychics working out of their homes. With the magic of the telephone, every one of them could offer intuitive readings to all of America, for just $3.99 a minute. In 1990, Lasky's new "direct response marketing" company Inphomation, Inc. contracted five-time Grammy winner Dionne Warwick, celebrity psychic Linda Georgian, and a studio audience to produce a 30 minute talk-show "infomercial" promoting the Psychic Friends Network. Over the next few years, Inphomation conjoined with an ad booking agency and a video production company under the ownership of Mike Lasky's son, Marc Lasky. Inphomation spent as much as a million a week in television bookings, promoting a knitting machine and a fishing lure and sports tips, but most of all psychics. The original half-hour Psychic Friends Network spot is often regarded as the most successful infomercial in history. It remade Warwick's reputation, turning her from a singer to a psychic promoter. Calls to PFN's 1-900 number, charged at various rates that could reach over $200 an hour, brought in $140 million in revenue in its peak years of the mid 1990s. Lasky described PFN as an innovative new business model, but it's one we now easily recognize as "gig work." Telephone psychics, recruited mostly by referral from the existing network, worked from home, answering calls on their own telephones. Some read Tarot, some gazed into crystals, others did nothing at all, but the important thing was that they kept callers on the line. 
After the phone company's cut and Inphomation's cut, they were paid a share of the per-minute rate that automatically appeared on callers' monthly phone bills. A lot of people, and even some articles written in the last decade, link the Psychic Friends Network to "Miss Cleo." There's sort of a "Berenstain Bears" effect happening here; as widely as we might remember Miss Cleo's PFN appearances, there is no such thing. Miss Cleo was actually the head psychic and spokeswoman of the Psychic Reader's Network, which would be called a competitor to the Psychic Friends Network except that they didn't operate at the same time. In the early '00s, the Psychic Reader's Network collapsed in scandal. The limitations of its business model, a straightforward con, eventually caught up with it. It was sued out of business by a dozen states, then the FTC, then the FCC just for good measure. The era of the 1-900 number was actually rather short. By the late '80s, it had already become clear that the main application of premium rate calling was not stock quotations or paid tech support or referral services. It was scams. An extremely common genre of premium rate number, almost the lifeblood of the industry, was joke lines that offered telephonic entertainment in the voice of cartoon characters. Advertisements for these numbers, run during morning cartoons, advised children to call right away. Their parents wouldn't find out until the end of the month, when the phone bill came and those jokes turned out to have run $50 in 1983 dollars. Telephone companies were at first called complicit in the grift, but eventually bowed to pressure and, in 1987, made it possible for consumers to block 1-900 calling on their phone service. Of course, few telephone customers took advantage, and the children's joke line racket went on into the early '90s when a series of FTC lawsuits finally scared most of them off the telephone network. Adult entertainment was another touchstone of the industry, although adult lines did not last as long on 1-900 numbers as we often remember. Ripping off adults via their children is one thing; smut is a vice. AT&T and MCI, the dominant long distance carriers and thus the companies that handled most 1-900 call volume, largely cut off phone sex lines by 1991. Congress passed a law requiring telephone carriers to block them by default anyway, but of course left other 1-900 services as is. Phone sex lines were far from gone, of course, but they had to find more nuanced ways to make their revenue: international rates and complicit telephone carriers, dial-around long distance revenue, and whatever else they could think of that regulators hadn't caught up to yet. When Miss Cleo and her Psychic Reader's Network launched in 1997, psychics were still an "above board" use of the 1-900 number. The Psychic Readers lived to see the end of that era. In the late '90s, regulations changed to make unpaid 1-900 bills more difficult to collect. By 2001, some telephone carriers had dropped psychic lines from their networks as a business decision. The bill disputes simply weren't worth the hassle. In 2002, AT&T ended 1-900 billing entirely. Other carriers maintained premium-rate billing for another decade, but AT&T had most of the customer volume anyway. The Psychic Friends Network, blessed by better vision, struck at the right time. The years 1990 to 1997 were the golden age of 1-900 and the golden age of Inphomation. 
Inphomation's three-story office building in Baltimore had a conference room with a hand-painted ceiling fresco of cherubs and clouds. In the marble-lined lobby, a wall of 25 televisions played Inphomation infomercials on repeat. At its peak, the Psychic Friends Network routed calls to 2,000 independent psychic contractors. Dionne Warwick and Linda Georgian were famous television personalities; Warwick wasn't entirely happy about her association with the brand but she made royalties whenever the infomercial aired. Some customers spent tens of thousands of dollars on psychic advice. In 1993, a direct response marketing firm called Regal Communications made a deal to buy Inphomation. The deal went through, but just the next year Regal spun their entire 1-900 division off, and Inphomation exercised an option to become an independent company once again. A decade later, many of Regal's executives would face SEC charges over the details of Regal's 1-900 business, foreshadowing a common tendency of Psychic Friends Network owners. The psychic business, it turns out, was not so unlike the handicapping business. Both were unsavory. Both made most of their money off of addicts. In the press, Lasky talked about casual fans that called for two minutes here and there. What's $5 for a little fun? You might even get some good advice. Lawsuits, regulatory action, and newspaper articles told a different story. The "30 free minutes" promotion used to attract new customers only covered the first two minutes of each call, the rest were billed at an aggressive rate. The most important customers stayed on the line for hours. Callers had to sit through a few minutes of recordings, charged at the full rate, before being connected to a psychic who drew out the conversation by speaking slowly and asking inane questions. Some psychics seem to have approached their job rather sincerely, but others apparently read scripts. And just like the horse track, the whole thing moved a lot of money. Lasky continued to tussle with racing commissions over his thoroughbred horses. He bought a Mercedes, a yacht, a luxury condo, a luxury hotel whose presidential suite he used as an apartment, a half-million-dollar baseball. Well, a $300,000 baseball, at least. Eventually, the odds turned against Lasky. Miss Cleo's Psychic Reader's Network was just one of the many PFN lookalikes that popped up in the late '90s. There was a vacuum to fill, because in 1997, Inphomation was descending into bankruptcy. Opinions differ on Lasky's management and leadership. He was a visionary at least once, but later decisions were more variable. Bringing infomercial production in-house through his son's Pikesville Pictures might have improved creative control, but production budgets ballooned and projects ran late. PFN was still running mainly off of the Dionne Warwick shows, which were feeling dated, especially after a memorable 1993 Saturday Night Live parody featuring Christopher Walken. Lasky's idea for a radio show, the Psychic Friends Radio Network, had a promising trial run but then faltered on launch. Hardly a half dozen radio stations picked it up, and it lost Inphomation tens of millions of dollars. While they were years ahead of the telephone industry cracking down on psychics, PFN still struggled with a timeless trouble of the telephone network: billing. AT&T had a long-established practice of withholding a portion of 1-900 revenue for chargebacks. 
Some customers would see the extra charges on their phone bills and call in with complaints; the telephone company, not really being the beneficiary of the revenue anyway, was not willing to go to much trouble to keep it and often agreed to a refund. Holding, say, 10% of a callee's 1-900 billings in reserve allowed AT&T to offer these refunds without taking a loss. The psychic industry, it turned out, was especially prone to end-of-month customer dissatisfaction. Chargebacks were so frequent that AT&T raised Inphomation's withholding to 20%, 30%, and even 40% of revenue. At least, that's how AT&T told it. Lasky always seemed skeptical, alleging that the telephone companies were simply refusing to hand over money Inphomation was owed, giving themselves a free loan. Inphomation brokered a deal to move their business elsewhere, signing an exclusive contract with MCI. MCI underdelivered: they withheld just as much revenue, in violation of the contract according to Lasky, and besides the MCI numbers suffered from poor quality and dropped calls. At least, that's how Inphomation told it. Maybe the dropped calls were on Inphomation's end, and maybe they had a convenient positive effect on revenue as callers paid for a few minutes of recordings before being connected to no one at all. By the time the Psychic Friends Network fell apart, there was a lot of blame passed around. Lasky would eventually prevail in a lawsuit against MCI for unpaid revenue, but not until it was too late. By some combination of a lack of innovation in their product, largely unchanged since 1991, and increasing expenses for both advertising and its founder's lifestyle, Inphomation ended 1997 over $20 million in the red. In 1998 they filed for Chapter 11, and Lasky sought to reorganize the company as debtor-in-possession. The bankruptcy case brought out some stories of Lasky's personal behavior. While some employees stood by him as a talented salesman and apt finder of opportunities, others had filed assault charges. Those charges were later dropped, but by many accounts, he had quite a temper. Lasky's habit of not just carrying but brandishing a handgun around the office certainly raised eyebrows. Besides, his expensive lifestyle persisted much too far into Inphomation's decline. The bankruptcy judge's doubts about Lasky came to a head when it was revealed that he had tried to hide the company's assets. Much of the infrastructure and intellectual property of the Psychic Friends Network, and no small amount of cash, had been transferred to the newly formed Friends of Friends LLC in the weeks before bankruptcy. The judge also noticed some irregularities. The company controller had been sworn in as treasurer, signed the bankruptcy petition, and then resigned as treasurer in the space of a few days. When asked why the company chose this odd maneuver over simply having Lasky, the corporate president, sign the papers, Lasky had trouble recalling the whole thing. He also had trouble recalling loans Inphomation had taken, meetings he had scheduled, and actions he had taken. When asked about Inphomation's board of directors, Lasky didn't know who they were, or when they had last met. The judge used harsh language. "I've seen nothing but evidence of concealment, dishonesty, and less than full disclosure... I have no hope this debtor can reorganize with the present management." Lasky was removed, and a receiver appointed to manage Inphomation through a reorganization that quietly turned into a liquidation. 
And that was almost the end of the Psychic Friends Network. The bankruptcy is sometimes attributed to Lasky's failure to adapt to the times, but PFN wasn't entirely without innovation. The Psychic Friends Network first went online, at psychicfriendsnetwork.com, in 1997. This website, launched in the company's final days, offered not only the PFN's 1-900 number but a landing page for a telephone-based version of "Colorgenics." Colorgenics was a personality test based on the "Lüscher color test," an assessment designed by a Swiss psychotherapist based on nothing in particular. There are dozens of colorgenics tests online today, many of which make various attempts to extract money from the user, but none with quite the verve of a color quiz via 1-900 number. Inphomation just didn't quite make it in the internet age, or at least not directly. Most people know 1998 as the end of the Psychic Friends Network. The Dionne Warwick infomercials were gone, and that was most of PFN anyway. Without Linda Georgian, could PFN live on? Yes, it turns out, but not in its living form. The 1998 bankruptcy marked PFN's transition from a scam to the specter of a scam, and then to a whole different kind of scam. It was the beginning of the PFN's zombie years. In 1999, Inphomation's assets were liquidated at auction for $1.85 million, a far cry from the company's mid-'90s valuations in the hundreds of millions. The buyer: Marc Lasky, Michael Lasky's son. PFN assets became part of PFN Holdings Inc., with Michael Lasky and Marc Lasky as officers. PFN was back. It does seem that the Laskys made a brief second crack at a 1-900 business, but by 1999 the tide was clearly against expensive psychic hotlines. Telephone companies had started their crackdown, and attorney general lawsuits were brewing. Besides, after the buyout PFN Holdings didn't have much capital, and doesn't seem to have done much in the way of marketing. It's obscure what happened in these years, but I think the Laskys licensed out the PFN name. psychicfriendsnetwork.com, from 2002 to around 2009, directed visitors to Keen. Keen was the Inphomation of the internet age, what Inphomation probably would have been if they had run their finances a little better in '97. Backed by $60 million in venture funding from names like Microsoft and eBay, Keen was a classic dotcom startup. They launched in '99 with the ambitious and original idea of operating a web directory and reference library. Like most of the seemingly endless number of reference website startups, they had to pivot to something else. Unlike most of the others, Keen and their investors had a relaxed set of moral strictures about the company's new direction. In the early 2000s, keen.com was squarely in the ethical swamp that had been so well explored by the 1-900 business. Their web directory specialized in phone sex and psychic advice---all offered by 1-800 numbers with convenient credit card payment, a new twist on the premium phone line model that bypassed the vagaries and regulations of telephone billing. Keen is, incidentally, still around today. They'll broker a call or chat with empath/medium Citrine Angel, offering both angel readings and clairsentience, just $1 for the first 5 minutes and $2.99 a minute thereafter. That's actually a pretty good deal compared to the Psychic Friends Network's old rates. Keen's parent company, Ingenio, runs a half dozen psychic advice websites and a habit tracking app. 
But it says something about the viability of online psychics that Keen still seems to do most of their business via phone. Maybe the internet is not as much of a blessing for psychics as it seems, or maybe they just haven't found quite the right business model. The Laskys enjoyed a windfall during PFN's 2000s dormancy. In 2004, the Inphomation bankruptcy estate settled its lawsuit against bankrupt MCI for withholding payments. The Laskys made $4 million. It's hard to say where that money went, maybe to backing Marc's Pikesville Pictures production company. Pikesville picked up odd jobs producing television commercials, promotional documentaries, and an extremely strange educational film intended to prepare children to testify in court. I only know about this because parts of it appear in the video "Marc Lasky Demo Reel," uploaded to YouTube by "Mike Warren," the old horse race handicapping pseudonym of Michael Lasky. It has 167 views, and a single comment, "my dad made this." That was Gabriela Lasky, Marc's daughter. It's funny how much of modern life plays out on YouTube, where Marc's own account uploaded the full run of PFN infomercials. Some of that $4 million in MCI money might have gone into the Psychic Friends Networks' reboot. In 2009, Marc Lasky produced a new series of television commercials for PFN. "The legendary Psychic Friends Network is back, bigger and bolder than ever." An extremely catchy jingle goes "all new, all improved, all knowing: call the Psychic Friends Network." On PFN 2.0, you can access your favorite psychic whenever you wish, on your laptop, computer, on your mobile, or on your tablet. These were decidedly modernized, directing viewers to text a keyword to an SMS shortcode or visit psychicfriendsnetwork.com, where they could set up real-time video consultations with PFN's network of advisors. Some referred to "newpfn.com" instead, perhaps because it was easier to type, or perhaps there was some dispute around the Keen deal. There were still echoes of the original 1990s formula. The younger Lasky seemed to be hunting for a new celebrity lead like Warwick, but having trouble finding one. Actress Vivica A. Fox appeared in one spot, but then sent a cease and desist and went to the press alleging that her likeness was used without her permission. Well, they got her to record the lines somehow, but maybe they never paid. Maybe she found out about PFN's troubled reputation after the shoot. In any case, Lasky went hunting again and landed on Puerto Rican astrologer and television personality Walter Mercado. Mercado, coming off something like Liberace if he was a Spanish-language TV host, sells the Psychic Friends Network to a Latin beat and does a hell of a job of it. He was a recognizable face in the Latin-American media due to his astrology show, syndicated for many years by Univision, and he appears in a sparkling outfit that lets him deliver the line "the legend is back" with far more credibility than anyone else in the new PFN spots. He was no Dionne Warwick, though, and the 2009 PFN revival sorely lacked the production quality or charm of the '90s infomercial. It seems to have had little impact; this iteration of PFN is so obscure that many histories of the company are completely unaware of it. Elsewhere, in Nevada, an enigmatic figure named Ya Tao Chang had incorporated Web Wizards Inc. I can tell you almost nothing about this; Chang is impossible to research and Web Wizards left no footprints. 
All I know is that, somehow, Web Wizards made it to a listing on the OTC securities market. In 2012, PFN Holdings needed money and, to be frank, I think that Chang needed a real business. Or, at least, something that looked like one. In a reverse-merger, PFN Holdings joined Web Wizards and was renamed Psychic Friends Network Inc., PFNI on the OTC bulletin board. The deal was financed by Right Power Services, a British Virgin Islands company (or was it a Singapore company? accounts disagree), also linked to Chang. Supposedly, there were millions in capital. Supposedly, exciting things were to come for PFN. Penny stocks are stocks that trade at low prices, under $5 or even more classically under $1. Because these prices are too low to qualify for listing on exchanges, they trade on less formal, and less heavily regulated, over-the-counter markets. Related to penny stocks are microcap stocks, stocks of companies with very small market capitalizations. These companies, being small and obscure, typically see minuscule trading volumes as well. The low price, low volume, and thus high volatility of penny stocks make them notoriously prone to manipulation. Fraud is rampant on OTC markets, and if you look up a few microcap names it's not hard to fall into a sort of alternate corporate universe. There exists what I call the "pseudocorporate world," an economy that relates to "real" business the same way that pseudoscience relates to science. Pseudocorporations have much of the ceremony of their legitimate brethren, but none of the substance. They have boards, executives, and officers; they issue press releases; they publish annual reports. What they conspicuously lack is a product, or a business. Like NFTs or memecoins, they are purely tokens for speculation, and that speculation is mostly pumping and dumping. Penny stock pseudocompanies intentionally resemble real ones; indeed, their operation, to the extent that they have one, is to manufacture the appearance of operating. They announce new products that will never materialize; they announce new partnerships that will never amount to anything; they announce mergers that never close. They also rearrange their executive leadership with impressive frequency, due in no small part to the tendency of those leaders to end up in trouble with the SEC. All of this means that it's very difficult to untangle their history, and often hard to tell if they were once real companies that were hollowed out and exploited by con men, or whether they were a sham all along. Web Wizards does not appear to have had any purpose prior to its merger with PFN, and as part of the merger deal the Laskys became the executive leadership of the new company. They seem to have legitimately approached the transaction as a way to raise capital for PFN, because immediately after the merger they announced PFN's ambitious future. This new PFN would be an all-online operation using live webcasts and 1:1 video calling. The PFN website became a landing page for their new membership service, and the Laskys were primed to produce a new series of TV spots. Little more would ever be heard of this. In 2014, PFN Inc renamed itself to "Peer to Peer Network Inc.," announcing their intent to capitalize on PFN's early gig work model by expanding the company into other "peer to peer" industries. 
The first and only venture Peer to Peer Network (PTOP on OTC Pink) announced was an acquisition of 321Lend, a Silicon Valley software startup that intended to match accredited investors with individuals needing loans. Neither company seems to have followed up on the announcement, and a year later 321Lend announced its acquisition by Loans4Less, so it doesn't seem that the deal went through. I might be reading too much between the lines, but I think there was a conflict between the Laskys, who had a fairly sincere intent to operate the PFN as a business, and the revolving odd lot of investors and executives that seem to grow like mold on publicly-traded microcap companies. Back in 2010, a stockbroker named Joshua Sodaitis started work on a transit payment and routing app called "Freemobicard." In 2023, he was profiled in Business Leaders Review, one of dozens of magazines, podcasts, YouTube channels, and Medium blogs that exist to provide microcap executives with uncritical interviews that create the appearance of notability. The Review says Sodaitis "envisioned a future where seamless, affordable, and sustainable transportation would be accessible to all." Freemobicard, the article tells us, has "not only transformed the way people travel but has also contributed to easing traffic congestion and reducing carbon emissions." It never really says what Freemobicard actually is, but that doesn't matter, because by the time it gets involved in our story Sodaitis had completely forgotten about the transportation thing anyway. By 2015, disagreements between the psychic promoters and the stock promoters had come to a head. Attributing the move to differences in business vision, the Laskys bought the Psychic Friends Network assets out of Peer to Peer Network for $20,000 and resigned their seats on PTOP's board. At about the same time, PTOP announced a "licensing agreement" with a software company called Code2Action. The licensing agreement somehow involved Code2Action's CEO, Christopher Esposito, becoming CEO of PTOP itself. At this point Code2Action apparently rolled up operations, making the "licensing agreement" more of a merger, but the contract as filed with the SEC does indeed read as a license agreement. This is just one of the many odd and confusing details of PTOP's post-2015 corporate governance. I couldn't really tell you who Christopher Esposito is or where he came from, but he seems to have had something to do with Joshua Sodaitis, because he would eventually bring Sodaitis along as a board member. More conspicuously, Code2Action's product was called Mobicard---or Freemobicard, depending on which press release you read. This Mobicard was a very different one, though. Prior to the merger it was some sort of SMS marketing product (a "text this keyword to this shortcode" type of autoresponse/referral service), but as PTOP renamed itself to Mobicard Inc. (or at least announced the intent to; I don't think the renaming ever actually happened) the vision shifted to the lucrative world of digital business cards. Their mobile app, Mobicard 1.0, allowed business professionals to pay a monthly fee to hand out a link to a basic profile webpage with contact information and social media links. Kind of like Linktree, but with LinkedIn vibes, higher prices, and less polish. One of the things you'll notice about Mobicard is that, for a software company, they were pretty short on software engineers. 
Every version of the products (and they constantly announce new ones, with press releases touting Mobicard 1.5, 1.7, and 2.0) seems to have been contracted to a different low-end software house. There are demo videos of various iterations of Mobicard, and they are extremely underwhelming. I don't think it really mattered, PTOP didn't expect Mobicard to make money. Making money is not the point of a microcap pseudocompany. That same year, Code2Action signed another license agreement, much like the PTOP deal, but with a company called Cannabiz. Or maybe J M Farms Patient Group, the timeline is fuzzy. This was either a marketing company for medical marijuana growers or a medical marijuana grower proper, probably varying before and after they were denied a license by the state of Massachusetts on account of the criminal record of one of the founders. The whole cannabis aside only really matters because, first, it matches the classic microcap scam pattern of constantly pivoting to whatever is new and hot (which was, for a time, newly legalized cannabis), and second, because a court would later find that Cannabiz was a vehicle for securities fraud. Esposito had a few years of freedom first, though, to work on his new Peer to Peer Network venture. He made the best of it: PTOP issued a steady stream of press releases related to contracts for Mobicard development, the appointment of various new executives, and events as minor as having purchased a new domain name. Despite the steady stream of mentions in the venerable pages of PRNewswire, PTOP doesn't seem to have actually done anything. In 2015, 2016, 2017, and 2018, PTOP failed to complete financial audits and SEC reports. To be fair, in 2016 Esposito was fined nearly $100,000 by the SEC as part of a larger case against Cannabiz and its executives. He must have had a hard time getting to the business chores of PTOP, especially since he had been barred from stock promotion. In 2018, with PTOP on the verge of delisting due to the string of late audits, Joshua Sodaitis was promoted to CEO and Chairman of "Peer to Peer Network, Inc., (Stock Ticker Symbol PTOP) a.k.a. Mobicard," "the 1st and ONLY publicly traded digital business card company." PTOP's main objective became maintaining its public listing, and for a couple of years most discussion of the actual product stopped. In 2020, PTOP made the "50 Most Admired Companies" in something called "The Silicon Valley Review," which I assume is prestigious and conveniently offers a 10% discount if you nominate your company for one of their many respected awards right now. "This has been a monumental year for the company," Sodaitis said, announcing that they had been granted two (provisional) patents and appointed a new advisory board (including one member "who is self-identified as a progressive millennial" and another who was a retired doctor). The bio of Sodaitis mentions the Massachusetts medical marijuana venture, using the name of the company that was denied a license and shuttered by the SEC, not the reorganized replacement. Sodaitis is not great with details. It's hard to explain Mobicard because of this atmosphere of confusion. There was the complete change in product concept, which is itself confusing, since Sodaitis seems to have given the interview where he discussed Mobicard as a transportation app well after he had started describing it as a digital business card. Likewise, Mobicard has a remarkable number of distinct websites. 
freemobicard.com, mobicard.com, ptopnetwork.com, and mobicards.ca all seem oddly unaware of each other, and as the business plan continues to morph, are starting to disagree on what Mobicard even is. The software contractor or staff developing the product keep changing, as does the version of Mobicard they are about to launch. And on top of it all are the press releases. Oh, the press releases. There's nary a Silicon Valley grift unmentioned in PTOP's voluminous newswire output. Crypto, the Metaverse, and AI all make appearances as part of the digital business card vision. As for the tone, the headlines speak for themselves.
"MOBICARD Set for Explosive Growth in 2024"
"MobiCard's Digital Business Card Revolutionizes Networking & Social Media"
"MOBICARD Revolutionizes Business Cards"
"Peer To Peer Network, aka Mobicard™ Announces Effective Form C Filing with the SEC and Launch of Reg CF Crowdfunding Campaign"
"Joshua Sodaitis, Mobicard, Inc. Chairman and CEO: 'We’re Highly Committed to Keeping Our 'One Source Networking Solution' Relevant to the Ever-Changing Dynamics of Personal and Professional Networking'"
"PTOP ANNOUNCES THE RESUBMISSION OF THE IMPROVED MOBICARD MOBILE APPS TO THE APPLE STORE AND GOOGLE PLAY"
"Mobicard™ Experienced 832% User Growth in Two Weeks"
"Peer To Peer Network Makes Payment to Attorney To File A Provisional Patent for Innovative Technology"
Yes, this company issues a press release when they pay an invoice. To be fair, considering the history of bankruptcy, maybe that's more of an achievement than it sounds. In one "interview" with a "business magazine," Sodaitis talks about why Mobicard has taken so long to reach maturity. It's the Apple app store review, he explains, a story to which numerous iOS devs will no doubt relate. Besides, based on their press releases, they have had to switch contractors and completely redevelop the product multiple times. I didn't know that the digital business card was such a technical challenge. Sodaitis has been working on it for perhaps as long as fifteen years and still hasn't quite gotten to MVP. You know where this goes, don't you? After decades of shady characters, trouble with regulators, cosplaying at business, and outright scams, there's only one way the story could possibly end. All the way back in 2017, PTOP announced that they were "Up 993.75% After Launch Of Their Mobicoin Cryptocurrency." PTOP, the release continues, "saw a truly Bitcoin-esque move today, completely outdoing the strength of every other stock trading on the OTC market." PTOP's incredible market move was, of course, from $0.0005 to $0.0094. With 22 billion shares of common stock outstanding, that gave PTOP a valuation of over $200 million by the timeless logic of the crypto investor. Of course, PTOP wasn't giving up on their OTC listing, and with declining Bitcoin prices their interest in the cryptocurrency seems to have declined as well. That was the case until the political and crypto market winds shifted yet again. Late last year, PTOP was newly describing Mobicoin as a utility token. In November, they received a provisional patent on "A Cryptocurrency-Based Platform for Connecting Companies and Social Media Users for Targeted Marketing Campaigns." This is the latest version of Mobicard. As far as I can tell, it's now a platform where people are paid in cryptocurrency for tweeting advertising on behalf of a brand. PTOP had to beef up their crypto expertise for this exciting new frontier. 
Last year, they hired "Renowned Crypto Specialist DeFi Mark," proprietor of a cryptocurrency casino and proud owner of 32,000 Twitter followers. "With Peer To Peer Network, we're poised to unleash the power of blockchain, likely triggering a significant shift in the general understanding of web3," he said. "I have spoken to our Senior Architect Jay Wallace who is a genius at what he does and he knows that we plan to Launch Mobicard 1.7 with the MOBICOIN fully implemented shortly after the New President is sworn into office. I think this is a great time to reintroduce the world to MOBICOIN™ regardless of how I, or anyone feels about politics we can't deny the Crypto markets exceptional increase in anticipation to major regulatory transformations. I made it very clear to our Tech Team leader that this is a must to launch Mobicard™ 1.7." Well, they've outdone themselves. Just two weeks ago, they announced Mobicard 2.0. "With enhanced features like real-time analytics, seamless MOBICOIN™ integration, and enterprise-level onboarding for up to 999 million employees, this platform is positioned to set new standards in the digital business card industry." And how does that cryptocurrency integration work? "Look the Mobicard™ Reward system is simple. We had something like it previously implemented back in 2017. If a MOBICARD™ user shares his MOBICARD™ 50 times in one week then he will be rewarded with 50 MOBICOIN's. If a MOBICARD user attends a conference and shares his digital business card MOBICARD™ with 100 people he will be granted 100 MOBICOIN™'s." Yeah, it's best not to ask. I decided to try out this innovative new digital business card experience, although I regret to say that the version in the Play Store is only 1.5. I'm sure they're just waiting on app store review. The dashboard looks pretty good, although I had some difficulty actually using it. I have not so far been able to successfully create a digital business card, and most of the tabs just lead to errors, but I have gained access to four or five real estate brokers and CPAs via the "featured cards." One of the featured cards is for Christopher Esposito, listed as "Crypto Dev" at NRGai. Somewhere around 2019, Esposito brought Code2Action back to life again. He promoted a stock offering, talking up the company's bright future and many promising contracts. You might remember that this is exactly the kind of thing that the SEC got him for in 2016, and the SEC dutifully got him again. He was sentenced to five years of probation after a court found that he had lied about a plan to merge Code2Action with another company and taken steps to conceal the mass sale of his own stock in the company. NRGai, or NRG4ai (they're inconsistent), is a token that claims to facilitate the use of idle GPUs for crypto training. According to one analytics website, it has four holders and trades at $0.00. The Laskys have moved on as well. Michael Lasky is now well into retirement, but Marc Lasky is President & Director of Fernhill Corporation, "a publicly traded Web3 Enterprise Software Infrastructure company focused on providing cloud based APIs and solutions for digital asset trading, NFT marketplaces, data aggregation and DeFi/Lending". Fernhill has four subsidiaries, ranging from a cryptocurrency market platform to mining software. None appear to have real products. Fernhill is trading on OTC Pink at $0.00045. Joshua Sodaitis is still working on Mobicard. 
Mobicard 2.0 is set for a June 1 launch date, and promises to "redefine digital networking and position [PTOP] as the premier solution in the digital business card industry." "With these exciting developments, we anticipate a positive impact on the price of PTOP stock." PTOP is trading on OTC Pink at $0.00015. Michael Lasky was reportedly fond of saying that "you can get more money from people over the telephone than using a gun." As it happens, he wielded a gun anyway, but he had a big personality like that. One wonders what he would say about the internet. At some point, in his golden years, he relaunched his handicapping business Mike Warren Sports. The website sold $97/month subscriptions for tips on the 2015 NFL and NCAA football seasons, and the customer testimonials are glowing. One of them is from CNN's Larry King, although it doesn't read much like a testimonial, more like an admission that he met Lasky once. There might still be some hope. A microcap investor, operating amusingly as "FOMO Inc.," has been agitating to force a corporate meeting for PTOP. PTOP apparently hasn't held one in years, is once again behind on audits, and isn't replying to shareholder inquiries. Investors allege poor management by Sodaitis. The demand letter, in a list of CC'd shareholders the author claims to represent by proxy, includes familiar names: Mike and Marc Lasky. They never fully divested themselves of their kind-of-sort-of former company. A 1998 article in the Baltimore Sun discussed Lasky's history as a handicapper. It quotes a former Inphomation employee, whose preacher father once wore a "Mike Warren Sports" sweater at the mall. "A woman came up to him and said 'Oh, I believe in him, Mike Warren.' My father says, 'Well, ma'am, everybody has to believe in something.'" Lasky built his company on predicting the future, but of course, he was only ever playing the odds. Eventually, both turned on him. His company fell to a series of bad bets, and his scam fell to technological progress. Everyone has to believe in something, though, and when one con man stumbles there are always more ready to step in.

a month ago 19 votes
2025-02-02 residential networking over telephone

Recently, I covered some of the history of Ethernet's tenuous relationship with installed telephone cabling. That article focused on the earlier and more business-oriented products, but many of you probably know that there have been a number of efforts to install IP networking over installed telephone wiring in a residential and SOHO environment. There is a broader category of "computer networking over things you already have in your house," and some products remain pretty popular today, although seemingly less so in the US than in Europe. The grandparent of these products is probably PhoneNet, a fairly popular product introduced by Farallon in the mid-'80s. At the time, local area networking for microcomputers was far from settled. Just about every vendor had their own proprietary solution, although many of them had shared heritage and resulting similarities. Apple Computer was struggling with the situation just like everyone; in 1983 they introduced an XNS-based network stack for the Lisa called AppleNet and then almost immediately gave up on it [1]. Steve Jobs made the call to adopt IBM's token ring instead, which would have seemed like a pretty safe bet at the time because of IBM's general prominence in the computing industry. Besides, Apple was enjoying a period of warming relations with IBM, part of the 1980s-1990s pattern of Apple and Microsoft alternately courting IBM as their gateway into business computing. The vision of token ring as the Apple network standard died the way a lot of token ring visions did, to the late delivery and high cost of IBM's design. While Apple was waiting around for token ring to materialize, they sort of stumbled into their own LAN suite, AppleTalk [2]. AppleTalk was basically an expansion of the unusually sophisticated peripheral interconnect used by the Macintosh to longer cable runs. Apple put a lot of software work into it, creating a pretty impressive zero-configuration experience that did a lot to popularize the idea of LANs outside of organizations large enough to have dedicated network administrators. The hardware was a little more, well, weird. In true Apple fashion, AppleTalk launched with a requirement for weird proprietary cables. To be fair, one of the reasons for the system's enduring popularity was its low cost compared to Ethernet or token ring. They weren't price gouging on the cables the way they might seem to today. Still, they were a decided inconvenience, especially when trying to connect machines across more than one room. One of the great things about AppleTalk, in this context, is that it was very slow. As a result, even though the physical layer was basically RS-422, the electrical requirements for the cabling were pretty relaxed. Apple had already taken advantage of this for cost reduction, using a shared signal ground on the long cables rather than the dedicated differential pairs typical for RS-422. A hobbyist realized that you could push this further, and designed a passive dongle that used telephone wiring as a replacement for Apple's more expensive dongle and cables. He filed a patent and sold it to Farallon, who introduced the product as PhoneNet. PhoneNet was a big hit. It was cheaper than Apple's solution for the same performance, and even better, because AppleTalk was already a bus topology it could be used directly over the existing parallel-wired telephone cabling in a typical house or small office. For a lot of people with heritage in the Apple tradition of computing, it'll be the first LAN they ever used. 
Larger offices even used it because of the popularity of Macs in certain industries and the simplicity of patching their existing telephone cables for AppleTalk use; in my teenage years I worked in an office suite in downtown Portland that hadn't seen a remodel for a while and still had telephone jacks labeled "PhoneNet" at the desks. PhoneNet had one important limitation compared to the network-over-telephone products that would follow: it could not coexist with telephony. Well, it could, in a sense, and was advertised as such. But PhoneNet signaled within the voice band, so it required dedicated telephone pairs. In a lot of installations, it could use the second telephone line that was often wired but not actually used. Still, it was a bust for a lot of residential installs where only one phone line was fully wired and already in use for phone calls. As we saw in the case of Ethernet, local area networking standards evolved very quickly in the '80s and '90s. IP over Ethernet became by far the dominant standard, so the attention of the industry shifted towards new physical media for Ethernet frames. While 10BASE-T Ethernet operated over category 3 telephone wiring, that was of little benefit in the residential market. Commercial buildings typically had "home run" telephone wiring, in which each office's telephone pair ran directly to a wiring closet. In residential wiring of the era, this method was almost unheard of, and most houses had their telephone jacks wired in parallel along a small number of linear segments (often just one). This created a cabling situation much like coaxial Ethernet, in which each telephone jack was a "drop" along a linear bus. The problem is that coaxial Ethernet relied on several different installation measures to make this linear bus design practical, and home telephone wiring had none of these advantages. Inconsistently spaced drops, side legs, and a lack of termination meant that reflections were a formidable problem. PhoneNet addressed reflections mainly by operating at a very low speed (allowing reflections to "clear out" between symbols), but such a low bitrate did not befit the 1990s. A promising solution to the reflection problem came from a company called Tut Systems. Tut's history is unfortunately obscure, but they seem to have been involved in what we would now call "last-mile access technologies" since the 1980s. Tut would later be acquired by Motorola, but not before developing a number of telephone-wiring based IP networks under names like HomeWire and LongWire. A particular focus of Tut was multi-family housing, which will become important later. I'm not even sure when Tut introduced their residential networking product, but it seems like they filed a relevant patent in 1995, so let's say around then. Tut's solution relied on pulse position modulation (PPM), a technique in which data is encoded by the length of the spacing between pulses. The principal advantage of PPM is that it allows a fairly large number of bits to be transmitted per pulse (by using, say, 16 potential pulse positions to encode 4 bits). This allowed reflections to dissipate between pulses, even at relatively high bitrates. Following a bit of inter-corporate negotiation, the Tut solution became an industry standard under the HomePNA consortium: HomePNA 1.0. HomePNA 1.0 could transmit 1Mbps over residential telephone wiring with up to 25 devices. 
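To make that trade-off concrete, here is a minimal Python sketch of the pulse position modulation idea. It is my own illustration, not Tut's actual line coding or the HomePNA 1.0 framing: each 4-bit value is sent as a single pulse in one of 16 time slots, so the line is mostly quiet and reflections have time to die out between pulses.

# Minimal PPM sketch (illustration only, not the HomePNA 1.0 line code):
# each 4-bit nibble selects one of 16 slot positions, so a single pulse
# carries 4 bits and the line is quiet between pulses.

def ppm_encode(data: bytes, slots_per_symbol: int = 16) -> list[int]:
    """Return a list of slot values (0 = no pulse, 1 = pulse)."""
    slots = []
    for byte in data:
        for nibble in (byte >> 4, byte & 0x0F):   # two 4-bit symbols per byte
            frame = [0] * slots_per_symbol
            frame[nibble] = 1                      # pulse position encodes the value
            slots.extend(frame)
    return slots

def ppm_decode(slots: list[int], slots_per_symbol: int = 16) -> bytes:
    nibbles = []
    for i in range(0, len(slots), slots_per_symbol):
        frame = slots[i:i + slots_per_symbol]
        nibbles.append(frame.index(1))             # position of the pulse
    return bytes((hi << 4) | lo for hi, lo in zip(nibbles[0::2], nibbles[1::2]))

assert ppm_decode(ppm_encode(b"HomePNA")) == b"HomePNA"

The cost is also visible in the sketch: sixteen slot times are spent to convey four bits, which is part of why PPM tops out at modest bitrates.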
A few years later, HomePNA 1.0 was supplanted by HomePNA 2.0, which replaced PPM with QAM (a more common technique for high data rates over low bandwidth channels today) and in doing so improved to 10Mbps for potentially thousands of devices. I sort of questioned writing an article about all of these weird home networking media, because the end-user experience for most of them is pretty much the same. That makes it kind of boring to look at them one by one, as you'll see later. Fortunately, HomePNA has a property that makes it interesting: despite a lot of the marketing talking more about single-family homes, Tut seems to have envisioned HomePNA mainly as a last-mile solution for multi-family housing. That makes HomePNA a bit different than later offerings, landing in a bit of a gray area between the LAN and the access network. The idea is this: home run wiring is unusual in residential buildings, but in apartment and condo buildings, it is typical for the telephone lines of each unit to terminate in a wiring closet. This yields a sort of hybrid star topology where you have one line to each unit, and multiple jacks in each unit. HomePNA took advantage of this wiring model by offering a product category that is at once bland and rather unusual for this type of media: a hub. HomePNA hubs are readily available, even today in used form, with 16 or 24 HomePNA interfaces. The idea of a hub can be a little confusing for a shared-bus media like HomePNA, but each interface on these hubs is a completely independent HomePNA network. In an apartment building, you could connect one interface to the telephone line of each apartment, and thus offer high-speed (for the time) internet to each of your tenants using existing infrastructure. A 100Mbps Ethernet port on the hub then connected to whatever upstream access you had available. The use of the term "hub" is a little confusing, and I do believe that at least in the case of HomePNA 2.0, they were actually switching devices. This leads to some weird labeling like "hub/switch," perhaps a result of the underlying oddity of a multi-port device on a shared-media network that nonetheless performs no routing. There's another important trait of HomePNA 2.0 that we should discuss, at least an important one to the historical development of home networking. HomePNA 1.0 was designed not to cause problematic interference with telephone calls but still effectively signaled within the voice band. HomePNA 2.0's QAM modulation addressed this problem completely: it signaled between 4MHz and 10MHz, which put it comfortably above not only the voice band but the roughly up-to-1MHz band used by early ADSL. HomePNA could coexist with pretty much anything else that would have been used on a telephone line at the time. Over time, control of HomePNA shifted away from Tut Systems and towards a competitor called Epigram, who had developed the QAM modulation for HomePNA 2.0. Later part of Broadcom, Epigram also developed a 100Mbps HomePNA 3.0 in 2005. The wind was mostly gone from HomePNA's sails by that point, though, more due to the rise of WiFi than anything else. There was a HomePNA 3.1, which added support for operation over cable TV wiring, but shortly after, in 2009, the HomePNA consortium endorsed the HomeGrid Forum as a successor. A few years later, HomePNA merged into HomeGrid Forum and faded away entirely. The HomeGrid Forum is the organization behind G.hn, which is to some extent a successor of HomePNA, although it incorporates other precedents as well. 
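For contrast with the PPM sketch above, here is a toy 16-QAM mapper in Python. It only illustrates why QAM packs more bits into each symbol; the actual HomePNA 2.0 constellation sizes, band plan, and framing are more involved and aren't described in this post.

# Toy 16-QAM mapper: four bits pick one of 16 points on a 4x4 grid in the
# complex plane. Illustration only; not the HomePNA 2.0 modulation details.

def qam16_map(bits: list[int]) -> list[complex]:
    """Map a bit list (length divisible by 4) to 16-QAM symbols."""
    # Gray-coded 2-bit to amplitude mapping on each axis
    level = {(0, 0): -3, (0, 1): -1, (1, 1): +1, (1, 0): +3}
    symbols = []
    for i in range(0, len(bits), 4):
        b0, b1, b2, b3 = bits[i:i + 4]
        symbols.append(complex(level[(b0, b1)], level[(b2, b3)]))
    return symbols

print(qam16_map([1, 0, 0, 1, 0, 0, 1, 1]))  # -> [(3-1j), (-3+1j)]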
G.hn is actually fairly widely used for the near-zero name recognition it enjoys, and I can't help but suspect that that's a result of the rather unergonomic names that ITU standards tend to take on. "G.hn" kind-of-sort-of stands for Gigabit Home Networking, which is at least more memorable than the formal designation G.9960, but still isn't at all distinctive. G.hn is a pretty interesting standard. It's quite sophisticated, using a complex and modern modulation scheme (OFDM) along with forward error correction. It is capable of up to 2Gbps in its recent versions, and is kind of hard to succinctly discuss because it supports four distinct physical media: telephone, coaxial (TV) cable, powerline, and fiber. G.hn's flexibility is probably another reason for its low brand recognition, because it looks very different in different applications. Distinct profiles of G.hn involve different band plans and signaling details for each physical media, and it's designed to coexist with other protocols like ADSL when needed. Unlike HomePNA, multi-family housing is not a major consideration in the design of G.hn and combining multiple networks with a "hub/switch" is unusual. There's a reason: G.hn wasn't designed by access network companies like Tut; it was mostly designed in the television set-top box (STB) industry. When G.hn hit the market in 2009, cable and satellite TV was rapidly modernizing. The TiVo had established DVRs as nearly the norm, and then pushed consumers further towards the convenience of multi-room DVR systems. Providing multi-room satellite TV is actually surprisingly complex, because STV STBs (say that five times fast) actually reconfigure the LNA in the antenna as part of tuning. STB manufacturers, dominated by EchoStar (at one time part of Hughes and closely linked to the Dish Network), had solved this problem by making multiple STBs in a home communicate with each other. Typically, there is a "main" STB that actually interacts with the antenna and decodes TV channels. Other STBs in the same house use the coaxial cabling to communicate with the main STB, requesting video signals for specific channels. Multi-room DVR was basically an extension of this same concept. One STB is the actual DVR, and other STBs remote-control it, scheduling recordings and then having the main STB play them back, transmitting the video feed over the in-home coaxial cabling. You can see that this is becoming a lot like HomePNA, repurposing CATV-style or STV-style coaxial cabling as a general-purpose network in which peer devices can communicate with each other. As STB services have become more sophisticated, "over the top" media services and "triple play" combo packages have become an important and lucrative part of the home communications market. Structurally, these services can feel a little clumsy, with an STB at the television and a cable modem with telephone adapters somewhere else. STBs increasingly rely on internet-based services, so you then connect the STB to your WiFi so it can communicate via the same cabling but a different modem. It's awkward. G.hn was developed to unify these communications devices, and that's mostly how it's used. Providers like AT&T U-verse build G.hn into their cable television devices so that they can all share a DOCSIS internet connection. There are two basic ways of employing G.hn: first, you can use it to unify devices. 
The DOCSIS modem for internet service is integrated into the STB, and then G.hn media adapters can provide Ethernet connections wherever there is an existing cable drop. Second, G.hn can also be applied to multi-family housing, by installing a central modem system in the wiring closet and connecting each unit via G.hn. Providers that have adopted G.hn often use both configurations depending on the customer, so you see a lot of STBs these days with G.hn interfaces and extremely flexible configurations that allow them to either act as the upstream internet connection for the G.hn network, or to use a G.hn network that provides internet access from somewhere else. The same STB can thus be installed in either a single-family home or a multi-family unit. We should take a brief aside here to mention MoCA, the Multimedia over Coax Alliance. MoCA is a somewhat older protocol with a lot of similarities to G.hn. It's used in similar ways, and to some extent the difference between the two just comes down to corporate alliances: AT&T is into G.hn, but Cox, both US satellite TV providers, and Verizon have adopted MoCA, making it overall the more common of the two. I just think it's less interesting. Verizon FiOS prominently uses MoCA to provide IP-based television service to STBs, via an optical network terminal that provides MoCA to the existing CATV wiring. We've looked at home networking over telephone wiring, and home networking over coaxial cable. What about the electrical wiring? G.hn has a powerline profile, although it doesn't seem to be that widely used. Home powerline networking is much more often associated with HomePlug. Well, as it happens, HomePlug is sort of dead, the industry organization behind it having wrapped up operations in 2016. That might not be such a big practical problem, though, as HomePlug is closely aligned with related IEEE standards for data over powerline and it's widely used in embedded applications. As a consumer product, HomePlug will be found in the form of HomePlug AV2. AV2 offers Gigabit-plus data rates over good quality home electrical wiring, and compared to G.hn and MoCA it enjoys the benefit that standalone, consumer adapters are very easy to buy. HomePlug selects the most complex modulation the wiring can support (typically QAM with a large constellation size) and uses multiple OFDM carriers in the HF band, which it transmits onto the neutral conductor of an outlet. The neutral wiring in the average house is also joined at one location in the service panel, so it provides a convenient shared bus. On the downside, the installation quality of home electrical wiring is variable and the neutral conductor can be noisy, so some people experience very poor performance from HomePlug. Others find it to be great. It really depends on the situation. That brings us to the modern age: G.hn, MoCA, and HomePlug are all more or less competing standards for data networking using existing household wiring. As a consumer, you're most likely to use G.hn or MoCA if you have an ISP that provides equipment using one of the two. Standalone consumer installations, for people who just want to get Ethernet from one place to another without running cable, usually use HomePlug. It doesn't really have to be that way, G.hn powerline adapters have come down in price to where they compete pretty directly with HomePlug. Coaxial-cable and telephone-cable based solutions actually don't seem to be that popular with consumers any more, so powerline is the dominant choice. 
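The "pick the densest modulation the wiring can support" behavior is easier to see as a sketch. The thresholds and carrier count below are illustrative guesses rather than values from the HomePlug AV2 or G.hn specs; the point is just that a per-carrier SNR estimate gets mapped to a constellation size, and the sum over all OFDM carriers sets the link rate.

# Sketch of per-carrier "bit loading" (illustrative numbers, not spec values):
# the modem measures SNR on each OFDM carrier and picks the densest QAM
# constellation that carrier can sustain.

import random

SNR_THRESHOLDS = [            # (minimum SNR in dB, bits per symbol) -- assumed
    (28.0, 10),               # 1024-QAM
    (22.0, 8),                # 256-QAM
    (16.0, 6),                # 64-QAM
    (10.0, 4),                # 16-QAM
    (4.0, 2),                 # QPSK
]

def bits_for_snr(snr_db: float) -> int:
    for threshold, bits in SNR_THRESHOLDS:
        if snr_db >= threshold:
            return bits
    return 0                  # carrier too noisy, leave it unused

def tone_map_throughput(snr_per_carrier: list[float], symbol_rate: float) -> float:
    """Rough PHY rate in bits/s for a given per-carrier SNR estimate."""
    bits_per_ofdm_symbol = sum(bits_for_snr(s) for s in snr_per_carrier)
    return bits_per_ofdm_symbol * symbol_rate

# 917 carriers and 24k OFDM symbols/s are made-up numbers for illustration:
random.seed(1)
snrs = [random.uniform(0, 35) for _ in range(917)]
print(f"{tone_map_throughput(snrs, 24_000) / 1e6:.1f} Mbit/s")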
I can take a guess at the reason coax and telephone have fallen out of favor: electrical wiring can be of questionable quality, but in a lot of houses I see, the coaxial and telephone wiring is much worse. Some people have outright removed the telephone wiring from houses, and the coaxial plant has often been through enough rounds of cable and satellite TV installers that it's a bit of a project to sort out which parts are connected. A large number of cheap passive distribution taps, common in cable TV where the signal level from the provider is very high, can be problematic for coaxial G.hn or MoCA. It's usually not hard to fix those problems, but unless an installer from the ISP sorts it out it usually doesn't happen. For the consumer, powerline is what's most likely to work. And, well, I'm not sure that any consumers care any more. WiFi has gotten so fast that it often beats the data rates achievable by these solutions, and it's often more reliable to boot. HomePlug in particular has a frustrating habit of working perfectly except for when something happens, conditions degrade, the adapters switch modulations, and the connection drops entirely for a few seconds. That's particularly maddening behavior for gamers, who are probably the most likely to care about the potential advantages of these wired solutions over WiFi. I expect G.hn, MoCA, and HomePlug to stick around. All three have been written into various embedded standards and adopted by ISPs as part of their access network in multi-family contexts, or at least as an installation convenience in single-family ones. But I don't think anyone really cares about them any more, and they'll start to feel as antiquated as HomePNA. And here's a quick postscript to show how these protocols might adapt to the modern era: remember how I said G.hn can operate over fiber? Cheap fiber, too, the kind of plastic cables used by S/PDIF. The HomeGrid Forum is investigating the potential of G.hn over in-home passive optical networks, on the theory that these passive optical networks can be cheaper (due to small conductor size and EMI tolerance) and more flexible (due to the passive bus topology) than copper Ethernet. I wouldn't bet money on it, given the constant improvement of WiFi, but it's possible that G.hn will come back around for "fiber in the home" internet service. [1] XNS was a LAN suite designed by Xerox in the 1970s. Unusually for the time, it was an openly published standard, so a considerable number of the proprietary LANs of the 1980s were at least partially based on XNS. [2] The software sophistication of AppleTalk is all the more impressive when you consider that it was basically a rush job. Apple was set to launch LaserWriter, and as I mentioned recently on Mastodon, it was outrageously expensive. LaserWriter was built around the same print engine as the first LaserJet and still cost twice as much, due in good part to its flexible but very demanding PostScript engine. Apple realized it would never sell unless multiple Macintoshes could share it---it cost nearly as much as three Mac 128ks!---so they absolutely needed to have a LAN solution ready. LaserWriter would not wait for IBM to get their token ring shit together. This is a very common story of 1980s computer networks; it's hard to appreciate now how much printer sharing was one of the main motivations for networking computers at all. 
There's this old historical theory that hasn't held up very well but is appealing in its simplicity, that civilization arises primarily in response to the scarcity of water and thus the need to construct irrigation works. You could say that microcomputer networking arises primarily in response to the scarcity of printers.

2 months ago 14 votes

More in technology

A tricky Commodore PET repair: tracking down 6 1/2 bad chips

In 1977, Commodore released the PET computer, a quirky home computer that combined the processor, a tiny keyboard, a cassette drive for storage, and a trapezoidal screen in a metal unit. The Commodore PET, the Apple II, and Radio Shack's TRS-80 started the home computer market with ready-to-run computers, systems that were called in retrospect the 1977 Trinity. I did much of my early programming on the PET, so when someone offered me a non-working PET a few years ago, I took it for nostalgic reasons. You'd think that a home computer would be easy to repair, but it turned out to be a challenge [1]. The chips in early PETs are notorious for failures and, sure enough, we found multiple bad chips. Moreover, these RAM and ROM chips were special designs that are mostly unobtainable now. In this post, I'll summarize how we repaired the system, in case it helps anyone else. When I first powered up the computer, I was greeted with a display full of random characters. This was actually reassuring since it showed that most of the computer was working: not just the monitor, but the video RAM, character ROM, system clock, and power supply were all operational. The Commodore PET started up, but the screen was full of garbage. With an oscilloscope, I examined signals on the system bus and found that the clock, address, and data lines were full of activity, so the 6502 CPU seemed to be operating. However, some of the data lines had three voltage levels, as shown below. This was clearly not good, and suggested that a chip on the bus was messing up the data signals. The scope shows three voltage levels on the data bus. Some helpful sites online [7] suggested that if a PET gets stuck before clearing the screen, the most likely cause is a failure of a system ROM chip. Fortunately, Marc has a Retro Chip Tester, a cool device designed to test vintage ICs: not just 7400-series logic, but vintage RAMs and ROMs. Moreover, the tester knows the correct ROM contents for a ton of old computers, so it can tell if a PET ROM has the right contents. The Retro Chip Tester showed that two of the PET's seven ROM chips had failed. These chips are MOS Technology MPS6540, a 2K×8 ROM with a weird design that is incompatible with standard ROMs. Fortunately, several people make adapter boards that let you substitute a standard 2716 EPROM, so I ordered two adapter boards, assembled them, and Marc programmed the 2716 EPROMs from online data files. The 2716 EPROM requires a bit more voltage to program than Marc's programmer supported, but the chips seemed to have the right contents (foreshadowing). The PET opened, showing the motherboard. The PET's case swings open with an arm at the left to hold it open like a car hood. The first two rows of chips at the front of the motherboard are the RAM chips. Behind the RAM are the seven ROM chips; two have been replaced by the ROM adapter boards. The 6502 processor is the large black chip behind the ROMs, toward the right. With the adapter boards in place, I powered on the PET with great expectations of success, but it failed in precisely the same way as before, failing to clear the garbage off the screen. Marc decided it was time to use his Agilent 1670G logic analyzer to find out what was going on. (Dating back to 1999, this logic analyzer is modern by Marc's standards.) 
He wired up the logic analyzer to the 6502 chip, as shown below, so we could track the address bus, data bus, and the read/write signal. Meanwhile, I disassembled the ROM contents using Ghidra, so I could interpret the logic analyzer output against the assembly code. (Ghidra is a program for reverse-engineering software that was developed by the NSA, strangely enough.) Marc wired up the logic analyzer to the 6502 chip. The logic analyzer provided a trace of every memory access from the 6502 processor, showing what it was executing. Everything went well for a while after the system was turned on: the processor jumped to the reset vector location, did a bit of initialization, tested the memory, but then everything went haywire. I noticed that the memory test failed on the first byte. Then the software tried to get more storage by garbage collecting the BASIC program and variables. Since there wasn't any storage at all, this didn't go well and the system hung before reaching the code that clears the screen. We tested the memory chips, using the Retro Chip Tester again, and found three bad chips. Like the ROM chips, the RAM chips are unusual: the MOS Technology 6550 static RAM, 1K×4. By removing the bad chips and shuffling the good chips around, we reduced the 8K PET to a 6K PET. This time, the system booted, although there was a mysterious 2×2 checkerboard symbol near the middle of the screen (foreshadowing). I typed in a simple program to print "HELLO", but the results were very strange: four floating-point numbers, followed by a hang. This program didn't work the way I expected. This behavior was very puzzling. I could successfully enter a program into the computer, which exercises a lot of the system code. (It's not like a terminal, where echoing text is trivial; the PET does a lot of processing behind the scenes to parse a BASIC program as it is entered.) However, the output of the program was completely wrong, printing floating-point numbers instead of a string. We also encountered an intermittent problem: after turning the computer on, the boot message would be complete gibberish, as shown below. Instead of the "*** COMMODORE BASIC ***" banner, random characters and graphics would appear. The garbled boot message. How could the computer be operating well for the most part, yet also completely wrong? We went back to the logic analyzer to find out. I figured that the gibberish boot message would probably be the easiest thing to track down, since that happens early in the boot process. Looking at the code, I discovered that after the software tests the memory, it converts the memory size to an ASCII string using a moderately complicated algorithm [2]. Then it writes the system boot message and the memory size to the screen. The PET uses a subroutine to write text to the screen. A pointer to the text message is held in memory locations 0071 and 0072. The assembly code (STX $71 followed by STY $72) stores the pointer (in the X and Y registers) into these memory locations. (In the Ghidra output, each line shows the address, the instruction bytes, and the symbolic assembler instructions.) For this code, you'd expect the processor to read the instruction bytes 86 and 71, and then write to address 0071. Next it should read the bytes 84 and 72 and write to address 0072. However, the logic analyzer output showed that something slightly different happened. The processor fetched instruction bytes 86 and 71 from addresses D5AE and D5AF, then wrote 00 to address 0071, as expected. 
Next, it fetched instruction bytes 84 and 72 as expected, but wrote 01 to address 007A, not 0072! This was a smoking gun. The processor had messed up and there was a one-bit error in the address. Maybe the 6502 processor issued a bad signal or maybe something else was causing problems on the bus. The consequence of this error was that the string pointer referenced random memory rather than the desired boot message, so random characters were written to the screen. Next, I investigated why the screen had a mysterious checkerboard character. I wrote a program to scan the logic analyzer output to extract all the writes to screen memory. Most of the screen operations made sense—clearing the screen at startup and then writing the boot message—but I found one unexpected write to the screen. In the assembly code (STY $5E followed by STX $66), the Y register should be written to zero-page address 5E, and the X register should be written to address 66, some locations used by the BASIC interpreter. However, the logic analyzer output showed a problem: the trace reads D3C8 → 8C, D3C9 → 5E, D3CA → 86 (all reads), followed by a write of FF to 865E. The first line should fetch the opcode 84 from address D3C8, but the processor received the opcode 8C from the ROM, the instruction to write to a 16-bit address. The result was that instead of writing to a zero-page address, the 6502 fetched another byte to write to a 16-bit address. Specifically, it grabbed the STX instruction (86) and used that as part of the address, writing FF (a checkerboard character) to screen memory at 865E [3] instead of to the BASIC data structure at 005E. Moreover, the STX instruction wasn't executed, since it was consumed as an address. Thus, not only did a stray character get written to the screen, but data structures in memory didn't get updated. It's not surprising that the BASIC interpreter went out of control when it tried to run the program. 
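To make the failure mode concrete, here is a toy Python rendition of what that one flaky ROM byte does to instruction decoding. The opcode values are real 6502 encodings (0x84 is STY zero page, 0x8C is STY absolute), but the "CPU" below is a drastic simplification for illustration, not an emulator of the actual boot code.

# One bit flipped: STY zero page (0x84) becomes STY absolute (0x8C), so the CPU
# consumes the next byte -- the 0x86 of the following STX -- as an address byte.

ROM = {0xD3C8: 0x8C, 0xD3C9: 0x5E, 0xD3CA: 0x86, 0xD3CB: 0x66}  # flaky byte at D3C8

def trace_store(pc: int, rom: dict[int, int]) -> tuple[int, int]:
    """Decode one STY instruction; return (target address, bytes consumed)."""
    opcode = rom[pc]
    if opcode == 0x84:                       # STY zero page: one operand byte
        return rom[pc + 1], 2
    if opcode == 0x8C:                       # STY absolute: two operand bytes, little-endian
        return rom[pc + 1] | (rom[pc + 2] << 8), 3
    raise NotImplementedError(hex(opcode))

addr, size = trace_store(0xD3C8, ROM)
print(f"write Y to ${addr:04X}, next opcode fetched from ${0xD3C8 + size:04X}")
# With the intended 0x84 the store goes to $005E and the STX at $D3CA still runs;
# with the flaky 0x8C it lands at $865E (screen memory) and the STX is swallowed.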
We concluded that a ROM was providing the wrong byte (8C) at address D3C8. This ROM turned out to be one of our replacements; the under-powered EPROM programmer had resulted in a flaky byte. Marc re-programmed the EPROM with a more powerful programmer. The system booted, but with much less RAM than expected. It turned out that another RAM chip had failed. Finally, we got the PET to run. I typed in a simple program to generate an animated graphical pattern, a program I remembered from when I was about 13 [4], and generated this output: Finally, the PET worked and displayed some graphics. Imagine this pattern constantly changing. In retrospect, I should have tested all the RAM and ROM chips at the start, and we probably could have found the faults without the logic analyzer. However, the logic analyzer gave me an excuse to learn more about Ghidra and the PET's assembly code, so it all worked out in the end. In the end, the PET had 6 bad chips: two ROMs and four RAMs. The 6502 processor itself turned out to be fine [5]. The photo below shows the 6 bad chips on top of the PET's tiny keyboard. On the top of each key, you can see the quirky graphical character set known as PETSCII [6]. As for the title, I'm counting the badly-programmed ROM as half a bad chip since the chip itself wasn't bad but it was functioning erratically. The bad chips sitting on top of the keyboard. Follow me on Bluesky (@righto.com) or RSS for updates. (I'm no longer on Twitter.) Thanks to Mike Naberezny for providing the PET. Thanks to TubeTime, Mike Stewart, and especially CuriousMarc for help with the repairs. Some useful PET troubleshooting links are in the footnotes [7].
Footnotes and references
[1] So why did I suddenly decide to restore a PET that had been sitting in my garage since 2017? Well, CNN was filming an interview with Bill Gates and they wanted background footage of the 1970s-era computers that ran the Microsoft BASIC that Bill Gates wrote. Spoiler: I didn't get my computer working in time for CNN, but Marc found some other computers.
[2] Converting a number to an ASCII string is somewhat complicated on the 6502. You can't quickly divide by 10 for the decimal conversion, since the processor doesn't have a divide instruction. Instead, the PET's conversion routine has hard-coded four-byte constants: -100000000, 10000000, -1000000, 100000, -10000, 1000, -100, 10, and -1. The routine repeatedly adds the first constant (i.e. subtracting 100000000) until the result is negative. Then it repeatedly adds the second constant until the result is positive, and so forth. The number of steps gives each decimal digit (after adjustment). The same algorithm is used with the base-60 constants: -2160000, 216000, -36000, 3600, -600, and 60. This converts the uptime count into hours, minutes, and seconds for the TIME$ variable. (The PET's basic time count is the "jiffy", 1/60th of a second.)
[3] Technically, the address 865E is not part of screen memory, which is 1000 characters starting at address 0x8000. However, the PET uses some shortcuts in address decoding, so 865E ends up the same as 825E, referencing the 7th character of the 16th line.
[4] Here's the source code for my demo program, which I remembered from my teenage programming. It simply displays blocks (black, white, or gray) with 8-fold symmetry, writing directly to screen memory with POKE statements. (It turns out that almost anything looks good with 8-fold symmetry.) The cryptic heart in the first PRINT statement is the clear-screen character.
[5] I suspected a problem with the 6502 processor because the logic analyzer showed that the 6502 read an instruction correctly but then accessed the wrong address. Eric provided a replacement 6502 chip but swapping the processor had no effect. However, reprogramming the ROM fixed both problems. Our theory is that the signal on the bus either had a timing problem or a voltage problem, causing the logic analyzer to show the correct value but the 6502 to read the wrong value. Probably the ROM had a weakly-programmed bit, causing the ROM's output for that bit either to sit at an intermediate voltage or to take too long to settle to the correct voltage. The moral is that you can't always trust the logic analyzer if there are analog faults.
[6] The PETSCII graphics characters are now in Unicode in the Symbols for Legacy Computing block.
[7] The PET troubleshooting site was very helpful. The Commodore PET's Microsoft BASIC source code is here, mostly uncommented. I mapped many of the labels in the source code to the assembly code produced by Ghidra to understand the logic analyzer traces. The ROM images are here. Schematics of the PET are here.
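The digit-extraction trick described in footnote [2] is easier to follow in code. This is a Python paraphrase of the idea (repeatedly add alternating-sign powers of ten and count the steps), not a transcription of the actual ROM routine; the leading-zero stripping at the end is my own addition.

# Digit extraction without a divide instruction: add alternating-sign
# constants and count the steps. Python paraphrase of footnote [2].

CONSTANTS = [-100000000, 10000000, -1000000, 100000,
             -10000, 1000, -100, 10, -1]

def to_decimal_digits(value: int) -> str:
    digits = []
    for constant in CONSTANTS:
        count = 0
        if constant < 0:
            while value >= 0:          # add the negative constant until we go negative
                value += constant
                count += 1
            digits.append(count - 1)   # adjustment: the last addition overshot
        else:
            while value < 0:           # add the positive constant until we're non-negative
                value += constant
                count += 1
            digits.append(10 - count)  # adjustment for the positive direction
    return "".join(str(d) for d in digits).lstrip("0") or "0"

assert to_decimal_digits(7654321) == "7654321"
assert to_decimal_digits(8096) == "8096"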

5 hours ago 1 votes
Humanities Crash Course Week 15: Boethius

In week 15 of the humanities crash course, we started making our way out of classical antiquity and into the Middle Ages. The reading for this week was Boethius's The Consolation of Philosophy, a book perhaps second only to the Bible in influencing Medieval thinking. I used the beautiful edition from Standard Ebooks.
Readings
Boethius was a philosopher, senator, and Christian born shortly after the fall of the Western Roman Empire. After a long, fruitful, and respectable life, he fell out of favor with the Ostrogothic king Theodoric and was imprisoned and executed without a trial. He wrote The Consolation while awaiting execution. Boethius imagines being visited in prison by a mysterious woman, Lady Philosophy, who helps him put his situation in perspective. He bemoans his luck. Lady Philosophy explains that he can't expect to have good fortune without bad fortune. She evokes the popular image of the Wheel of Fortune, whose turns sometimes bring benefits and sometimes curses. She argues that rather than focusing on fortune, Boethius should focus on the highest good: happiness. She identifies true happiness with God, who transcends worldly goods and standards. They then discuss free will — does it exist? Lady Philosophy argues that it does and that it doesn't conflict with God's eternal knowledge since God exists outside of time. And how does one square God's goodness with the presence of evil in the world? Lady Philosophy redefines power and punishment, arguing that the wicked are punished by their evil deeds: what may seem to us like a blessing may actually be a curse. God transcends human categories, including being in time. We can't know God's mind with our limited capabilities — an answer that echoes the Book of Job.
Audiovisual
Music: classical works related to death: Schubert's String Quartet No. 14 and Mozart's Requiem. I hadn't heard the Schubert quartet before; reading about it before listening helped me contextualize the music. I first heard Mozart's Requiem in one of my favorite movies, Miloš Forman's AMADEUS. It's long been one of my favorite pieces of classical music. A fascinating discovery: while re-visiting this piece in Apple's Classical Music app, I learned that the app presents in-line annotations for some popular pieces as the music plays. Listening while reading these notes helped me understand this work better. It's a great example of how digital media can aid understandability.
Art: Hieronymus Bosch, Albrecht Dürer, and Pieter Bruegel the Elder. I knew all three's work, but was more familiar with Bosch and Dürer than with Bruegel. These videos helped.
Cinema: among films possibly related to Boethius, Perplexity recommended Fred Zinnemann's A MAN FOR ALL SEASONS (1966), which won six Academy Awards including best picture. It's a biopic of Sir Thomas More (1478—1535). While well-shot, scripted, and acted, I found it uneven — but relevant.
Reflections
I can see why Perplexity would suggest pairing this movie with this week's reading. Both Boethius and More were upstanding and influential members of society unfairly imprisoned and executed for crossing their despotic rulers. (Theodoric and Henry VIII, respectively.) The Consolation of Philosophy had parallels with the Book of Job: both grapple with God's agency in a world where evil exists. Job's answer is that we're incapable of comprehending the mind of God. Boethius refines the argument by proposing that God exists outside of time entirely, viewing all events in a single, eternal act of knowing. 
While less philosophically abstract, the movie casts these themes in a more urgent light. More's crime is being principled and refusing to allow pressure from an authoritarian regime to compromise his integrity. At one point, he says: "I believe, when statesmen forsake their own private conscience for the sake of their public duties… they lead their country by a short route to chaos." Would that more people in leadership today had More's integrity. That said, learning about the film's historical context makes me think it paints him as more saintly than he likely was. Still, it offers a powerful portrayal of a man willing to pay the ultimate price for staying true to his beliefs.
Notes on Note-taking
ChatGPT failed me for the first time in the course. As I've done throughout, I asked the LLM for summaries and explanations as I read. I soon realized ChatGPT was giving me information for a different chapter than the one I was reading. The problem was with the book's structure. The Consolation is divided into five books; each includes a prose chapter followed by a verse poem. ChatGPT was likely trained on a version that numbered these sections differently than the one I was reading. It took considerable back and forth to get the LLM on track. At least it suggested useful steps to do so. Specifically, it asked me to copy the beginning sentence of each chapter so it could orient itself. After three or so chapters of this, it started providing accurate responses. The lesson: as good as LLMs are, we can't take their responses at face value. In a context like this — i.e., using it to learn about books I'm reading — it helps keep me on my toes, which helps me retain more of what I'm reading. But I'm wary of using AI for subjects where I have less competency. (E.g., medical advice.) Also new this week: I've started capturing Obsidian notes for the movies I'm watching. I created a new template based on the one I use for literature notes, replacing the metadata fields for the author and publisher with director and studio respectively.
Up Next
Gioia recommends Sun Tzu and Lao Tzu. I've read both a couple of times; I'll only revisit The Art of War at this time. (I read Ursula Le Guin's translation of the Tao Te Ching last year, so I'll skip it to make space for other stuff.) Again, there's a YouTube playlist for the videos I'm sharing here. I'm also sharing these posts via Substack if you'd like to subscribe and comment. See you next week!

13 hours ago 1 votes
Reading List 04/12/2025

Solar PV adoption in Pakistan, a sodium-ion battery startup closing up shop, Figure’s humanoid robot progress, an AI-based artillery targeting system, and more.

yesterday 3 votes
Australian Air Force

If You're Switched On, This is Paradise.

yesterday 4 votes
DSLogic U3Pro16 Review and Teardown

Introduction
The year was 2020 and offices all over the world shut down. A house remodel had just started, so my office moved from a comfortably air-conditioned corporate building to a very messy garage. Since I'm in the business of developing and debugging hardware, a few pieces of equipment came along for the ride, including a Saleae Logic Pro 16. I had the unit for work stuff, though I may once in a while have used it for some hobby-related activities too. There's no way around it: Saleae makes some of the best USB logic analyzers around. Plenty of competitors have matched or surpassed their digital features, but none have the ability to record the 16 channels in analog format as well. After corporate offices reopened, the Saleae went back to its original habitat and I found myself without a good 16-channel USB logic analyzer. Buying a Saleae for myself was out of the question: even after the $150 hobbyist discount, I can't justify the $1350 price tag. After looking around for a bit, I decided to give the DSLogic U3Pro16 from DreamSourceLab a chance. I bought it on Amazon for $299. In this blog post, I'll look at some of the features, my experience with the software, and I'll also open it up to discover what's inside.
The DSLogic U3Pro16
The DSLogic series currently consists of 3 logic analyzers:
- the $149 DSLogic Plus (16 channels)
- the $299 DSLogic U3Pro16 (16 channels)
- the $399 DSLogic U3Pro32 (32 channels)
The DSLogic Plus and U3Pro16 both have 16 channels, but acquisition memory of the Plus is only 256Mbits vs 2Gbits for the Pro, and it has to make do with USB 2.0 instead of a USB 3.0 interface, a crucial difference when streaming acquisition data straight to the PC to avoid the limitations of the acquisition memory. There's also a difference in sample rate, 400MHz vs 1GHz, but that's not important in practice. The only functional difference between the U3Pro16 and U3Pro32 is the number of channels. It's tempting to go for the 32 channel version but I've rarely had the need to record more than 16 channels at the same time and if I do, I can always fall back to my HP 1670G logic analyzer, a pristine $200 flea market treasure with a whopping 136 channels [1]. So the U3Pro16 it is!
In the Box
The DSLogic U3Pro16 comes with a nice, elongated hard case. Inside, you'll find:
- the device itself, with a slick aluminum enclosure
- a USB-C to USB-A cable
- 5 4-way probe cables and 1 3-way clock and trigger cable
- 18 test clips
Probe Cables and Clips
You read it right, my unit came with 5 4-way probe cables, not 4. I don't know if DreamSourceLab added one extra in case you lose one or if they mistakenly included one too many, but it's good to have a spare. The cables are slightly stiffer than those that come with a Saleae but not to the point that it adds a meaningful additional strain to the probe point. They're stiffer because each of the 16 probe wires carries both signal and ground, probably a thin coaxial cable, which lowers the inductance of the probe and reduces ringing when measuring signals with fast rise and fall times. 
In terms of quality, the probe cables are a step up from the Saleae ones. The case is long enough so that the probe cables can be stored without bending them. The quality of the test clips is not great, but they are no different than those of the 5 times more expensive Saleae Logic 16 Pro. Both are clones of the HP/Agilent logic analyzer grabbers that I got from eBay and will do the job, but I much prefer the ones from Tektronix. The picture below shows 4 different grabbers. From left to right: Tektronix, Agilent, Saleae and DSLogic ones. Compared to the 3 others, the stem of the Tektronix probe is narrow which makes it easier to place multiple ones next to each other on fine-pitch pin arrays. If you're thinking about upgrading your current probes to Tektronix ones: stay away from fakes. As I write this, you can find packs of 20 probes on eBay for $40 (incl shipping), so around $2 per probe. Search for "Tektronix SMG50" or "Tektronix 020-1386-01". Meanwhile, you can buy a pack of 12 fake ones on Amazon for $16, or $1.3 a piece. They work, but they aren't any better than the probes that come standard with the DSLogic. (Fake probe on the left, Tek probe on the right.) The stem of the fake one is much thicker and the hooks are different too. The Tek probe has rounded hooks with a sharp angle at the tip, while the hooks of a fake probe are flat and don't attach nearly as well to their target. If you need to probe targets with a pitch that is smaller than 1.25mm, you should check out these micro clips that I reviewed ages ago.
The Controller Hardware
Each cable supports 4 probes and plugs into the main unit with 8 0.05" pins in 4x2 configuration, one pin for the signal, one pin for ground. The cable itself has a tiny PCB sticking out that slots into a gap of the aluminum enclosure. This way it's not possible to plug in the cable incorrectly… unlike the Saleae. It's great. When we open up the device, we can see an Infineon (formerly Cypress) CYUSB3014-BZX EZ-USB FX3 SuperSpeed controller. A Saleae Logic Pro uses the same device. These are your go-to USB interface chips when you need a microcontroller in addition to the core USB3 functionality. They're relatively cheap too, you can get them for $16 in single-digit quantities at LCSC.com. The other side of the PCB is much busier. The big ticket components are:
- a Spartan-6 XC6SLX16 FPGA, responsible for data acquisition, triggering, run-length encoding/compression, data storage to DRAM, and sending data to the CYUSB3014. A Saleae Logic 16 Pro has a smaller Spartan-6 LX9. That makes sense: its triggering options aren't as advanced as the DSLogic and since it lacks external DDR memory, it doesn't need a memory controller on the FPGA either.
- a DDR3-1600 DRAM: a Micron MT41K128M16JT-125, marked D9PTK, with 2Gbits of storage and a 16-bit data bus.
- an Analog Devices ADF4360-7 clock generator. I found this a bit surprising. A Spartan-6 LX16 FPGA has 2 clock management tiles (CMT) that each have 1 real PLL and 2 DCMs (digital clock managers) with delay locked loop, digital frequency synthesizer, etc. The VCO of the PLL can be configured with a frequency up to 1080 MHz which should be sufficient to capture signals at 1GHz, but clearly there was a need for something better. The ADF4360-7 can generate an output clock as fast as 1800MHz.
There's obviously an extensive supporting cast:
- a Macronix MX25R2035F serial flash, used to configure the FPGA 
- an SGM2054 DDR termination voltage controller
- an LM26480 power management unit, with two linear voltage regulators and two step-down DC-DC converters
- two clock oscillators: 24MHz and 19.2MHz
- a TI HD3SS3220 USB-C mux, the glue logic that makes it possible for USB-C connectors to be orientation independent
- an SP3010-04UTG for USB ESD protection, marked QH4
Two 5x2 pin connectors J7 and J8 on the right side of the PCB are almost certainly used to connect programming and debugging cables to the FPGA and the CYUSB3014.
The Input Circuit
I spent a bit of time Ohm-ing out the input circuit. Here's what I came up with: the cable itself has a 100k Ohm series resistance. Together with a 100k Ohm shunt resistor to ground at the entrance of the PCB it acts as a divide-by-two resistive divider. The series resistor also limits the current going into the device. Before passing through a 33 Ohm series resistor that goes into the FPGA, there's an ESD protection device. I'm not 100% sure, but my guess is that it's an SRV05-4D-TP or some variant thereof. I'm not 100% sure why the 33 Ohm resistor is there. It's common to have these types of resistors on high speed lines to avoid reflection but since there's already a 100k resistor in the path, I don't think that makes much sense here. It might be there for additional protection of the ESD structure that resides inside the FPGA IOs? A DSLogic has a fully programmable input threshold voltage. If that's the case, then where's the opamp to compare the input voltage against this threshold voltage? (There is such a comparator on a Saleae Logic Pro!) The answer to that question is: "it's in the FPGA!" FPGA IOs can support many different I/O standards: single-ended ones, think CMOS and TTL, and a whole bunch of differential standards too. Differential protocols compare a positive and a negative version of the same signal, but nothing prevents anyone from assigning a static value to the negative input of a differential pair and making the input circuit behave as a regular single-ended input with a programmable threshold. There is plenty of literature out there about using the LVDS comparator in single-ended mode. It's even possible to create pretty fast analog-to-digital converters this way, but that's outside the scope of this blog post.
Impact of Input Circuit on Circuit Under Test
7 years ago, OpenTechLab reviewed the DSLogic Plus, the predecessor of the DSLogic U3Pro16. Joel spent a lot of time looking at its input circuit. He mentions a 7.6k Ohm pull-down resistor at the input, different than the 100k Ohm that I measured. There's no mention of a series resistor in the cable or about the way adjustable thresholds are handled, but I think that the DSLogic Pro has a similar input circuit. His review continues with an in-depth analysis of how measuring a signal can impact the signal itself; he even builds a simulation model of the whole system, and does a real-world comparison between a DSLogic measurement and a fake-Saleae one. While his measurements are convincing, I wasn't able to repeat his results on a similar setup with a DSLogic U3Pro and a Saleae Logic Pro: for both cases, a 200MHz signal was still good enough. I need to spend a bit more time to better understand the difference between my and his setup… Either way, I recommend watching this video.
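As a quick sanity check of the input network described above, here is the arithmetic in Python. The resistor values are the ones measured in this post (100k in the cable, 100k shunt on the PCB); the rest is plain voltage-divider math, and the 33 Ohm series resistor is ignored because the FPGA input draws essentially no current. The 0.75 V threshold in the example is an arbitrary illustration, not a value from the device.

# Divide-by-two input network: 100k series in the cable, 100k shunt on the PCB.
R_SERIES = 100_000
R_SHUNT = 100_000

def voltage_at_fpga_pin(v_probe: float) -> float:
    """Voltage seen by the FPGA comparator for a given probe-tip voltage."""
    return v_probe * R_SHUNT / (R_SERIES + R_SHUNT)

def threshold_at_probe(v_threshold_fpga: float) -> float:
    """Probe-side logic threshold corresponding to an internal comparator threshold."""
    return v_threshold_fpga * (R_SERIES + R_SHUNT) / R_SHUNT

print(voltage_at_fpga_pin(3.3))    # a 3.3 V input arrives at the comparator as 1.65 V
print(threshold_at_probe(0.75))    # an (assumed) 0.75 V internal threshold = 1.5 V at the probe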
Additional IOs: External Clock, Trigger In, Trigger Out
In addition to the 16 input pins that are used to record data, the DSLogic has 3 special IOs and a separate 3-wire cable to wire them up. They are marked with the characters "OIC" above the connector, which stands for Output, Input, Clock.
Clock
Instead of using a free-running internal clock, the 16 input signals can be sampled with an external sampling clock. This corresponds to a mode that's called "state clocking" in big-iron Tektronix and HP/Agilent/Keysight logic analyzers. Using an external clock that is the same as the one that is used to generate the signals that you want to record is a major benefit: you will always record the signal at the right time as long as setup and hold requirements are met. When using a free-running internal sampling clock, the sample rate must be a factor of 2 or more higher to get an accurate representation of what's going on in the system. The DSLogic U3Pro16 provides the option to sample the data signals at the positive or negative edge of the external clock. On one hand, I would have preferred more options in moving the edge of the clock back and forth. It's something that should be doable with the DLLs that are part of the DCM blocks of a Spartan-6. But on the other, external clocking is not supported at all by Saleae analyzers. The maximum clock speed of the external clock input is 50MHz, significantly lower than the free-running sample speed. This is usually the case as well for big iron logic analyzers. For example, my old Agilent 1670G has a free running sampling clock of 500MHz and supports a maximum state clock of 150MHz.
Trigger In
According to the manuals: "TI is the input for an external trigger signal". That's a great feature, but I couldn't figure out a way in DSView to enable it. After a bit of googling, I found the following comment in an issue on GitHub: "This 'TI' signal has no function now. It's reserved for compatible and further extension." This comment is dated July 29, 2018. A closer look at the U3Pro16 datasheets shows the description of the "TI" input as "Reserved"…
Trigger Out
When a trigger is activated inside the U3Pro, a pulse is generated on this pin. The manual doesn't give more details, but after futzing around with the horrible oscilloscope UI of my 1670G, I was able to capture a 500ms trigger-out pulse of 1.8V.
Software: From Saleae Logic to PulseView to DSView
When Saleae first came to market, they raised the bar for logic analyzer software with Logic, which had a GUI that allowed scrolling and zooming in and out of waveforms at blazing speed. Logic also added a few protocol decoders, and a C++ API to create your own decoders. It was the inspiration for PulseView, an open source equivalent that acts as the front-end application of SigRok, an open source library and tool that acts as the waveform data acquisition backend. PulseView supports protocol decoders as well, but it has an easier to use Python API and it allows stacked protocol decoders: a low-level decoder might convert the recorded signals into, say, I2C tokens (start/stop/one/zero). A second decoder creates byte-level I2C transactions out of the tokens. An I2C EEPROM decoder could then interpret multiple I2C transactions as read and write operations. PulseView has tons of protocol decoders, from simple UART transactions, all the way to USB 2.0 decoders. When the DSLogic logic analyzer hit the market after a successful Kickstarter campaign, it shipped with DSView, DreamSourceLab's closed source waveform viewer. However, people soon discovered that it was a reskinned version of PulseView, a big no-no since the latter is developed under a GPL3 license. After a bit of drama, DreamSourceLab made DSView available on GitHub under the required GPL3 as well, with attribution to the sigrok project. DSView is a hard fork of PulseView and there are still some bad feelings because DreamSourceLab doesn't push changes to the PulseView project, but at least they've been legally in the clear for the past 6 years. The default choice would be to use DSView to control your DSLogic, but Sigrok/PulseView supports the DSLogic as well. In the figure below, you can see DSView in demo mode, no hardware device connected, and an example of the 3 stacked protocols described earlier. For this review, I'll be using DSView. Saleae has since upgraded Logic to Logic 2, and now also supports stacked protocol decoders. It still uses a C++ API though. You can find an example decoder here.
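As a sketch of what the lowest layer of such a decoder stack does, here is the I2C token step in Python: turn sampled SCL/SDA values into start and stop tokens. This is my own illustration, not sigrok's decoder API, and the sample data is made up.

# Lowest decoder layer, sketched: find I2C start/stop conditions in sampled data.
# Start = falling edge of SDA while SCL is high; stop = rising edge of SDA while SCL is high.

def find_i2c_conditions(scl: list[int], sda: list[int]) -> list[tuple[int, str]]:
    events = []
    for i in range(1, len(sda)):
        if scl[i - 1] and scl[i]:                  # SCL stays high across the edge
            if sda[i - 1] == 1 and sda[i] == 0:
                events.append((i, "START"))
            elif sda[i - 1] == 0 and sda[i] == 1:
                events.append((i, "STOP"))
    return events

scl = [1, 1, 1, 0, 1, 0, 1, 1, 1]   # made-up samples
sda = [1, 1, 0, 0, 0, 0, 0, 0, 1]
print(find_i2c_conditions(scl, sda))   # [(2, 'START'), (8, 'STOP')]

A second, stacked layer would then group the bits between these tokens into bytes and transactions, which is exactly the layering PulseView and DSView expose.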
Installing DSView on a Linux Machine
DreamSourceLab provides DSView binaries for Windows and MacOS, but not for Linux. When you click the Download button for Linux, it returns a tar file with the source code, which you're expected to compile yourself. I wasn't looking forward to running into the usual issues with package dependencies and build failures, but after following the instructions in the INSTALL file, I ended up with a working executable on first try.
DSView UI
The UI of DSView is straightforward and similar to Saleae Logic 2. There are things that annoy me in both tools but I have a slight preference for Logic 2. Both DSView and Logic 2 have a demo mode that allows you to play with it without a real device attached. If you want to get a feel of what you like better, just download the software and play with it. Some random observations:
- DSView can pan and zoom in or out just as fast as Logic 2.
- On a MacBook, the way to navigate through the waveform really rubs me the wrong way: it uses the pinching gesture on a trackpad to zoom in and out. That seems like the obvious way to do it, but since it's such a common operation to browse through a waveform it slows you down. On my HP Laptop 17, DSView uses the 2 finger slide up and down to zoom in and out which is much faster. Logic 2 also uses the 2 finger slide up and down.
- The stacked protocol decoders are amazing.
- Like Logic 2, DSView can export decoded protocols as CSV files, but only one protocol at a time. It would be nice to be able to export multiple protocols in the same CSV file so that you can more easily compare transaction flow between interfaces. (A small script can stitch the exports back together; see the sketch below.)
- Logic 2 behaves predictably when you navigate through waveforms while the device is still acquiring new data. DSView behaves a bit erratically.
- In DSView, you need to double click on the waveform to set a time marker. That's easy enough, but it's not intuitive and since I only use the device occasionally, I need to google it every time I take it out of the closet.
- You can't assign a text label to a DSView cursor/time marker.
None of the points above disqualify DSView: it's a functional and stable piece of software. But I'd be lying if I wrote that DSView is as frictionless and polished as Logic 2.
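Here is one way to work around the one-protocol-per-CSV limitation mentioned above: merge two exports into a single time-sorted listing. The column names "Time[s]" and "Data" are assumptions, since the exact DSView CSV layout isn't shown in this post; adjust them to match a real export.

# Merge several single-protocol CSV exports into one time-sorted listing.
# Column names are assumed, not taken from DSView documentation.

import csv

def load(path: str, label: str, time_col: str = "Time[s]", data_col: str = "Data"):
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            yield float(row[time_col]), label, row[data_col]

def merge(paths_and_labels: list[tuple[str, str]]) -> list[tuple[float, str, str]]:
    rows = []
    for path, label in paths_and_labels:
        rows.extend(load(path, label))
    return sorted(rows, key=lambda r: r[0])   # interleave by timestamp

if __name__ == "__main__":
    for t, iface, data in merge([("i2c.csv", "I2C"), ("uart.csv", "UART")]):
        print(f"{t:12.6f}  {iface:5}  {data}")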
Streaming Data to the Host vs Local Storage in DRAM
The Saleae Logic 16 Pro only supports streaming mode: recorded data is immediately sent to the PC to which the device is connected. The U3Pro supports both streaming and buffered mode, where data is written to the DRAM that's on the device and only transported to the host when the recording is complete. Streaming mode introduces a dependency on the upstream bandwidth. An Infineon FX3 supports USB3 data rates up to 5Gbps, but it's far from certain that those rates are achieved in practice. And if so, it still limits recording 16 channels to around 300MHz, assuming no overhead. In practice, higher rates are possible because both devices support run-length encoding (RLE), a compression technique that reduces sequences of the same value to that value and the length of the sequence. Of course, RLE introduces recording uncertainty: high activity rates may result in exceeding the available bandwidth. (There's a small sketch of this arithmetic at the end of this post.) The U3Pro has a 16-bit wide 2Gbit DDR3 DRAM with a maximum data rate of 1.6G samples per second. Theoretically, that makes it possible to record 16 channels with a 1.6GHz sample rate, but that assumes accessing DRAM with 100% efficiency, which is never the case. The GUI has the option of recording 16 signals at 500MHz or 8 signals at 1GHz. Even when recording to the local DRAM, RLE compression is still possible. When RLE is disabled and the highest sample rate is selected, 268ms of data can be recorded. When connected to my Windows laptop, buffered mode worked fine, but on my MacBook Air M2 DSView always hangs when downloading the data that was recorded at high sample rates and I have to kill the application. In practice, I rarely record at high sample rates and I always use streaming mode which works reliably on the Mac too. But it's not a good look for DSView.
Triggers
One of the biggest benefits of the U3Pro over a Saleae is its trigger capability. Saleae Logic 2.4.22 offers the following options: you can set a rising edge, falling edge, a high or a low level on 1 signal in combination with some static values on other signals, and that's it. There's not even a rising-or-falling edge option. It's frankly a bit embarrassing. When you have an FPGA at your disposal, triggering functionality is not hard to implement. Meanwhile, even in Simple Trigger mode, the DSLogic can trigger on multiple edges at the same time, something that can be useful when using an external sampling clock. But the DSLogic really shines when enabling the Advanced Trigger option. In Stage Trigger mode, you can create state sequences that are up to 16 phases long, with 2 16-bit comparisons and a counter per stage. Alternatively, Serial Trigger mode is powerful enough to capture protocols like I2C, where a start flag is triggered by a falling edge of SDA when SCL is high, a stop flag by a rising edge of SDA when SCL is high, and data bits are captured on the rising edge of SCL. You don't always need powerful trigger options, but they're great to have when you do.
Conclusion
The U3Pro is not perfect. It doesn't have an analog mode, buffered mode doesn't work reliably on my MacBook, and the DSView GUI is a bit quirky. But it is relatively cheap, it has a huge library of decoding protocols, and the triggering modes are excellent. I've used it for a few projects now and it hasn't let me down so far. If you're in the market for a cheap logic analyzer, give it a good look.
References
Logic Analyzer Shopping: comparison between Saleae Logic Pro 16, Innomaker LA2016, Innomaker LA5016, DSLogic Plus, and DSLogic U3Pro16.
Footnotes
[1] It even has the digital storage scope option with 2 analog channels, 500MHz bandwidth and 2GSa/s sampling rate.
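A postscript with the streaming-mode arithmetic mentioned earlier, plus a toy run-length encoder. The 5 Gbps and 16 channel numbers come from this post; the encoder itself is a generic illustration of the RLE idea, not DSLogic's actual on-wire format.

# 5 Gbps / 16 bits per sample = ~312 Msamples/s before protocol overhead,
# which is where the "around 300MHz" streaming figure comes from.
USB3_BITS_PER_SECOND = 5e9
CHANNELS = 16
print(f"max streaming rate: {USB3_BITS_PER_SECOND / CHANNELS / 1e6:.0f} Msamples/s")

def rle(samples: list[int]) -> list[tuple[int, int]]:
    """Collapse runs of identical 16-bit samples into (value, run length) pairs."""
    runs = []
    for s in samples:
        if runs and runs[-1][0] == s:
            runs[-1] = (s, runs[-1][1] + 1)
        else:
            runs.append((s, 1))
    return runs

# A mostly idle bus compresses extremely well...
print(rle([0x0000] * 6 + [0x0001, 0x0003] + [0x0000] * 4))
# ...while a busy one can exceed the USB link budget, hence the recording uncertainty.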

yesterday 3 votes