A few days ago, on a certain orange website, I came across an article about an improvised parallel printer capture device. It contains the line: "There are other projects out there, but when you google for terms such as 'parallel port to usb', they drown in a sea of 'USB to parallel port' results!" While the author came up with a perfectly elegant and working solution, on reading that article I immediately thought "aren't they just being an idiot? why not just use a USB parallel port controller?" Well, this spurred me to do some further reading on the humble parallel port, and it turns out that it is possible, although not certain, that I am in fact the idiot. What I immediately assumed---that you could use a USB parallel controller to receive the bytes sent on a parallel printer interface---is probably actually true, but it would depend on the specific configuration of the parallel controller in question and it seems likely that inexpensive USB parallel adapters may not be capable. I...


More from computers are bad

2025-03-01 the cold glow of tritium

I have been slowly working on a book. Don't get too excited, it is on a very niche topic and I will probably eventually barely finish it and then post it here. But in the meantime, I will recount some stories which are related, but don't quite fit in. Today, we'll learn a bit about the self-illumination industry. At the turn of the 20th century, it was discovered that the newfangled element radium could be combined with a phosphor to create a paint that glowed. This was pretty much as cool as it sounds, and commercial radioluminescent paints like Undark went through periods of mass popularity. The most significant application, though, was in the military: radioluminescent paints were applied first to aircraft instruments and later to watches and gunsights. The low light output of radioluminescent paints had a tactical advantage (being very difficult to see from a distance), while the self-powering nature of radioisotopes made them very reliable. The First World War was thus the "killer app" for radioluminescence. Military demand for self-illuminating devices fed a "radium rush" that built mines, processing plants, and manufacturing operations across the country. It also fed, in a sense much too literal, the tragedy of the "Radium Girls." Several self-luminous dial manufacturers knowingly subjected their women painters to shockingly irresponsible conditions, leading inevitably to radium poisoning that disfigured, debilitated, and ultimately killed them. Today, this is a fairly well-known story, a cautionary tale about the nuclear excess and labor exploitation of the 1920s. That the situation persisted into the 1940s is often omitted, perhaps too inconvenient to the narrative that a series of lawsuits, and what was essentially the invention of occupational medicine, headed off the problem in the late 1920s. What did happen after the Radium Girls? What was the fate of the luminous radium industry?
A significant lull in military demand after WWI was hard on the radium business, to say nothing of a series of costly settlements to radium painters despite aggressive efforts to avoid liability. At the same time, significant radium reserves were discovered overseas, triggering a price collapse that closed most of the mines. The two largest manufacturers of radium dials, Radium Dial Company (part of Standard Chemical, which owned most radium mines) and US Radium Corporation (USRC), both went through lean times. Fortunately for them, the advent of the Second World War reignited demand for radioluminescence. The story of Radium Dial and USRC doesn't end in the 1920s---of course it doesn't, luminous paints having had a major 1970s second wind. Both companies survived, in various forms, into the current century. In this article, I will focus on the post-WWII story of radioactive self-illumination and the legacy that we live with today. During its 1920s financial difficulties, the USRC closed the Orange, New Jersey plant famously associated with Radium Girls and opened a new facility in Brooklyn. In 1948, perhaps looking to manage expenses during yet another post-war slump, USRC relocated again to Bloomsburg, Pennsylvania. The Bloomsburg facility, originally a toy factory, operated through a series of generational shifts in self-illuminating technology. The use of radium, with some occasional polonium, for radioluminescence declined in the 1950s and ended entirely in the 1970s. The alpha radiation emitted by those elements is very effective in exciting phosphors but so energetic that it damages them. A longer overall lifespan, and somewhat better safety properties, could be obtained by the use of a beta emitter like strontium or tritium. While strontium was widely used in military applications, civilian products shifted towards tritium, which offered an attractive balance of price and half-life.
USRC handled almost a dozen radioisotopes in Bloomsburg, many of them due to diversified operations during the 1950s that included calibration sources, ionizers, and luminous products built to various specific military requirements. The construction of a metal plating plant enabled further diversification, including foil sources used in research, but eventually became an opportunity for vertical integration. By 1968, USRC had consolidated to only tritium products, with an emphasis on clocks and watches. Radioluminescent clocks were a huge hit, in part because of their practicality, but fashion was definitely a factor. Millions of radioluminescent clocks were sold during the '60s and '70s, many of them by Westclox. Westclox started out as a typical clock company (the United Clock Company in 1885), but joined the atomic age through a long-lived partnership with the Radium Dial Company. The two companies were so close that they eventually became physically close as well: Radium Dial's occupational health tragedy played out in Ottawa, Illinois, a town Radium Dial had chosen as its headquarters due to its proximity to Westclox in nearby Peru [1]. Westclox sold clocks with radioluminescent dials from the 1920s to probably the 1970s, but one of the interesting things about this corner of atomic history is just how poorly documented it is. Westclox may have switched from radium to tritium at some point, and definitely abandoned radioisotopes entirely at some point. Clock and watch collectors, a rather avid bunch, struggle to tell when. Many consumer radioisotopes are like this: it's surprisingly hard to know if they even are radioactive. Now, the Radium Dial Company itself folded entirely due to a series of radium poisoning lawsuits in the 1930s. Simply being found guilty of one of the most malevolent labor abuses of the era would not stop free enterprise, though, and Radium Dial's president founded a legally distinct company called Luminous Processes just down the street.
Luminous Processes is particularly notable for having continued the production of radium-based clock faces until 1978, making them the last manufacturer of commercial radioluminescent radium products. This also presents compelling circumstantial evidence that Westclox continued to use radium paint until sometime around 1978, which lines up with the general impressions of luminous dial collectors. While the late '70s were the end of Radium Dial, USRC was just beginning its corporate transformation. From 1980 to 1982, a confusing series of spinoffs and mergers led to USR Industries, parent company of Metreal, parent company of Safety Light Corporation, which manufactured products to be marketed and distributed by Isolite. All of these companies were ultimately part of USR Industries, the former USRC, but the org chart sure did get more complex. The Nuclear Regulatory Commission expressed some irritation in their observation, decades later, that they weren't told about any of this restructuring until they noticed it on their own. Safety Light, as the name suggests, focused on a new application for tritium radioluminescence: safety signage, mostly self-powered illuminated exit signs and evacuation signage for aircraft. Safety Light continued to manufacture tritium exit signs until 2007, when they shut down following some tough interactions with the NRC and the EPA. They had been, in the fashion typical of early nuclear industry, disposing of their waste by putting it in a hole in the ground. They had persisted in doing this much longer than was socially acceptable, and ultimately seem to have been bankrupted by their environmental obligations... obligations which then had to be assumed by the Superfund program. The specific form of illumination used in these exit signs, and by far the most common type of radioluminescence today, is the Gaseous Tritium Light Source or GTLS.
GTLS are small glass tubes or vials, usually made with borosilicate glass, containing tritium gas and an internal coating of phosphor. GTLS are simple, robust, and due to the very small amount of tritium required, fairly inexpensive. They can be made large enough to illuminate a letter in an exit sign, or small enough to be embedded into a watch hand. Major applications include watch faces, gun sights, and the keychains of "EDC" enthusiasts. Plenty of GTLS manufacturers have come and gone over the years. In the UK, defense contractor Saunders-Roe got into the GTLS business during WWII. Their GTLS product line moved to Brandhurst Inc., which had a major American subsidiary. It is an interesting observation that the US always seems to have been the biggest market for GTLS, but their manufacture has increasingly shifted overseas. Brandhurst is no longer even British, having gone the way of so much of the nuclear world by becoming Canadian. A merger with Canadian company SRB created SRB Technologies in Pembroke, Ontario, which continues to manufacture GTLS today. Other Canadian GTLS manufacturers have not fared as well. Shield Source Inc., of Peterborough, Ontario, began filling GTLS vials in 1987. I can't find a whole lot of information on Shield Source's early days, but they seem to have mostly made tubes for exit signs, and perhaps some other self-powered signage. In 2012, the Canadian Nuclear Safety Commission (CNSC) detected a discrepancy in Shield Source's tritium emissions monitoring. I am not sure of the exact details, because CNSC seems to make less information public in general than the US NRC [2]. Here's what appears to have happened: tritium is a gas, which makes it tricky to safely handle. Fortunately, the activity of tritium is relatively low and its half life is relatively short. 
This means that it's acceptable to manage everyday leakage (for example when connecting and disconnecting things) in a tritium workspace by ventilating it to a stack, releasing it to the atmosphere for dilution and decay. The license of a tritium facility will specify a limit for how much radioactivity can be released this way, and monitoring systems (usually several layers of monitoring systems) have to be used to ensure that the permit limit is not exceeded. In the case of Shield Source, some kind of configuration error with the tritium ventilation monitoring system combined with a failure to adequately test and audit it. The CNSC discovered that during 2010 and 2011, the facility had undercounted their tritium emissions, and in fact exceeded the limits of their license. Air samplers located around the facility, some of which were also validated by an independent laboratory, did not detect tritium in excess of the environmental limits. This suggests that the excess releases probably did not have an adverse impact on human health or the environment. Still, exceeding license terms and then failing to report and correct the problem for two years is a very serious failure by a licensee. In 2012, when the problem was discovered, CNSC ordered Shield Source's license modified to prohibit actual tritium handling. This can seem like an odd maneuver but something similar can happen in the US. Just having radioisotope-contaminated equipment, storing test sources, and managing radioactive waste requires a license. By modifying Shield Source's license to prohibit tritium vial filling, the CNSC effectively shut the plant down while allowing Shield Source to continue their radiological protection and waste management functions. This is the same reason that long-defunct radiological facilities often still hold licenses from NRC in the US: they retain the licenses to allow them to store and process waste and contaminated materials during decommissioning. 
In the case of Shield Source, while the violation was serious, CNSC does not seem to have anticipated a permanent shutdown. The terms agreed in 2012 were that Shield Source could regain a license to manufacture GTLS if it produced for CNSC a satisfactory report on the root cause of the failure and actions taken to prevent a recurrence. Shield Source did produce such a report, and CNSC seems to have mostly accepted it with some comments requesting further work (the actual report does not appear to be public). Still, in early 2013, Shield Source informed CNSC that it did not intend to resume manufacturing. The license was converted to a one-year license to facilitate decommissioning. Tritium filling and ventilation equipment, which had been contaminated by long-term exposure to tritium, was "packaged" and disposed. This typically consists of breaking things down into parts small enough to fit into 55-gallon drums, "overpacking" those drums into 65-gallon drums for extra protection, and then coordinating with transportation authorities to ship the materials in a suitable way to a facility licensed to dispose of them. This is mostly done by burying them in the ground in an area where the geology makes groundwater interaction exceedingly unlikely, like a certain landfill on the Texas-New Mexico border near Eunice. Keep in mind that tritium's short half life means this is not a long-term geological repository situation; the waste needs to be safely contained for only, say, fifty years to get down to levels not much different from background. I don't know where the Shield Source waste went, CNSC only says it went to a licensed facility. Once the contaminated equipment was removed, drywall and ceiling and floor finishes were removed in the tritium handling area and everything left was thoroughly cleaned. 
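That "fifty years" figure falls out of simple exponential decay arithmetic. A minimal sketch (assuming a tritium half-life of about 12.3 years, a value not given in the text):

```python
def fraction_remaining(years: float, half_life: float) -> float:
    """Fraction of a radioisotope's original activity left after `years` of decay."""
    return 0.5 ** (years / half_life)

# Assumed half-life of tritium (H-3), in years.
TRITIUM_HALF_LIFE = 12.3

# After ~50 years, roughly four half-lives, only about 6% of the
# original activity remains.
print(f"{fraction_remaining(50, TRITIUM_HALF_LIFE):.1%}")
```

Another fifty years would cut that remaining few percent by a further factor of sixteen, which is why tritium waste is a store-and-decay problem rather than a geological-repository one.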
A survey confirmed that remaining tritium contamination was below CNSC-determined limits (for example, in-air concentrations that would lead to a dose of less than 0.01 mSv/year for 9-5 occupational exposure). At that point, the Shield Source building was released to the landlord they had leased it from, presumably to be occupied by some other company. Fortunately tritium cleanup isn't all that complex. You might wonder why Shield Source abruptly closed down. I assume there was some back-and-forth with CNSC before they decided to throw in the towel, but it is kind of odd that they folded entirely during the response to an incident that CNSC seems to have fully expected them to survive. I suspect that a full year of lost revenue was just too much for Shield Source: by 2012, when all of this was playing out, the radioluminescence market had seriously declined. There are a lot of reasons. For one, the regulatory approach to tritium has become more and more strict over time. Radium is entirely prohibited in consumer goods, and the limit on tritium activity is very low. Even self-illuminating exit signs now require NRC oversight in the US, as discussed shortly. Besides, public sentiment has increasingly turned against the Friendly Atom in consumer contexts, and you can imagine that people are especially sensitive to the use of tritium in classic institutional contexts for self-powered exit signs: schools and healthcare facilities. At the same time, alternatives have emerged. Non-radioactive luminescent materials, the kinds of things we tend to call "glow in the dark," have greatly improved since WWII. Strontium aluminate is a typical choice today---the inclusion of strontium might suggest otherwise, but strontium aluminate uses the stable natural isotope of strontium, Sr-88, and is not radioactive.
Strontium aluminate has mostly displaced radioluminescence in safety applications, and for example the FAA has long allowed it for safety signage and path illumination on aircraft. Keep in mind that these luminescent materials are not self-powered. They must be "charged" by exposure to light. Minor adaptations are required, for example a requirement that the cabin lights in airliners be turned on for a certain period of time before takeoff, but in practice these limitations are considered preferable to the complexity and risks involved in the use of radioisotopes. You are probably already thinking that improving electronics have also made radioluminescence less relevant. Compact, cool-running, energy-efficient LEDs and a wide variety of packages and form factors mean that a lot of traditional applications of radioluminescence are now simply electric. Here's just a small example: in the early days of LCD digital watches, it was not unusual for higher-end models to use a radioluminescent source as a backlight. Today that's just nonsensical, a digital watch needs a power source anyway and in even the cheapest Casios a single LED offers a reasonable alternative. Radioluminescent digital watches were very short lived. Now that we've learned about a few historic radioluminescent manufacturers, you might have a couple of questions. Where were the radioisotopes actually sourced? And why does Ontario come up twice? These are related. From the 1910s to the 1950s, radioluminescent products were mostly using radium sourced from Standard Chemical, who extracted it from mines in the Southwest. 
The domestic radium mining industry collapsed by 1955 due to a combination of factors: declining demand after WWII, cheaper radium imported from Brazil, and a broadly changing attitude towards radium that led the NRC to note in the '90s that we might never again find the need to extract radium: radium has a very long half-life that makes it considerably more difficult to manage than strontium or tritium. Today, you could say that the price of radium has gone negative, in that you are far more likely to pay an environmental management company to take it away (at rather high prices) than to buy more. But what about tritium? Tritium is not really naturally occurring; there technically is some natural tritium but it's at extremely low concentrations and very hard to get at. But, as it happens, irradiating water produces a bit of tritium, and nuclear reactors incidentally irradiate a lot of water. With suitable modifications, the tritium produced as a byproduct of civilian reactors can be concentrated and sold. Ontario Hydro has long had facilities to perform this extraction, and recently built a new plant at the Darlington Nuclear Station that processes heavy water shipped from CANDU reactors throughout Ontario. The primary purpose of this plant is to reduce environmental exposure from the release of "tritiated" heavy water; it produces more tritium than can reasonably be sold, so much of it is stored for decay. The result is that tritium is fairly abundant and cheap in Ontario. Besides SRB Technologies, which packages tritium from Ontario Hydro into GTLS, another major manufacturer of GTLS is the Swiss company mb-microtec. mb-microtec is the parent of watch brand Traser and GTLS brand Trigalight, and seems to be one of the largest sources of consumer GTLS overall. Many of the tritium keychains you can buy, for example, use tritium vials manufactured by mb-microtec.
NRC documents suggest that mb-microtec contracts a lot of their finished product manufacturing to a company in Hong Kong and that some of the finished products you see using their GTLS (like watches and fobs) are in fact white-labeled from that plant, but unfortunately don't make the original source of the tritium clear. mb-microtec has the distinction of operating the only recycling plant for tritium gas, and press releases surrounding the new recycling operation say they purchase the rest of their tritium supply, I assume from the civilian nuclear power industry in Switzerland, which has several major reactors operating. A number of other manufacturers produce GTLS primarily for military applications, with some safety signage side business. And then there is, of course, the nuclear weapons program, which consumes the largest volume of tritium in the US. The US's tritium production facility for much of the Cold War actually shut down in 1988, one of the factors in most GTLS manufacturers being overseas. In the interim period, the sole domestic tritium supply was recycling of tritium in dismantled weapons and other surplus equipment. Since tritium has such a short half-life, this situation cannot persist indefinitely, and tritium production was resumed in 2004 at the Tennessee Valley Authority's Watts Bar nuclear generating station. Tritium extracted from that plant is currently used solely by the Department of Energy, primarily for the weapons program. Finally, let's discuss the modern state of radioluminescence. GTLS, based on tritium, are the only type of radioluminescence available to consumers. All importation and distribution of GTLS requires an NRC license, although companies that only distribute products that have been manufactured and tested by another licensee fall under a license exemption category that still requires NRC reporting but greatly simplifies the process. Consumers that purchase these items have no obligations to the NRC.
Major categories of devices under these rules include smoke detectors, detection instruments and small calibration sources, and self-luminous products using tritium, krypton, or promethium. You might wonder, "how big of a device can I buy under these rules?" The answer to that question is a bit complicated, so let me explain my understanding of the rules using a specific example. Let's say you buy a GTLS keychain from massdrop or wherever people get EDC baubles these days [3]. The business you ordered it from almost certainly did not make it, and is acting as an NRC exempt distributor of a product. In NRC terms, your purchase of the product is not the "initial sale or distribution," that already happened when the company you got it from ordered it from their supplier. Their supplier, or possibly someone further up in the chain, does need to hold a license: an NRC specific license is required to manufacture, process, produce, or initially transfer or sell tritium products. This is the reason that overseas companies like SRB and mb-microtec hold NRC licenses; this is the only way for consumers to legally receive their products. It is important to note the word "specific" in "NRC specific license." These licenses are very specific; the NRC approves each individual product including the design of the containment and labeling. When a license is issued, the individual products are added to a registry maintained by the NRC. When evaluating license applications, the NRC considers a set of safety objectives rather than specific criteria.
For example, and if you want to read along, we're in 10 CFR 32.23: "In normal use and disposal of a single exempt unit, it is unlikely that the external radiation dose in any one year, or the dose commitment resulting from the intake of radioactive material in any one year, to a suitable sample of the group of individuals expected to be most highly exposed to radiation or radioactive material from the product will exceed the dose to the appropriate organ as specified in Column I of the table in § 32.24 of this part." So the rules are a bit soft, in that a licensee can argue back and forth with the NRC over means of calculating dose risk and so on. It is, ultimately, the NRC's discretion as to whether or not a device complies. It's surprisingly hard to track down original licensing paperwork for these products because of how frequently they are rebranded, and resellers never seem to provide detailed specifications. I suspect this is intentional, as I've found some cases of NRC applications that request trade secret confidentiality on details. Still, from the license paperwork I've found with hard numbers, it seems like manufacturers keep the total activity of GTLS products (e.g. a single GTLS sold alone, or the total of the GTLS in a watch) under 25 millicurie. There do exist larger devices, of which exit signs are the largest category. Self-powered exit signs are also manufactured under NRC specific licenses, but their activity and resulting risk is too high to qualify for exemption at the distribution and use stage. Instead, all users of self-powered safety signs do so under a general license issued by the NRC (a general license meaning that it is implicitly issued to all such users). The general license is found in 10 CFR 31. Owners of tritium exit signs are required to designate a person to track and maintain the signs, to inform the NRC of that person's contact information and of any changes in that person, and to inform the NRC of any lost, stolen, or damaged signs.
General licensees are not allowed to sell or otherwise transfer tritium signs, unless they are remaining in the same location (e.g. when a building is sold), in which case they must notify the NRC and disclose NRC requirements to the transferee. When tritium exit signs reach the end of their lifespan, they must be disposed of by transfer to an NRC license holder who can recycle them. The general licensee has to notify the NRC of that transfer. Overall, the intent of the general license regulations is to ensure that the signs are properly disposed of: reporting transfers and events to the NRC, along with serial numbers, allows the NRC to audit for signs that have "disappeared." Missing tritium exit signs are a common source of NRC event reports. It should also be said that, partly for these reasons, tritium exit signs are pretty expensive. Roughly $300 for a new one, and $150 to dispose of an old one. Other radioluminescent devices you will find are mostly antiques. Radium dials are reasonably common, anything with a luminescent dial made before, say, 1960 is probably radium, and specifically Westclox products through 1978 likely use radium. The half-life of radium-226 is 1,600 years, so these radium dials have the distinction of often still working, although the paints have usually held up more poorly than the isotopes they contain. These items should be handled with caution, since the failure of the paint creates the possibility of inhaling or ingesting radium. They also emit radon as a decay product, which becomes hazardous in confined spaces, so radium dials should be stored in a well-ventilated environment. Strontium-90 has a half-life of 29 years, and tritium 12 years, so vintage radioluminescent products using either have usually decayed to the extent that they no longer shine brightly or even at all.
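Those half-life numbers explain the pattern collectors see. A rough illustration (the item ages here are my own assumptions for the sake of the arithmetic, not figures from any source):

```python
def fraction_remaining(years: float, half_life: float) -> float:
    """Fraction of original radioactivity left after `years` of decay."""
    return 0.5 ** (years / half_life)

# (isotope, half-life in years, assumed age of a vintage item in years)
examples = [
    ("radium-226", 1600, 75),  # e.g. a 1950s radium clock dial
    ("strontium-90", 29, 60),  # e.g. a 1960s strontium military dial
    ("tritium", 12, 50),       # e.g. a 1970s tritium watch
]

for isotope, half_life, age in examples:
    pct = fraction_remaining(age, half_life)
    print(f"{isotope}: {pct:.0%} of activity remaining")
```

The radium dial retains nearly all of its original activity, the strontium item roughly a quarter, and the tritium watch only a few percent, which is why the first still glows and the last has gone dark.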
The phosphors used for these products will usually still fluoresce brightly under UV light and might even photoluminesce for a time after light exposure, but they will no longer stay lit in a dark environment. Fortunately, the decay that makes them not work also makes them much safer to handle. Tritium decays to helium-3 which is quite safe, strontium-90 to yttrium-90 which quickly decays to zirconium-90. Zirconium-90 is stable and only about as toxic as any other heavy metal. You can see why these radioisotopes are now much preferred over radium. And that's the modern story of radioluminescence. Sometime soon, probably tomorrow, I will be sending out my supporter's newsletter, EYES ONLY, with some more detail on environmental remediation at historic processing facilities for radioluminescent products. You can learn a bit more about how US Radium was putting their waste in a hole in the ground, and also into a river, and sort of wherever else. You know Radium Dial Company was up to similar abuses. [1] The assertion that Ottawa is conveniently close to Peru is one of those oddities of naming places after bigger, more famous places. [2] CNSC's whole final report on Shield Source is only 25 pages. A similar decommissioning process in the US would produce thousands of pages of public record typically culminating in EPA Five Year Reviews which would be, themselves, perhaps a hundred pages depending on the amount of post-closure monitoring. I'm not familiar with the actual law but it seems like most of the difference is that CNSC does not normally publish technical documentation or original data (although one document does suggest that original data is available on request). It's an interesting difference... the 25-page report, really only 20 pages after front matter, is a lot more approachable for the public than a 400 page set of close-out reports. 
Much of the standard documentation in the US comes from NEPA requirements, and NEPA is infamous in some circles for requiring exhaustive reports that don't necessarily do anything useful. But from my perspective it is weird for the formal, published documentation on closure of a radiological site to not include hydrology discussion, demographics, maps, and fifty pages of data tables as appendices. Ideally a bunch of one-sentence acceptance emails stapled to the end for good measure. When it comes to describing the actual problem, CNSC only gives you a couple of paragraphs of background. [3] Really channeling Guy Debord with my contempt for keychains here. During the writing of this article, I bought myself a tritium EDC bauble, so we're all in the mud together.

2025-02-17 of psychics and securities

September 6th, 1996. Eddie Murray, of the Baltimore Orioles, is at bat. He has 20 home runs on the season; 499 in his career. Anticipation for the 500th had been building for the last week. It would make Murray only the third player to reach 500 home runs and 3000 hits. His career RBI would land in the top ten hitters in the history of the sport; his 500th home run was a statistical inevitability. Less foreseeable was the ball's glancing path through one of the most famous stories of the telephone business. Statistics only tell you what might happen. Michael Lasky had made a career, a very lucrative one, of telling people what would happen. Lasky would have that ball. As usual, he made it happen by advertising. Clearing right field, the ball landed in the hands of Dan Jones, a salesman from Towson, Maryland. Despite his vocation, he didn't immediately view his spectacular catch in financial terms. He told a newspaper reporter that he looked forward to meeting Murray, getting some signatures, some memorabilia. Instead, he got offers. At least three parties inquired about purchasing the ball, but the biggest offer came far from discreetly: an ad in the Baltimore Sun offering half a million dollars to whoever had it. Well, the offer was actually for a $25,000 annuity for 20 years, with a notional cash value of half a million but a time-adjusted value of $300,000 or less. I couldn't tell for sure, but given events that would follow, it seems unlikely that Jones ever received more than a few of the payments anyway. Still, the half a million made headlines, and NPV or not the sale price still set the record for a public sale of sports memorabilia. Lasky handled his new purchase with his signature sense of showmanship. He held a vote, a telephone vote: two 1-900 numbers, charging $0.95 a call, allowed the public to weigh in on whether he should donate the ball to the Babe Ruth Birthplace museum or display it in the swanky waterfront hotel he part-owned.
The proceeds went to charity, and after the museum won the poll, the ball did too. The whole thing was a bit of a publicity stunt; Lasky thrived on unsubtle displays, and he could part with the money. His 1-900 numbers were bringing in over $100 million a year.

Lasky's biography is obscure. Born 1942 in Brooklyn, he moved to Baltimore in the 1960s for some reason connected to a conspicuous family business: a blood bank. Perhaps the blood bank was a grift, it's hard to say now, but Lasky certainly had a unique eye for business. He was fond of horse racing, or really, of trackside betting. His father, a postal worker, had a proprietary theory of mathematics that he applied to predicting the outcomes of races. This art, or science, or sham, is called handicapping, and it became Lasky's first real success.

Under the pseudonym Mike Warren, he published the Baltimore Bulletin, a handicapping newsletter advertising sure bets at all the region's racetracks. Well, there were some little details of this business, some conflicts of interest, a little infringement on the trademark of the Preakness. The details are neither clear nor important, but he had some trouble with the racing commissions in at least three states. He probably wouldn't have tangled with them at all if he weren't stubbornly trying to hold down a license to breed racehorses while also running a betting cartel, but Lasky was always driven more by passion than reason.

Besides, he had other things going. Predicting the future in print was sort of an industry vertical, and he diversified. His mail-order astrology operation did well, before the Postal Service shut it down. He ran some sort of sports pager service, probably tied to betting, and I don't know what came of that. Perhaps on the back of a new year's resolution, he ran a health club, although it collapsed in 1985 with a bankruptcy case that revealed some, well, questionable practices.
Strange that a health club just weeks away from bankruptcy would sell so many multi-year memberships, paid up front. And where did that money go, anyway? No matter, Lasky was onto the next thing.

During the 1980s, changes had occurred that would grow Lasky's future-predicting portfolio into a staple of American media. First, in 1984, a Reagan-era FCC voted to end most regulation of television advertising. Gone was the limit of 16 minutes per hour of paid programming. An advertiser could now book entire half-hour schedule slots. Second, during the early '80s, AT&T standardized and promoted a new model in telephone billing. The premium-rate number, often called a "1-900 number" after the NPA assigned for their use, incurred a callee-determined per-minute toll that the telco collected and paid on to the callee. It's a bit like a nascent version of "Web 3.0": telephone microtransactions, an innovative new way to pay for information services.

It seems like a fair assumption that handicapping brought Lasky to the 1-900 racket, and he certainly did offer betting tip lines. But he had learned a thing or two from the astrology business, even if it ran afoul of Big Postal. Handicapping involved a surprising amount of work, and its marketing centered on the supposedly unique insight of the handicapper. Fixed recordings of advice could only keep people on a telephone line for so long, anyway. Astrology, though, involved even fewer facts, and even more opportunity to ramble. Best of all, there was an established industry of small-time psychics working out of their homes. With the magic of the telephone, every one of them could offer intuitive readings to all of America, for just $3.99 a minute.

In 1990, Lasky's new "direct response marketing" company Inphomation, Inc. contracted five-time Grammy winner Dionne Warwick, celebrity psychic Linda Georgian, and a studio audience to produce a 30-minute talk-show "infomercial" promoting the Psychic Friends Network.
Over the next few years, Inphomation conjoined with an ad booking agency and a video production company under the ownership of Mike Lasky's son, Marc Lasky. Inphomation spent as much as a million a week on television bookings, promoting a knitting machine and a fishing lure and sports tips, but most of all psychics. The original half-hour Psychic Friends Network spot is often regarded as the most successful infomercial in history. It remade Warwick's reputation, turning her from a singer into a psychic promoter. Calls to PFN's 1-900 number, charged at various rates that could reach over $200 an hour, brought in $140 million in revenue in its peak years of the mid 1990s.

Lasky described PFN as an innovative new business model, but it's one we now easily recognize as "gig work." Telephone psychics, recruited mostly by referral from the existing network, worked from home, answering calls on their own telephones. Some read Tarot, some gazed into crystals, others did nothing at all, but the important thing was that they kept callers on the line. After the phone company's cut and Inphomation's cut, they were paid a share of the per-minute rate that automatically appeared on callers' monthly phone bills.

A lot of people, and even some articles written in the last decade, link the Psychic Friends Network to "Miss Cleo." There's sort of a "Berenstain Bears" effect happening here; as widely as we might remember Miss Cleo's PFN appearances, there is no such thing. Miss Cleo was actually the head psychic and spokeswoman of the Psychic Reader's Network, which would be called a competitor to the Psychic Friends Network except that they didn't operate at the same time. In the early '00s, the Psychic Reader's Network collapsed in scandal. The limitations of its business model, a straightforward con, eventually caught up with it. It was sued out of business by a dozen states, then the FTC, then the FCC just for good measure. The era of the 1-900 number was actually rather short.
By the late '80s, it had already become clear that the main application of premium rate calling was not stock quotations or paid tech support or referral services. It was scams. An extremely common genre of premium rate number, almost the lifeblood of the industry, was joke lines that offered telephonic entertainment in the voice of cartoon characters. Advertisements for these numbers, run during morning cartoons, advised children to call right away. Their parents wouldn't find out until the end of the month, when the phone bill came and those jokes turned out to have run $50 in 1983's currency. Telephone companies were at first called complicit in the grift, but eventually bowed to pressure and, in 1987, made it possible for consumers to block 1-900 calling on their phone service. Of course, few telephone customers took advantage, and the children's joke line racket went on into the early '90s, when a series of FTC lawsuits finally scared most of them off the telephone network.

Adult entertainment was another touchstone of the industry, although adult lines did not last as long on 1-900 numbers as we often remember. Ripping off adults via their children is one thing; smut is a vice. AT&T and MCI, the dominant long distance carriers and thus the companies that handled most 1-900 call volume, largely cut off phone sex lines by 1991. Congress passed a law requiring telephone carriers to block them by default anyway, but of course left other 1-900 services as they were. Phone sex lines were far from gone, of course, but they had to find more nuanced ways to make their revenue: international rates and complicit telephone carriers, dial-around long distance revenue, and whatever else they could think of that regulators hadn't caught up to yet.

When Miss Cleo and her Psychic Reader's Network launched in 1997, psychics were still an "above board" use of the 1-900 number. The Psychic Readers lived to see the end of that era.
In the late '90s, regulations changed to make unpaid 1-900 bills more difficult to collect. By 2001, some telephone carriers had dropped psychic lines from their networks as a business decision. The bill disputes simply weren't worth the hassle. In 2002, AT&T ended 1-900 billing entirely. Other carriers maintained premium-rate billing for another decade, but AT&T had most of the customer volume anyway.

The Psychic Friends Network, blessed by better vision, struck at the right time. 1990 to 1997 was the golden age of 1-900 and the golden age of Inphomation. Inphomation's three-story office building in Baltimore had a conference room with a hand-painted ceiling fresco of cherubs and clouds. In the marble-lined lobby, a wall of 25 televisions played Inphomation infomercials on repeat. At its peak, the Psychic Friends Network routed calls to 2,000 independent psychic contractors. Dionne Warwick and Linda Georgian were famous television personalities; Warwick wasn't entirely happy about her association with the brand, but she made royalties whenever the infomercial aired. Some customers spent tens of thousands of dollars on psychic advice.

In 1993, a direct response marketing firm called Regal Communications made a deal to buy Inphomation. The deal went through, but just the next year Regal spun their entire 1-900 division off, and Inphomation exercised an option to become an independent company once again. A decade later, many of Regal's executives would face SEC charges over the details of Regal's 1-900 business, foreshadowing a common tendency of Psychic Friends Network owners.

The psychic business, it turns out, was not so unlike the handicapping business. Both were unsavory. Both made most of their money off of addicts. In the press, Lasky talked about casual fans that called for two minutes here and there. What's $5 for a little fun? You might even get some good advice. Lawsuits, regulatory action, and newspaper articles told a different story.
The "30 free minutes" promotion used to attract new customers only covered the first two minutes of each call; the rest was billed at an aggressive rate. The most important customers stayed on the line for hours. Callers had to sit through a few minutes of recordings, charged at the full rate, before being connected to a psychic who drew out the conversation by speaking slowly and asking inane questions. Some psychics seem to have approached their job rather sincerely, but others apparently read from scripts. And just like the horse track, the whole thing moved a lot of money. Lasky continued to tussle with racing commissions over his thoroughbred horses. He bought a Mercedes, a yacht, a luxury condo, a luxury hotel whose presidential suite he used as an apartment, a half-million-dollar baseball. Well, a $300,000 baseball, at least.

Eventually, the odds turned against Lasky. Miss Cleo's Psychic Reader's Network was just one of the many PFN lookalikes that popped up in the late '90s. There was a vacuum to fill, because in 1997, Inphomation was descending into bankruptcy. Opinions differ on Lasky's management and leadership. He was a visionary at least once, but his later decisions were more variable. Bringing infomercial production in-house through his son's Pikesville Pictures might have improved creative control, but production budgets ballooned and projects ran late. PFN was still running mainly off of the Dionne Warwick shows, which were feeling dated, especially after a memorable 1993 Saturday Night Live parody featuring Christopher Walken. Lasky's idea for a radio show, the Psychic Friends Radio Network, had a promising trial run but then faltered on launch. Hardly a half dozen radio stations picked it up, and it lost Inphomation tens of millions of dollars.

While they were years ahead of the telephone industry cracking down on psychics, PFN still struggled with a timeless trouble of the telephone network: billing.
AT&T had a long-established practice of withholding a portion of 1-900 revenue for chargebacks. Some customers would see the extra charges on their phone bills and call in with complaints; the telephone company, not really being the beneficiary of the revenue anyway, was not willing to go to much trouble to keep it and often agreed to a refund. Holding, say, 10% of a callee's 1-900 billings in reserve allowed AT&T to offer these refunds without taking a loss. The psychic industry, it turned out, was especially prone to end-of-month customer dissatisfaction. Chargebacks were so frequent that AT&T raised Inphomation's withholding to 20%, 30%, and even 40% of revenue.

At least, that's how AT&T told it. Lasky always seemed skeptical, alleging that the telephone companies were simply refusing to hand over money Inphomation was owed, making themselves a free loan. Inphomation brokered a deal to move their business elsewhere, signing an exclusive contract with MCI. MCI underdelivered: they withheld just as much revenue, in violation of the contract according to Lasky, and besides, the MCI numbers suffered from poor quality and dropped calls. At least, that's how Inphomation told it. Maybe the dropped calls were on Inphomation's end, and maybe they had a convenient positive effect on revenue as callers paid for a few minutes of recordings before being connected to no one at all.

By the time the Psychic Friends Network fell apart, there was a lot of blame passed around. Lasky would eventually prevail in a lawsuit against MCI for unpaid revenue, but not until too late. By some combination of a lack of innovation in their product, largely unchanged since 1991, and increasing expenses for both advertising and its founder's lifestyle, Inphomation ended 1997 over $20 million in the red. In 1998 they filed for Chapter 11, and Lasky sought to reorganize the company as debtor-in-possession. The bankruptcy case brought out some stories of Lasky's personal behavior.
While some employees stood by him as a talented salesman and an apt finder of opportunities, others had filed assault charges. Those charges were later dropped, but by many accounts, he had quite a temper. Lasky's habit of not just carrying but brandishing a handgun around the office certainly raised eyebrows. Besides, his expensive lifestyle persisted much too far into Inphomation's decline.

The bankruptcy judge's doubts about Lasky reached a head when it was revealed that he had tried to hide the company's assets. Much of the infrastructure and intellectual property of the Psychic Friends Network, and no small amount of cash, had been transferred to the newly formed Friends of Friends LLC in the weeks before bankruptcy. The judge also noticed some irregularities. The company controller had been sworn in as treasurer, signed the bankruptcy petition, and then resigned as treasurer in the space of a few days. When asked why the company chose this odd maneuver over simply having Lasky, corporate president, sign the papers, Lasky had trouble recalling the whole thing. He also had trouble recalling loans Inphomation had taken, meetings he had scheduled, and actions he had taken. When asked about Inphomation's board of directors, Lasky didn't know who they were, or when they had last met.

The judge used harsh language. "I've seen nothing but evidence of concealment, dishonesty, and less than full disclosure... I have no hope this debtor can reorganize with the present management." Lasky was removed, and a receiver was appointed to manage Inphomation through a reorganization that quietly turned into a liquidation.

And that was almost the end of the Psychic Friends Network. The bankruptcy is sometimes attributed to Lasky's failure to adapt to the times, but PFN wasn't entirely without innovation. The Psychic Friends Network first went online, at psychicfriendsnetwork.com, in 1997.
This website, launched in the company's final days, offered not only the PFN's 1-900 number but a landing page for a telephone-based version of "Colorgenics." Colorgenics was a personality test based on the "Lüscher color test," an assessment designed by a Swiss psychotherapist based on nothing in particular. There are dozens of colorgenics tests online today, many of which make various attempts to extract money from the user, but none with quite the verve of a color quiz via 1-900 number.

Inphomation just didn't quite make it in the internet age, or at least not directly. Most people know 1998 as the end of the Psychic Friends Network. The Dionne Warwick infomercials were gone, and that was most of PFN anyway. Without Linda Georgian, could PFN live on? Yes, it turns out, but not in its living form. The 1998 bankruptcy marked PFN's transition from a scam to the specter of a scam, and then to a whole different kind of scam. It was the beginning of the PFN's zombie years.

In 1999, Inphomation's assets were liquidated at auction for $1.85 million, a far cry from the company's mid-'90s valuations in the hundreds of millions. The buyer: Marc Lasky, Michael Lasky's son. PFN assets became part of PFN Holdings Inc., with Michael Lasky and Marc Lasky as officers. PFN was back.

It does seem that the Laskys made a brief second crack at a 1-900 business, but by 1999 the tide was clearly against expensive psychic hotlines. Telephone companies had started their crackdown, and attorney general lawsuits were brewing. Besides, after the buyout PFN Holdings didn't have much capital, and doesn't seem to have done much in the way of marketing. It's unclear what happened in these years, but I think the Laskys licensed out the PFN name. psychicfriendsnetwork.com, from 2002 to around 2009, directed visitors to Keen. Keen was the Inphomation of the internet age, what Inphomation probably would have been if they had run their finances a little better in '97.
Backed by $60 million in venture funding from names like Microsoft and eBay, Keen was a classic dotcom startup. They launched in '99 with the ambitious and original idea of operating a web directory and reference library. Like most of the seemingly endless number of reference website startups, they had to pivot to something else. Unlike most of the others, Keen and their investors had a relaxed set of moral strictures about the company's new direction. In the early 2000s, keen.com was squarely in the ethical swamp that had been so well explored by the 1-900 business. Their web directory specialized in phone sex and psychic advice---all offered via 1-800 numbers with convenient credit card payment, a new twist on the premium phone line model that bypassed the vagaries and regulations of telephone billing.

Keen is, incidentally, still around today. They'll broker a call or chat with empath/medium Citrine Angel, offering both angel readings and clairsentience, just $1 for the first 5 minutes and $2.99 a minute thereafter. That's actually a pretty good deal compared to the Psychic Friends Network's old rates. Keen's parent company, Ingenio, runs a half dozen psychic advice websites and a habit tracking app. But it says something about the viability of online psychics that Keen still seems to do most of their business via phone. Maybe the internet is not as much of a blessing for psychics as it seems, or maybe they just haven't found quite the right business model.

The Laskys enjoyed a windfall during PFN's 2000s dormancy. In 2004, the Inphomation bankruptcy estate settled its lawsuit against the by-then-bankrupt MCI for withholding payments. The Laskys made $4 million. It's hard to say where that money went, maybe to backing Marc's Pikesville Pictures production company. Pikesville picked up odd jobs producing television commercials, promotional documentaries, and an extremely strange educational film intended to prepare children to testify in court.
I only know about this because parts of it appear in the video "Marc Lasky Demo Reel," uploaded to YouTube by "Mike Warren," the old horse race handicapping pseudonym of Michael Lasky. It has 167 views, and a single comment, "my dad made this." That was Gabriela Lasky, Marc's daughter. It's funny how much of modern life plays out on YouTube, where Marc's own account uploaded the full run of PFN infomercials.

Some of that $4 million in MCI money might have gone into the Psychic Friends Network's reboot. In 2009, Marc Lasky produced a new series of television commercials for PFN. "The legendary Psychic Friends Network is back, bigger and bolder than ever." An extremely catchy jingle goes "all new, all improved, all knowing: call the Psychic Friends Network." On PFN 2.0, you can access your favorite psychic whenever you wish, on your laptop, your mobile, or your tablet. These spots were decidedly modernized, directing viewers to text a keyword to an SMS shortcode or visit psychicfriendsnetwork.com, where they could set up real-time video consultations with PFN's network of advisors. Some referred to "newpfn.com" instead, perhaps because it was easier to type, or perhaps because there was some dispute around the Keen deal.

There were still echoes of the original 1990s formula. The younger Lasky seemed to be hunting for a new celebrity lead like Warwick, but was having trouble finding one. Actress Vivica A. Fox appeared in one spot, but then sent a cease and desist and went to the press alleging that her likeness was used without her permission. Well, they got her to record the lines somehow, but maybe they never paid. Maybe she found out about PFN's troubled reputation after the shoot. In any case, Lasky went hunting again and landed on Puerto Rican astrologer and television personality Walter Mercado. Mercado, coming off something like Liberace if he were a Spanish-language TV host, sells the Psychic Friends Network to a Latin beat and does a hell of a job of it.
He was a recognizable face in the Latin-American media due to his astrology show, syndicated for many years by Univision, and he appears in a sparkling outfit that lets him deliver the line "the legend is back" with far more credibility than anyone else in the new PFN spots. He was no Dionne Warwick, though, and the 2009 PFN revival sorely lacked the production quality or charm of the '90s infomercial. It seems to have had little impact; this iteration of PFN is so obscure that many histories of the company are completely unaware of it.

Elsewhere, in Nevada, an enigmatic figure named Ya Tao Chang had incorporated Web Wizards Inc. I can tell you almost nothing about this; Chang is impossible to research and Web Wizards left no footprints. All I know is that, somehow, Web Wizards made it to a listing on the OTC securities market. In 2012, PFN Holdings needed money and, to be frank, I think that Chang needed a real business. Or, at least, something that looked like one. In a reverse merger, PFN Holdings joined Web Wizards and renamed itself Psychic Friends Network Inc., PFNI on the OTC bulletin board. The deal was financed by Right Power Services, a British Virgin Islands company (or was it a Singapore company? accounts disagree), also linked to Chang. Supposedly, there were millions in capital. Supposedly, exciting things were to come for PFN.

Penny stocks are stocks that trade at low prices, under $5 or, even more classically, under $1. Because these prices are too low to qualify for listing on exchanges, they trade on less formal, and less heavily regulated, over-the-counter markets. Related to penny stocks are microcap stocks, stocks of companies with very small market capitalizations. These companies, being small and obscure, typically see minuscule trading volumes as well. The low price, low volume, and thus high volatility of penny stocks makes them notoriously prone to manipulation.
Fraud is rampant on OTC markets, and if you look up a few microcap names it's not hard to fall into a sort of alternate corporate universe. There exists what I call the "pseudocorporate world," an economy that relates to "real" business the same way that pseudoscience relates to science. Pseudocorporations have much of the ceremony of their legitimate brethren, but none of the substance. They have boards, executives, officers; they issue press releases; they publish annual reports. What they conspicuously lack is a product, or a business. Like NFTs or memecoins, they are purely tokens for speculation, and that speculation is mostly pumping and dumping.

Penny stock pseudocompanies intentionally resemble real ones; indeed, their operation, to the extent that they have one, is to manufacture the appearance of operating. They announce new products that will never materialize, they announce new partnerships that will never amount to anything, they announce mergers that never close. They also rearrange their executive leadership with impressive frequency, due in no small part to the tendency of those leaders to end up in trouble with the SEC. All of this means that it's very difficult to untangle their history, and often hard to tell whether they were once real companies that were hollowed out and exploited by con men, or whether they were a sham all along.

Web Wizards does not appear to have had any purpose prior to its merger with PFN, and as part of the merger deal the Laskys became the executive leadership of the new company. They seem to have legitimately approached the transaction as a way to raise capital for PFN, because immediately after the merger they announced PFN's ambitious future. This new PFN would be an all-online operation using live webcasts and 1:1 video calling. The PFN website became a landing page for their new membership service, and the Laskys were primed to produce a new series of TV spots. Little more would ever be heard of this.
In 2014, PFN Inc renamed itself to "Peer to Peer Network Inc.," announcing its intent to capitalize on PFN's early gig work model by expanding the company into other "peer to peer" industries. The first and only venture Peer to Peer Network (PTOP on OTC Pink) announced was an acquisition of 321Lend, a Silicon Valley software startup that intended to match accredited investors with individuals needing loans. Neither company seems to have followed up on the announcement, and a year later 321Lend announced its acquisition by Loans4Less, so it doesn't seem that the deal went through.

I might be reading too much between the lines, but I think there was a conflict between the Laskys, who had a fairly sincere intent to operate the PFN as a business, and the revolving odd lot of investors and executives that seem to grow like mold on publicly-traded microcap companies.

Back in 2010, a stockbroker named Joshua Sodaitis started work on a transit payment and routing app called "Freemobicard." In 2023, he was profiled in Business Leaders Review, one of dozens of magazines, podcasts, YouTube channels, and Medium blogs that exist to provide microcap executives with uncritical interviews that create a semblance of notability. The Review says Sodaitis "envisioned a future where seamless, affordable, and sustainable transportation would be accessible to all." Freemobicard, the article tells us, has "not only transformed the way people travel but has also contributed to easing traffic congestion and reducing carbon emissions." It never really says what Freemobicard actually is, but that doesn't matter, because by the time it gets involved in our story, Sodaitis had completely forgotten about the transportation thing anyway.

In 2015, disagreements between the psychic promoters and the stock promoters had come to a head.
Attributing the move to differences in business vision, the Laskys bought the Psychic Friends Network assets out of Peer to Peer Network for $20,000 and resigned their seats on PTOP's board. At about the same time, PTOP announced a "licensing agreement" with a software company called Code2Action. The licensing agreement somehow involved Code2Action's CEO, Christopher Esposito, becoming CEO of PTOP itself. At this point Code2Action apparently rolled up operations, making the "licensing agreement" more of a merger, but the contract as filed with the SEC does indeed read as a license agreement. This is just one of the many odd and confusing details of PTOP's post-2015 corporate governance.

I couldn't really tell you who Christopher Esposito is or where he came from, but he seems to have had something to do with Joshua Sodaitis, because he would eventually bring Sodaitis along as a board member. More conspicuously, Code2Action's product was called Mobicard---or Freemobicard, depending on which press release you read. This Mobicard was a very different one, though. Prior to the merger it was some sort of SMS marketing product (a "text this keyword to this shortcode" type of autoresponse/referral service), but as PTOP renamed itself to Mobicard Inc. (or at least announced the intent to; I don't think the renaming ever actually happened), the vision shifted to the lucrative world of digital business cards. Their mobile app, Mobicard 1.0, allowed business professionals to pay a monthly fee to hand out a link to a basic profile webpage with contact information and social media links. Kind of like Linktree, but with LinkedIn vibes, higher prices, and less polish.

One of the things you'll notice about Mobicard is that, for a software company, they were pretty short on software engineers.
Every version of the product (and they constantly announced new ones, with press releases touting Mobicard 1.5, 1.7, and 2.0) seems to have been contracted out to a different low-end software house. There are demo videos of various iterations of Mobicard, and they are extremely underwhelming. I don't think it really mattered; PTOP didn't expect Mobicard to make money. Making money is not the point of a microcap pseudocompany.

That same year, Code2Action signed another license agreement, much like the PTOP deal, but with a company called Cannabiz. Or maybe J M Farms Patient Group, the timeline is fuzzy. This was either a marketing company for medical marijuana growers or a medical marijuana grower proper, probably varying before and after they were denied a license by the state of Massachusetts on account of the criminal record of one of the founders. The whole cannabis aside only really matters because, first, it matches the classic microcap scam pattern of constantly pivoting to whatever is new and hot (which was, for a time, newly legalized cannabis), and second, because a court would later find that Cannabiz was a vehicle for securities fraud.

Esposito had a few years of freedom first, though, to work on his new Peer to Peer Network venture. He made the best of it: PTOP issued a steady stream of press releases related to contracts for Mobicard development, the appointment of various new executives, and events as minor as having purchased a new domain name. Despite the steady stream of mentions in the venerable pages of PRNewswire, PTOP doesn't seem to have actually done anything. In 2015, 2016, 2017, and 2018, PTOP failed to complete financial audits and SEC reports. To be fair, in 2016 Esposito was fined nearly $100,000 by the SEC as part of a larger case against Cannabiz and its executives. He must have had a hard time getting to the business chores of PTOP, especially since he had been barred from stock promotion.
In 2018, with PTOP on the verge of delisting due to the string of late audits, Joshua Sodaitis was promoted to CEO and Chairman of "Peer to Peer Network, Inc., (Stock Ticker Symbol PTOP) a.k.a. Mobicard," "the 1st and ONLY publicly traded digital business card company." PTOP's main objective became maintaining its public listing, and for a couple of years most discussion of the actual product stopped.

In 2020, PTOP made the "50 Most Admired Companies" in something called "The Silicon Valley Review," which I assume is prestigious and conveniently offers a 10% discount if you nominate your company for one of their many respected awards right now. "This has been a monumental year for the company," Sodaitis said, announcing that they had been granted two (provisional) patents and appointed a new advisory board (including one member "who is self-identified as a progressive millennial" and another who was a retired doctor). The bio of Sodaitis mentions the Massachusetts medical marijuana venture, using the name of the company that was denied a license and shuttered by the SEC, not the reorganized replacement. Sodaitis is not great with details.

It's hard to explain Mobicard because of this atmosphere of confusion. There was the complete change in product concept, which is itself confusing, since Sodaitis seems to have given the interview where he discussed Mobicard as a transportation app well after he had started describing it as a digital business card. Likewise, Mobicard has a remarkable number of distinct websites. freemobicard.com, mobicard.com, ptopnetwork.com, and mobicards.ca all seem oddly unaware of each other, and as the business plan continues to morph, they are starting to disagree on what Mobicard even is. The software contractor or staff developing the product keeps changing, as does the version of Mobicard they are about to launch. And on top of it all are the press releases. Oh, the press releases.
There's nary a Silicon Valley grift unmentioned in PTOP's voluminous newswire output. Crypto, the Metaverse, and AI all make appearances as part of the digital business card vision. As for the tone, the headlines speak for themselves. "MOBICARD Set for Explosive Growth in 2024" "MobiCard's Digital Business Card Revolutionizes Networking & Social Media" "MOBICARD Revolutionizes Business Cards" "Peer To Peer Network, aka Mobicard™ Announces Effective Form C Filing with the SEC and Launch of Reg CF Crowdfunding Campaign" "Joshua Sodaitis, Mobicard, Inc. Chairman and CEO: 'We’re Highly Committed to Keeping Our 'One Source Networking Solution' Relevant to the Ever-Changing Dynamics of Personal and Professional Networking'" "PTOP ANNOUNCES THE RESUBMISSION OF THE IMPROVED MOBICARD MOBILE APPS TO THE APPLE STORE AND GOOGLE PLAY" "Mobicard™ Experienced 832% User Growth in Two Weeks" "Peer To Peer Network Makes Payment to Attorney To File A Provisional Patent for Innovative Technology" Yes, this company issues a press release when they pay an invoice. To be fair, considering the history of bankruptcy, maybe that's more of an achievement than it sounds. In one "interview" with a "business magazine," Sodaitis talks about why Mobicard has taken so long to reach maturity. It's the Apple app store review, he explains, a story to which numerous iOS devs will no doubt relate. Besides, based on their press releases, they have had to switch contractors and completely redevelop the product multiple times. I didn't know that the digital business card was such a technical challenge. Sodaitis has been working on it for perhaps as long as fifteen years and still hasn't quite gotten to MVP. You know where this goes, don't you? After decades of shady characters, trouble with regulators, cosplaying at business, and outright scams, there's only one way the story could possibly end. 
All the way back in 2017, PTOP announced that they were "Up 993.75% After Launch Of Their Mobicoin Cryptocurrency." PTOP, the release continues, "saw a truly Bitcoin-esque move today, completely outdoing the strength of every other stock trading on the OTC market." PTOP's incredible market move was, of course, from $0.0005 to $0.0094. With 22 billion shares of common stock outstanding, that gave PTOP a valuation of over $200 million by the timeless logic of the crypto investor. Of course, PTOP wasn't giving up on their OTC listing, and with declining Bitcoin prices their interest in the cryptocurrency seems to have declined as well. That was, until the political and crypto market winds shifted yet again. Late last year, PTOP was newly describing Mobicoin as a utility token. In November, they received a provisional patent on "A Cryptocurrency-Based Platform for Connecting Companies and Social Media Users for Targeted Marketing Campaigns." This is the latest version of Mobicard. As far as I can tell, it's now a platform where people are paid in cryptocurrency for tweeting advertising on behalf of a brand. PTOP had to beef up their crypto expertise for this exciting new frontier. Last year, they hired "Renowned Crypto Specialist DeFi Mark," proprietor of a cryptocurrency casino and proud owner of 32,000 Twitter followers. "With Peer To Peer Network, we're poised to unleash the power of blockchain, likely triggering a significant shift in the general understanding of web3," he said. "I have spoken to our Senior Architect Jay Wallace who is a genius at what he does and he knows that we plan to Launch Mobicard 1.7 with the MOBICOIN fully implemented shortly after the New President is sworn into office. I think this is a great time to reintroduce the world to MOBICOIN™ regardless of how I, or anyone feels about politics we can't deny the Crypto markets exceptional increase in anticipation to major regulatory transformations. 
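For what it's worth, the market-cap arithmetic checks out. A quick sketch, using the share count and peak price quoted above:

```python
# Sanity-check the valuation: 22 billion shares of common stock at the
# post-"Bitcoin-esque move" price of $0.0094.
shares_outstanding = 22_000_000_000
price_after_move = 0.0094

market_cap = shares_outstanding * price_after_move
print(f"${market_cap:,.0f}")  # about $206.8 million -- "over $200 million"
```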
I made it very clear to our Tech Team leader that this is a must to launch Mobicard™ 1.7." Well, they've outdone themselves. Just two weeks ago, they announced Mobicard 2.0. "With enhanced features like real-time analytics, seamless MOBICOIN™ integration, and enterprise-level onboarding for up to 999 million employees, this platform is positioned to set new standards in the digital business card industry." And how does that cryptocurrency integration work? "Look the Mobicard™ Reward system is simple. We had something like it previously implemented back in 2017. If a MOBICARD™ user shares his MOBICARD™ 50 times in one week then he will be rewarded with 50 MOBICOIN's. If a MOBICARD user attends a conference and shares his digital business card MOBICARD™ with 100 people he will be granted 100 MOBICOIN™'s." Yeah, it's best not to ask. I decided to try out this innovative new digital business card experience, although I regret to say that the version in the Play Store is only 1.5. I'm sure they're just waiting on app store review. The dashboard looks pretty good, although I had some difficulty actually using it. I have not so far been able to successfully create a digital business card, and most of the tabs just lead to errors, but I have gained access to four or five real estate brokers and CPAs via the "featured cards." One of the featured cards is for Christopher Esposito, listed as "Crypto Dev" at NRGai. Somewhere around 2019, Esposito brought Code2Action back to life again. He promoted a stock offering, talking up the company's bright future and many promising contracts. You might remember that this is exactly the kind of thing that the SEC got him for in 2016, and the SEC dutifully got him again. He was sentenced to five years of probation after a court found that he had lied about a plan to merge Code2Action with another company and taken steps to conceal the mass sale of his own stock in the company. 
NRGai, or NRG4ai, they're inconsistent, is a token that claims to facilitate the use of idle GPUs for AI training. According to one analytics website, it has four holders and trades at $0.00. The Laskys have moved on as well. Michael Lasky is now well into retirement, but Marc Lasky is President & Director of Fernhill Corporation, "a publicly traded Web3 Enterprise Software Infrastructure company focused on providing cloud based APIs and solutions for digital asset trading, NFT marketplaces, data aggregation and DeFi/Lending". Fernhill has four subsidiaries, ranging from a cryptocurrency market platform to mining software. None appear to have real products. Fernhill is trading on OTC Pink at $0.00045. Joshua Sodaitis is still working on Mobicard. Mobicard 2.0 is set for a June 1 launch date, and promises to "redefine digital networking and position [PTOP] as the premier solution in the digital business card industry." "With these exciting developments, we anticipate a positive impact on the price of PTOP stock." PTOP is trading on OTC Pink at $0.00015. Michael Lasky was reportedly fond of saying that "you can get more money from people over the telephone than using a gun." As it happens, he wielded a gun anyway, but he had a big personality like that. One wonders what he would say about the internet. At some point, in his golden years, he relaunched his handicapping business Mike Warren Sports. The website sold $97/month subscriptions for tips on the 2015 NFL and NCAA football seasons, and the customer testimonials are glowing. One of them is from CNN's Larry King, although it doesn't read much like a testimonial, more like an admission that he met Lasky once. There might still be some hope. A microcap investor, operating amusingly as "FOMO Inc.," has been agitating to force a corporate meeting for PTOP. PTOP apparently hasn't held one in years, is once again behind on audits, and isn't replying to shareholder inquiries. 
Investors allege poor management by Sodaitis. The demand letter, in a list of CC'd shareholders the author claims to represent by proxy, includes familiar names: Mike and Marc Lasky. They never fully divested themselves of their kind-of-sort-of former company. A 1998 article in the Baltimore Sun discussed Lasky's history as a handicapper. It quotes a former Inphomation employee, whose preacher father once wore a "Mike Warren Sports" sweater at the mall. "A woman came up to him and said 'Oh, I believe in him, Mike Warren.' My father says, 'well, ma'am, everybody has to believe in something.'" Lasky built his company on predicting the future, but of course, he was only ever playing the odds. Eventually, both turned on him. His company fell to a series of bad bets, and his scam fell to technological progress. Everyone has to believe in something, though, and when one con man stumbles there are always more ready to step in.

2025-02-02 residential networking over telephone

Recently, I covered some of the history of Ethernet's tenuous relationship with installed telephone cabling. That article focused on the earlier and more business-oriented products, but many of you probably know that there have been a number of efforts to install IP networking over installed telephone wiring in a residential and SOHO environment. There is a broader category of "computer networking over things you already have in your house," and some products remain pretty popular today, although seemingly less so in the US than in Europe. The grandparent of these products is probably PhoneNet, a fairly popular product introduced by Farallon in the mid-'80s. At the time, local area networking for microcomputers was far from settled. Just about every vendor had their own proprietary solution, although many of them had shared heritage and resulting similarities. Apple Computer was struggling with the situation just like everyone; in 1983 they introduced an XNS-based network stack for the Lisa called AppleNet and then almost immediately gave up on it [1]. Steve Jobs made the call to adopt IBM's token ring instead, which would have seemed like a pretty safe bet at the time because of IBM's general prominence in the computing industry. Besides, Apple was enjoying a period of warming relations with IBM, part of the 1980s-1990s pattern of Apple and Microsoft alternately courting IBM as their gateway into business computing. The vision of token ring as the Apple network standard died the way a lot of token ring visions did, to the late delivery and high cost of IBM's design. While Apple was waiting around for token ring to materialize, they sort of stumbled into their own LAN suite, AppleTalk [2]. AppleTalk was basically an expansion of the unusually sophisticated peripheral interconnect used by the Macintosh to longer cable runs. 
Apple put a lot of software work into it, creating a pretty impressive zero-configuration experience that did a lot to popularize the idea of LANs outside of organizations large enough to have dedicated network administrators. The hardware was a little more, well, weird. In true Apple fashion, AppleTalk launched with a requirement for weird proprietary cables. To be fair, one of the reasons for the system's enduring popularity was its low cost compared to Ethernet or token ring. They weren't price gouging on the cables the way it might seem today. Still, they were a decided inconvenience, especially when trying to connect machines across more than one room. One of the great things about AppleTalk, in this context, is that it was very slow. As a result, even though the physical layer was basically RS-422, the electrical requirements for the cabling were pretty relaxed. Apple had already taken advantage of this for cost reduction, using a shared signal ground on the long cables rather than the dedicated differential pairs typical for RS-422. A hobbyist realized that you could push this further, and designed a passive dongle that used telephone wiring as a replacement for Apple's more expensive dongle and cables. He filed a patent and sold it to Farallon, who introduced the product as PhoneNet. PhoneNet was a big hit. It was cheaper than Apple's solution for the same performance, and even better, because AppleTalk was already a bus topology it could be used directly over the existing parallel-wired telephone cabling in a typical house or small office. For a lot of people with heritage in the Apple tradition of computing, it'll be the first LAN they ever used. 
Larger offices even used it because of the popularity of Macs in certain industries and the simplicity of patching their existing telephone cables for AppleTalk use; in my teenage years I worked in an office suite in downtown Portland that hadn't seen a remodel for a while and still had telephone jacks labeled "PhoneNet" at the desks. PhoneNet had one important limitation compared to the network-over-telephone products that would follow: it could not coexist with telephony. Well, it could, in a sense, and was advertised as such. But PhoneNet signaled within the voice band, so it required dedicated telephone pairs. In a lot of installations, it could use the second telephone line that was often wired but not actually used. Still, it was a bust for a lot of residential installs where only one phone line was fully wired and already in use for phone calls. As we saw in the case of Ethernet, local area networking standards evolved very quickly in the '80s and '90s. IP over Ethernet became by far the dominant standard, so the attention of the industry shifted towards new physical media for Ethernet frames. While 10BASE-T Ethernet operated over category 3 telephone wiring, that was of little benefit in the residential market. Commercial buildings typically had "home run" telephone wiring, in which each office's telephone pair ran directly to a wiring closet. In residential wiring of the era, this method was almost unheard of, and most houses had their telephone jacks wired in parallel along a small number of linear segments (often just one). This created a cabling situation much like coaxial Ethernet, in which each telephone jack was a "drop" along a linear bus. The problem is that coaxial Ethernet relied on several different installation measures to make this linear bus design practical, and home telephone wiring had none of these advantages. Inconsistently spaced drops, side legs, and a lack of termination meant that reflections were a formidable problem. 
PhoneNet addressed reflections mainly by operating at a very low speed (allowing reflections to "clear out" between symbols), but such a low bitrate did not befit the 1990s. A promising solution to the reflection problem came from a company called Tut Systems. Tut's history is unfortunately obscure, but they seem to have been involved in what we would now call "last-mile access technologies" since the 1980s. Tut would later be acquired by Motorola, but not before developing a number of telephone-wiring based IP networks under names like HomeWire and LongWire. A particular focus of Tut was multi-family housing, which will become important later. I'm not even sure when Tut introduced their residential networking product, but it seems like they filed a relevant patent in 1995, so let's say around then. Tut's solution relied on pulse position modulation (PPM), a technique in which data is encoded by the length of the spacing between pulses. The principal advantage of PPM is that it allows a fairly large number of bits to be transmitted per pulse (by using, say, 16 potential pulse positions to encode 4 bits). This allowed reflections to dissipate between pulses, even at relatively high bitrates. Following a bit of inter-corporate negotiation, the Tut solution became an industry standard under the HomePNA consortium: HomePNA 1.0. HomePNA 1.0 could transmit 1Mbps over residential telephone wiring with up to 25 devices. A few years later, HomePNA 1.0 was supplanted by HomePNA 2.0, which replaced PPM with QAM (a more common technique for high data rates over low bandwidth channels today) and in doing so improved to 10Mbps for potentially thousands of devices. I sort of questioned writing an article about all of these weird home networking media, because the end-user experience for most of them is pretty much the same. That makes it kind of boring to look at them one by one, as you'll see later. 
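To make the pulse-position idea concrete, here's a toy sketch in Python of a 16-slot scheme like the one described above. This illustrates only the encoding concept, not Tut's actual line coding:

```python
# Toy pulse-position modulation (PPM): each symbol is a single pulse placed
# in one of 16 time slots, so one pulse carries log2(16) = 4 bits, and
# reflections have the rest of the symbol period to die down.

SLOTS = 16  # 16 candidate pulse positions per symbol

def ppm_encode(data: bytes) -> list[int]:
    """Map each 4-bit nibble to a pulse position (0-15)."""
    positions = []
    for byte in data:
        positions.append(byte >> 4)    # high nibble first
        positions.append(byte & 0x0F)  # then low nibble
    return positions

def ppm_decode(positions: list[int]) -> bytes:
    """Reassemble nibble pairs back into bytes."""
    out = bytearray()
    for hi, lo in zip(positions[0::2], positions[1::2]):
        out.append((hi << 4) | lo)
    return bytes(out)
```

A byte like 0xA5 becomes two pulses, at positions 10 and 5; the decoder just reverses the mapping.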
Fortunately, HomePNA has a property that makes it interesting: despite a lot of the marketing talking more about single-family homes, Tut seems to have envisioned HomePNA mainly as a last-mile solution for multi-family housing. That makes HomePNA a bit different than later offerings, landing in a bit of a gray area between the LAN and the access network. The idea is this: home run wiring is unusual in residential buildings, but in apartment and condo buildings, it is typical for the telephone lines of each unit to terminate in a wiring closet. This yields a sort of hybrid star topology where you have one line to each unit, and multiple jacks in each unit. HomePNA took advantage of this wiring model by offering a product category that is at once bland and rather unusual for this type of media: a hub. HomePNA hubs are readily available, even today in used form, with 16 or 24 HomePNA interfaces. The idea of a hub can be a little confusing for a shared-bus media like HomePNA, but each interface on these hubs is a completely independent HomePNA network. In an apartment building, you could connect one interface to the telephone line of each apartment, and thus offer high-speed (for the time) internet to each of your tenants using existing infrastructure. A 100Mbps Ethernet port on the hub then connected to whatever upstream access you had available. The use of the term "hub" is a little confusing, and I do believe that at least in the case of HomePNA 2.0, they were actually switching devices. This leads to some weird labeling like "hub/switch," perhaps a result of the underlying oddity of a multi-port device on a shared-media network that nonetheless performs no routing. There's another important trait of HomePNA 2.0 that we should discuss, at least an important one to the historical development of home networking. HomePNA 1.0 was designed not to cause problematic interference with telephone calls but still effectively signaled within the voice band. 
HomePNA 2.0's QAM modulation addressed this problem completely: it signaled between 4MHz and 10MHz, which put it comfortably above not only the voice band but the roughly up-to-1MHz band used by early ADSL. HomePNA could coexist with pretty much anything else that would have been used on a telephone line at the time. Over time, control of HomePNA shifted away from Tut Systems and towards a competitor called Epigram, who had developed the QAM modulation for HomePNA 2.0. Later part of Broadcom, Epigram also developed a 100Mbps HomePNA 3.0 in 2005. The wind was mostly gone from HomePNA's sails by that point, though, more due to the rise of WiFi than anything else. There was a HomePNA 3.1, which added support for operation over cable TV wiring, but shortly after, in 2009, the HomePNA consortium endorsed the HomeGrid Forum as a successor. A few years later, HomePNA merged into HomeGrid Forum and faded away entirely. The HomeGrid Forum is the organization behind G.hn, which is to some extent a successor of HomePNA, although it incorporates other precedents as well. G.hn is actually fairly widely used considering the near-zero name recognition it enjoys, and I can't help but suspect that that's a result of the rather unergonomic names that ITU standards tend to take on. "G.hn" kind-of-sort-of stands for Gigabit Home Networking, which is at least more memorable than the formal designation G.9960, but still isn't at all distinctive. G.hn is a pretty interesting standard. It's quite sophisticated, using a complex and modern modulation scheme (OFDM) along with forward error correction. It is capable of up to 2Gbps in its recent versions, and is kind of hard to succinctly discuss because it supports four distinct physical media: telephone, coaxial (TV) cable, powerline, and fiber. G.hn's flexibility is probably another reason for its low brand recognition, because it looks very different in different applications. 
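The coexistence trick described above boils down to frequency division: voice, early ADSL, and HomePNA 2.0 simply occupy disjoint bands on the same pair of wires. A quick sketch, using approximate illustrative band edges rather than the exact figures from the specs:

```python
# Rough frequency bands on one residential telephone pair (Hz). The voice
# and ADSL edges are approximate placeholder values for illustration.
BANDS_HZ = {
    "voice":     (300, 3_400),             # analog telephony voice band
    "adsl":      (25_000, 1_100_000),      # early ADSL, roughly up to ~1.1 MHz
    "homepna2":  (4_000_000, 10_000_000),  # HomePNA 2.0, per the figures above
}

def overlaps(a, b):
    (a_lo, a_hi), (b_lo, b_hi) = a, b
    return a_lo < b_hi and b_lo < a_hi

# No pair of services shares spectrum, so all three coexist on one line.
names = list(BANDS_HZ)
clashes = [(x, y) for i, x in enumerate(names) for y in names[i + 1:]
           if overlaps(BANDS_HZ[x], BANDS_HZ[y])]
print(clashes)  # []
```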
Distinct profiles of G.hn involve different band plans and signaling details for each physical media, and it's designed to coexist with other protocols like ADSL when needed. Unlike HomePNA, multi-family housing is not a major consideration in the design of G.hn and combining multiple networks with a "hub/switch" is unusual. There's a reason: G.hn wasn't designed by access network companies like Tut; it was mostly designed in the television set-top box (STB) industry. When G.hn hit the market in 2009, cable and satellite TV was rapidly modernizing. The TiVo had established DVRs as nearly the norm, and then pushed consumers further towards the convenience of multi-room DVR systems. Providing multi-room satellite TV is actually surprisingly complex, because STV STBs (say that five times fast) actually reconfigure the LNB in the antenna as part of tuning. STB manufacturers, dominated by EchoStar (at one time part of Hughes and closely linked to the Dish Network), had solved this problem by making multiple STBs in a home communicate with each other. Typically, there is a "main" STB that actually interacts with the antenna and decodes TV channels. Other STBs in the same house use the coaxial cabling to communicate with the main STB, requesting video signals for specific channels. Multi-room DVR was basically an extension of this same concept. One STB is the actual DVR, and other STBs remote-control it, scheduling recordings and then having the main STB play them back, transmitting the video feed over the in-home coaxial cabling. You can see that this is becoming a lot like HomePNA, repurposing CATV-style or STV-style coaxial cabling as a general-purpose network in which peer devices can communicate with each other. As STB services have become more sophisticated, "over the top" media services and "triple play" combo packages have become an important and lucrative part of the home communications market. 
Structurally, these services can feel a little clumsy, with an STB at the television and a cable modem with telephone adapters somewhere else. STBs increasingly rely on internet-based services, so you end up connecting the STB to your WiFi, its traffic ultimately traveling over the same cabling as the TV service but through a different modem. It's awkward. G.hn was developed to unify these communications devices, and that's mostly how it's used. Providers like AT&T U-verse build G.hn into their cable television devices so that they can all share a DOCSIS internet connection. There are two basic ways of employing G.hn: first, you can use it to unify devices. The DOCSIS modem for internet service is integrated into the STB, and then G.hn media adapters can provide Ethernet connections wherever there is an existing cable drop. Second, G.hn can also be applied to multi-family housing, by installing a central modem system in the wiring closet and connecting each unit via G.hn. Providers that have adopted G.hn often use both configurations depending on the customer, so you see a lot of STBs these days with G.hn interfaces and extremely flexible configurations that allow them to either act as the upstream internet connection for the G.hn network, or to use a G.hn network that provides internet access from somewhere else. The same STB can thus be installed in either a single-family home or a multi-family unit. We should take a brief aside here to mention MoCA, the Multimedia over Coax Alliance. MoCA is a somewhat older protocol with a lot of similarities to G.hn. It's used in similar ways, and to some extent the difference between the two just comes down to corporate alliances: AT&T is into G.hn, but Cox, both US satellite TV providers, and Verizon have adopted MoCA, making it overall the more common of the two. I just think it's less interesting. Verizon FiOS prominently uses MoCA to provide IP-based television service to STBs, via an optical network terminal that provides MoCA to the existing CATV wiring. 
We've looked at home networking over telephone wiring, and home networking over coaxial cable. What about the electrical wiring? G.hn has a powerline profile, although it doesn't seem to be that widely used. Home powerline networking is much more often associated with HomePlug. Well, as it happens, HomePlug is sort of dead, the industry organization behind it having wrapped up operations in 2016. That might not be such a big practical problem, though, as HomePlug is closely aligned with related IEEE standards for data over powerline and it's widely used in embedded applications. As a consumer product, HomePlug will be found in the form of HomePlug AV2. AV2 offers Gigabit-plus data rates over good quality home electrical wiring, and compared to G.hn and MoCA it enjoys the benefit that standalone, consumer adapters are very easy to buy. HomePlug selects the most complex modulation the wiring can support (typically QAM with a large constellation size) and uses multiple OFDM carriers in the HF band, which it transmits onto the neutral conductor of an outlet. The neutral wiring in the average house is also joined at one location in the service panel, so it provides a convenient shared bus. On the downside, the installation quality of home electrical wiring is variable and the neutral conductor can be noisy, so some people experience very poor performance from HomePlug. Others find it to be great. It really depends on the situation. That brings us to the modern age: G.hn, MoCA, and HomePlug are all more or less competing standards for data networking using existing household wiring. As a consumer, you're most likely to use G.hn or MoCA if you have an ISP that provides equipment using one of the two. Standalone consumer installations, for people who just want to get Ethernet from one place to another without running cable, usually use HomePlug. 
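The "most complex modulation the wiring can support" behavior is, in spirit, a per-carrier bit-loading loop across the OFDM carriers. Here's a sketch of the idea; the constellation list and SNR thresholds are illustrative placeholders, not the real tables from the HomePlug specifications:

```python
# Toy per-carrier bit loading: for each OFDM carrier, pick the densest QAM
# constellation whose SNR requirement the measured channel meets.
# (name, bits per symbol, minimum SNR in dB) -- placeholder values.
CONSTELLATIONS = [
    ("BPSK", 1, 4.0),
    ("QPSK", 2, 7.0),
    ("16-QAM", 4, 14.0),
    ("256-QAM", 8, 26.0),
    ("4096-QAM", 12, 36.0),
]

def load_bits(snr_per_carrier_db):
    """Assign bits to each carrier; carriers buried in noise carry nothing."""
    bits = []
    for snr in snr_per_carrier_db:
        usable = [b for _, b, need in CONSTELLATIONS if snr >= need]
        bits.append(max(usable) if usable else 0)
    return bits

# A noisy carrier gets 0 bits, a clean one gets a dense constellation.
print(load_bits([3.0, 10.0, 30.0, 40.0]))  # [0, 2, 8, 12]
```

Real adapters periodically re-measure the channel and renegotiate this per-carrier map, which is why throughput quietly adapts to line conditions.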
It doesn't really have to be that way, G.hn powerline adapters have come down in price to where they compete pretty directly with HomePlug. Coaxial-cable and telephone-cable based solutions actually don't seem to be that popular with consumers any more, so powerline is the dominant choice. I can take a guess at the reason: electrical wiring can be of questionable quality, but in a lot of houses I see the coaxial and telephone wiring is much worse. Some people have outright removed the telephone wiring from houses, and the coaxial plant has often been through enough rounds of cable and satellite TV installers that it's a bit of a project to sort out which parts are connected. A large number of cheap passive distribution taps, common in cable TV where the signal level from the provider is very high, can be problematic for coaxial G.hn or MoCA. It's usually not hard to fix those problems, but unless an installer from the ISP sorts it out it usually doesn't happen. For the consumer, powerline is what's most likely to work. And, well, I'm not sure that any consumers care any more. WiFi has gotten so fast that it often beats the data rates achievable by these solutions, and it's often more reliable to boot. HomePlug in particular has a frustrating habit of working perfectly except for when something happens, conditions degrade, the adapters switch modulations, and the connection drops entirely for a few seconds. That's particularly maddening behavior for gamers, who are probably the most likely to care about the potential advantages of these wired solutions over WiFi. I expect G.hn, MoCA, and HomePlug to stick around. All three have been written into various embedded standards and adopted by ISPs as part of their access network in multi-family or at least as an installation convenience in single-family contexts. But I don't think anyone really cares about them any more, and they'll start to feel as antiquated as HomePNA. 
And here's a quick postscript to show how these protocols might adapt to the modern era: remember how I said G.hn can operate over fiber? Cheap fiber, too, the kind of plastic cables used by S/PDIF. The HomeGrid Forum is investigating the potential of G.hn over in-home passive optical networks, on the theory that these passive optical networks can be cheaper (due to small conductor size and EMI tolerance) and more flexible (due to the passive bus topology) than copper Ethernet. I wouldn't bet money on it, given the constant improvement of WiFi, but it's possible that G.hn will come back around for "fiber in the home" internet service. [1] XNS was a LAN suite designed by Xerox in the 1970s. Unusually for the time, it was an openly published standard, so a considerable number of the proprietary LANs of the 1980s were at least partially based on XNS. [2] The software sophistication of AppleTalk is all the more impressive when you consider that it was basically a rush job. Apple was set to launch LaserWriter, and as I mentioned recently on Mastodon, it was outrageously expensive. LaserWriter was built around the same print engine as the first LaserJet and still cost twice as much, due in good part to its flexible but very demanding PostScript engine. Apple realized it would never sell unless multiple Macintoshes could share it---it cost nearly as much as three Mac 128ks!---so they absolutely needed to have a LAN solution ready. LaserWriter would not wait for IBM to get their token ring shit together. This is a very common story of 1980s computer networks; it's hard to appreciate now how much printer sharing was one of the main motivations for networking computers at all. There's this old historical theory that hasn't held up very well but is appealing in its simplicity, that civilization arises primarily in response to the scarcity of water and thus the need to construct irrigation works. 
You could say that microcomputer networking arises primarily in response to the scarcity of printers.

2025-01-20 office of secure transportation

I've seen them at least twice on /r/whatisthisthing, a good couple dozen times on the road, and these days, even in press photos: GMC trucks with custom square boxes on the back, painted dark blue, with US Government "E" plates. These courier escorts, "unmarked" but about as subtle as a Crown Vic with a bull bar, are perhaps the most conspicuous part of an obscure office of a secretive agency. One that seems chronically underfunded but carries out a remarkable task: shipping nuclear weapons. The first nuclear weapon ever constructed, the Trinity Device, was transported over the road from Los Alamos to the north end of the White Sands Missile Range, near San Antonio, New Mexico. It was shipped disassembled, with the non-nuclear components strapped down in a box truck and the nuclear pit nestled in the back seat of a sedan. Army soldiers, of the Manhattan Engineering District, accompanied it for security. This was a singular operation, and the logistics were necessarily improvised. The end of the Second World War brought a brief reprieve in the nuclear weapons program, but only a brief one. By the 1950s, an arms race was underway. The civilian components of the Manhattan Project, reorganized as the Atomic Energy Commission, put manufacturing of nuclear arms into full swing. Most nuclear weapons of the late '40s, gravity bombs built for the Strategic Air Command, were assembled at former Manhattan Project laboratories. They were then "put away" at one of the three original nuclear weapons stockpiles: Manzano Base, Albuquerque; Killeen Base, Fort Hood; and Clarksville Base, Fort Campbell [1]. By the mid-1950s, the Pantex Plant near Amarillo had been activated as a full-scale nuclear weapons manufacturing center. Weapons were stockpiled not only at the AEC's tunnel sites but at the "Q Areas" of about 20 Strategic Air Command bases throughout the country and overseas. 
Shipping and handling nuclear weapons was no longer a one-off operation, it was a national enterprise. To understand the considerations around nuclear transportation, it's important to know who controls nuclear weapons. In the early days of the nuclear program, all weapons were exclusively under civilian control. Even when stored on military installations (as nearly all were), the keys and combinations to the vaults were held by employees of the AEC, not military personnel. Civilian control was a key component of the Atomic Energy Act, an artifact of a political climate that disfavored the idea of fully empowering the military with such destructive weapons. Over the decades since, larger and larger parts of the nuclear arsenal have been transferred into military control. The majority of "ready to use" nuclear weapons today are "allocated" to the military, and the military is responsible for storing and transporting them. Even today, though, civilian control is very much in force for weapons in any state other than ready for use. Newly manufactured weapons (in eras in which there was such a thing), weapons on their way to and from refurbishment or modification, and weapons removed from the military allocation for eventual disassembly are all under the control of the Department of Energy's National Nuclear Security Administration [2]. So too are components of weapons, test assemblies, and the full spectrum of Special Nuclear Material (a category defined by the Atomic Energy Act). Just as in the 1940s, civilian employees of the DoE are responsible for securing and transporting a large inventory of weapons and sensitive assets. As the Atomic Energy Commission matured, and nuclear weapons became less of an experiment and more of a product, transportation arrangements matured as well. It's hard to find much historical detail on AEC shipping in the 1960s, but we can pick up a few details from modern DoE publications showing how the process has improved. 
Weapons were transported in box trucks as part of a small convoy, accompanied by "technical couriers, special agents, and armed military police." Technical courier was an AEC job title, one that persisted for decades to describe the AEC staff who kept custody of weapons under transport. Despite the use of military security (references can be found to both Army MPs and Marines accompanying shipments), technical couriers were also armed. A late 1950s photo published by DoE depicts a civilian courier on the side of a road wielding a long suit jacket and an M3 submachine gun. During that period, shipments to overseas test sites were often made by military aircraft and Navy vessels. AEC couriers still kept custody of the device, and much of the route (for example, from Los Alamos to the Navy supply center at Oakland) was by AEC highway convoy. There have always been two key considerations in nuclear transportation: first, that an enemy force (first the Communists and later the Terrorists) might attempt to interdict such a shipment, and second, that nuclear weapons and materials are hazardous and any accident could create a disaster. More "broken arrow" incidents involve air transportation than anything else, and it seems that despite the potentially greater vulnerability to ambush, the ground has always been preferred for safety. A 1981 manual for military escort operations, applicable not only to nuclear but also chemical weapons, lays out some of the complexity of the task. "Suits Uncomfortable," "Radiation Lasts and Lasts," quick notes in the margin advise. The manual describes the broad responsibilities of escort teams, ranging from compliance with DOT hazmat regulations to making emergency repairs to contain leakage. It warns of the complexity of such operations near civilians: there may be thousands of civilians nearby, and they might panic. Escort personnel must be trained to be prepared for problems with the public. 
If they are not, their problems may be multiplied---perhaps to a point where satisfactory solutions become almost impossible. During the 1960s, heightened Cold War tensions and increasing concern about terrorism (likely owing to the increasingly prominent anti-war and anti-nuclear movements, sometimes as good as terrorists in the eyes of the military they opposed) led to a complete rethinking of nuclear shipping. Details are scant, but the AEC seems to have increased the number of armed civilian guards and fully ended the use of any non-government couriers for special nuclear material. I can't say for sure, but this seems to be when the use of military escorts was largely abandoned in favor of a larger, better prepared AEC force. Increasing protests against nuclear weapons, which sometimes blocked the route of AEC convoys, may have made posse comitatus and political optics a problem with the use of the military on US roads. In 1975, the Atomic Energy Commission gave way to the Energy Research and Development Administration, predecessor to the modern Department of Energy. The ERDA reorganized huge parts of the nuclear weapons complex to align with a more conventional executive branch agency, and in doing so created the Office of Transportation Safeguards (OTS). OTS had two principal operations: the nuclear train, and nuclear trucks. Trains have been used to transport military ordnance for about as long as they have existed, and in the mid-20th century most major military installations had direct railroad access to their ammunition bunkers. When manufacturing operations began at the Pantex Plant, a train known as the "White Train" for its original color became the primary method of delivery of new weapons. The train was made up of distinctive armored cars surrounded by empty buffer cars (for collision safety) and modified box cars housing the armed escorts. 
Although the "white train" was repainted to make it less obvious, railfans demonstrate that it is hard to keep an unusual train secret, and anti-nuclear activists were often aware of its movements. While the train was considered a very safe and secure option for nuclear transportation (considering the very heavy armored cars and relative safety of established rail routes), it had its downsides. In 1985, a group of demonstrators assembled at Bangor Submarine Base. Among their goals was to bring attention to the Trident II SLBM by blocking the arrival of warheads on the White Train. 19 demonstrators were arrested and charged with conspiracy for their interference with the shipment. The jury found all 19 not guilty. The DoE is a little cagey, in their own histories, about why they stopped using the train. We can't say for sure that this demonstration was the reason, but it must have been a factor. At Bangor, despite the easy rail access, all subsequent shipments were made by truck. Trucks were far more flexible and less obvious, able to operate on unpredictable schedules and vary their routes to evade protests. In the two following years, use of the White Train trailed off and then ended entirely. From 1987, all land transportation of nuclear weapons would be by semi-trailer. This incident seems to have been formative for the OTS, which in classic defense fashion would be renamed the Office of Secure Transportation, or OST. A briefing on the OST, likely made for military and law enforcement partners, describes their tactical doctrine: "Remain Unpredictable." Sub-bullets of this concept include "Chess Match" and "Ruthless Adherence to Deductive Thought Process," the meaning of which we could ponder for hours, but if not a military briefing this is at least a paramilitary powerpoint. Such curious phrases accompanied by baffling concept diagrams (as we find them here) are part of a fine American tradition. 
Beginning somewhere around 1985, the backbone of the OST's security program became obscurity. An early '00s document from an anti-nuclear weapons group notes that there were only two known photographs of OST vehicles. At varying times in their recent history, OST's policy seems to have been to either not notify law enforcement of their presence at all, or to advise state police only that there was a "special operation" that they were not to interfere with. Box trucks marked "Atomic Energy Commission," or trains bearing the reporting symbol "AEC," are long gone. OST convoys are now unmarked and, at least by intention, stealthy. It must be because of this history that the OST is so little-known today. It's not exactly a secret, and there have been occasional waves of newspaper coverage for its entire existence. While the OST remains low-profile relative to, say, the national laboratories, over the last decade the DoE has rather opened up. There are multiple photos, and even a short video, published by the DoE depicting OST vehicles and personnel. The OST has had a hard time attracting and retaining staff, which is perhaps the biggest motivator of this new publicity: almost all of the information the DoE puts out to the public about OST is for recruiting. It is, of course, a long-running comedy that the federal government's efforts at low-profile vehicles so universally amount to large domestic trucks in dark colors with push bumpers, spotlights, and GSA license plates. OST convoys are not hard to recognize, and are conspicuous enough that with some patience you can find numerous examples of people who had no idea what they were but found them odd enough to take photos. The OST, even as an acknowledged office of the NNSA with open job listings, still feels a bit like a conspiracy. During the early 1970s, the AEC charged engineers at Sandia with the design of a new, specialized vehicle for highway transportation of nuclear weapons. 
The result, with a name only the government could love, was the Safe Secure Transporter (SST, which is also often expanded as Safe Secure Trailer). Assembly and maintenance of the SSTs was contracted to Allied Signal, now part of Honeywell. During the 1990s, the SST was replaced by the Safeguards Transporter (SGT), also designed by Sandia. By M&A, the Allied Signal contract had passed to Honeywell Federal Manufacturing & Technology (FM&T), also the operating contractor of the Kansas City Plant where many non-nuclear components of nuclear weapons are made. Honeywell FM&T continues to service the SGTs today, and is building their Sandia-designed third-generation replacement, the Mobile Guardian [3]. Although DoE is no longer stingy about photographs of the SGT, details of its design remain closely held. The SGT consists of a silver semi-trailer, which looks mostly similar to any other van trailer but is a bit shorter than the typical 53' (probably because of its weight). Perhaps the most distinctive feature of the trailers is an underslung equipment enclosure which appears to contain an air conditioner; an unusual way to mount the equipment that I have never seen on another semi-trailer. Various DoE-released documents have given some interior details, although they're a bit confusing on close reading, probably because the trailers have been replaced and refurbished multiple times and things have changed. They are heavily armored, the doors apparently 12" thick. They are equipped with a surprising number of spray nozzles, providing fire suppression, some sort of active denial system (perhaps tear gas), and an expanding foam that can be released to secure the contents in an accident. There is some sort of advanced lock system that prevents the trailer being opened except at the destination, perhaps using age-old bank vault techniques like time delay or maybe drawing from Sandia's work on permissive action links and cryptographic authentication. 
The trailers are pulled by a Peterbilt tractor that looks normal until you pay attention. They are painted various colors, perhaps a lesson learned from the conspicuity of the White Train. They're visibly up-armored, with the windshield replaced by two flat ballistic glass panels, much like you'd see on a cash transport. The sleeper has been modified to fit additional equipment and expand seating capacity to four crew members. Maybe more obvious, they're probably the only semitrailers and tractors that you'll see with GSA "E" prefix license plates (for Department of Energy). SGTs are accompanied on the road by a number of escort vehicles, although I couldn't say exactly how many. From published photographs, we can see that these fall into two types: the dark blue, almost black GMC box trucks with not-so-subtle emergency lights and vans with fiberglass bodies that you might mistake for a Winnebago were they not conspicuously undecorated. I've also seen at least one photo of a larger Topkick box truck associated with the OST, as well as dark-painted conventional cargo vans with rooftop AC. If you will forgive the shilling for my Online Brand, I posted a collection of photos on Mastodon. These were all released by NNSA and were presumably taken by OST or Honeywell staff, you can see that many of them are probably from the same photoshoot. Depending on what part of the country you are in, you may very well be able to pick these vehicles out on the freeway. Hint: they don't go faster than 60, and only operate during the day in good weather. These escort vehicles probably mostly carry additional guards, but one can assume that they also have communications equipment and emergency supplies. Besides security, one of the roles of the OST personnel is prompt emergency response, taking the first steps to contain any kind of radiological release before larger response forces can arrive. 
Documents indicate that OST has partnerships with both DoE facilities (such as national labs) and the Air Force to provide a rapid response capability and offer secure stopping points for OST convoys. There is, perhaps, a reason for the OST's low profile besides security and anti-nuclear controversy: classic government controversy. The OST is sort of infamously not in great shape. Some of the vehicles were originally fabricated in Albuquerque in a motley assortment of leased buildings put together temporarily for the task, others were fabricated at the Kansas City Plant. It's hard to tell which is which, but when refurbishment of the trailers was initiated in the 2000s, it was decided to centralize all vehicle work near the OST's headquarters (also a leased office building) in Albuquerque. At the time, the OST's warehouses and workshops were in poor and declining condition, and deemed too small for the task. OST's communications center (discussed in more detail later) was in former WWII Sandia Base barracks along with NNSA's other Albuquerque offices, and they were in markedly bad shape. To ready Honeywell FM&T for a large refurbishment project and equip OST with more reliable, futureproof facilities, it was proposed to build the Albuquerque Transportation Technology Center (ATTC) near the Sunport. In 2009, the ATTC was canceled. To this day, Honeywell FM&T works out of various industrial park suites it has leased, mostly the same ones as the 1980s. Facilities plans released by the DoE in response to a lawsuit by an activist organization end in FY2014 but tell a sad story of escalating deferred maintenance, buildings in unknown condition because of the lack of resources to inspect them, and an aging vehicle fleet that was becoming less reliable and more expensive to maintain. The OST has 42 trucks and about 700 guards, now styled as Federal Agents. 
They are mostly recruited from military special forces, receive extensive training, and hold limited law enforcement powers and a statutory authorization to use deadly force in the defense of their convoys. Under a little-known and (fortunately) little-used provision of the Atomic Energy Act, they can declare National Security Areas, sort of a limited form of martial law. Despite these expansive powers, a 2015 audit report from the DoE found that OST federal agents were unsustainably overworked (with some averaging nearly 20 hours of overtime per week), were involved in an unacceptable number of drug and alcohol-related incidents for members of the Human Reliability Program, and that a series of oversights and poor management had led to OST leadership taking five months to find out that an OST Federal Agent had threatened to kill two of his coworkers. Recruiting and retention of OST staff are poor, and this all comes in the context of an increasing number of nuclear shipments due to the ongoing weapons modernization program. The OST keeps a low profile perhaps, in part, because it is troubled. Few audit reports, GSA evaluations, or even planning documents have been released to the public since 2015. While this leaves the possibility that the situation has markedly improved, refusal to talk about it doesn't tend to indicate good news. OST is a large organization for its low profile. It operates out of three command centers: Western Command, at Kirtland AFB, Central Command, in Texas at Pantex, and Eastern Command, at Savannah River. The OST headquarters is leased space in an office building near the Sunport, and the communications and control center is in the new NNSA building on Eubank. Agent training takes place primarily on a tenant basis at a National Guard base in Arkansas. OST additionally operates four or five (it was five but I believe one has been decommissioned) communications facilities. 
I have not been successful in locating them exactly, beyond that they are in New Mexico, Idaho, Missouri, South Carolina, and Maryland. Descriptions of these facilities are consistent with HF radio sites. That brings us to the topic of communications, which you know I could go on about at length. I have been interested in OST for a long time, and a while back I wrote about the TacNet Tracker, an interesting experiment in early mobile computing and mesh networking that Sandia developed as a tactical communications system for OST. OST used to use a proprietary, Sandia-developed digital HF radio system for communications between convoys and the control center. That was replaced by ALE, for commonality with military systems, sometime in the 1990s. More recent documents show that OST continues to use HF radio via the five relay stations, but also uses satellite messaging (which is described as Qualcomm, suggesting the off-the-shelf commercial system that is broadly popular in the trucking industry). Things have no doubt continued to advance since that dated briefing, as more recent documents mention real-time video links and extensive digital communications. The OST has assets beyond trucks, although the trucks are the backbone of the system. Three 737s, registered in the NNSA name, make up their most important air assets. Released documents don't rule out the possibility of these aircraft being used to transport nuclear weapons, but suggest that they're primarily for logistical support and personnel transport. Other smaller aircraft are in the OST inventory as well, all operating from a hangar at the Albuquerque Sunport. They fly fairly often, perhaps providing air support to OST convoys, but the NNSA indicates that they also use the OST aircraft for other related NNSA functions like transportation of the Radiological Assistance Program teams. 
It should be said that despite the OST's long-running funding and administrative problems, it has maintained an excellent safety record. Many sources state that there has only been one road accident involving an OST convoy, in which the truck slid off the road during an ice storm in Nebraska. I have actually seen OST documents refer to another incident in Oregon in the early '80s, in which an escort vehicle was forced off the road by a drunk driver and went into the ditch. I think it goes mostly unmentioned since only an escort vehicle was involved and there was no press attention at the time. Otherwise, despite troubling indications of its future sustainability, OST seems to have kept an excellent track record. Finally, if you have fifteen minutes to kill, this video is probably the most extensive source of information on OST operations to have been made public. I'm pretty sure a couple of the historical details it gives are wrong, but what's new. Special credit if you notice the lady that's still wearing her site-specific Q badge in the video. Badges off! Badges! Also, if you're former military and can hold down a Q, a CDL, EMT-B, and firearms qualifications, they're hiring. I hear the overtime is good. But maybe the threats of violence not so much. [1] The early Cold War was a very dynamic time in nuclear history, and plans changed quickly as the AEC and Armed Forces Special Weapons Project developed their first real nuclear strategy. Many of these historic details are thus complicated and I am somewhat simplifying. There were other stockpile sites planned that underwent some construction, and it is not totally clear if they were used before strategies changed once again. Similarly, manufacturing operations moved around quite a bit during this era and are hard to summarize. 
[2] The NNSA, not to be confused with the agency with only one N, is a semi-autonomous division of the Department of Energy with programmatic responsibility for nuclear weapons and nuclear security. Its Administrator, currently former Sandia director Jill Hruby, is an Under Secretary of Energy and answers to the Secretary of Energy (and then to the President). I am personally very fond of Jill Hruby because of memorable comments she made after Trump's first election. They were not exactly complimentary to the new administration and I have a hard time thinking her outspokenness was not a factor in her removal as director of the laboratory. I assume her tenure as NNSA Administrator is about to come to an end. [3] Here's a brief anecdote about how researching these topics can drive you a little mad. Unclassified documents about OST and their vehicles make frequent reference to the "Craddock buildings," where they are maintained and overhauled in Albuquerque. For years, this led me to assume that Craddock was the name of a defense contractor that originally held the contract and Honeywell had acquired. There is, to boot, an office building near OST headquarters in Albuquerque that has a distinctive logo and the name "Craddock" in relief, although it's been painted over to match the rest of the building. Only yesterday did I look into this specifically and discover that Craddock is a Colorado-based commercial real estate firm that developed the industrial park near the airport, where MITS manufactured the Altair 8800 and Allied Signal manufactured the SSTs (if I am not mistaken Honeywell FM&T now uses the old MITS suite!). OST just calls them the Craddock buildings because Craddock is the landlord. Craddock went bankrupt in the '80s, sold off part of its Albuquerque holdings, and mostly withdrew to Colorado, which is probably why they're not a well-known name here today.

2025-01-05 pairs not taken

So we all know about twisted-pair ethernet, huh? I get a little frustrated with a lot of histories of the topic, like the recent neil breen^w^wserial port video, because they often fail to address some obvious questions about the origin of twisted-pair network cabling. Well, I will fail to answer these as well, because the reality is that these answers have proven very difficult to track down. For example, I have discussed before that TIA-568A and B are specified for compatibility with two different multipair wiring conventions, telephone and SYSTIMAX. And yet both standards actually originate within AT&T, so why did AT&T disagree internally on the correspondence of pair numbers to pair colors? Well, it's quite likely that some of these things just don't have satisfactory answers. Maybe the SYSTIMAX people just didn't realize there was an existing convention until they were committed. Maybe they had some specific reason to assign pairs 3 and 4 differently that didn't survive to the modern era. Who knows? At this point, the answer may be no one. There are other oddities to which I can provide a more satisfactory answer. For example, why is it so widely said that twisted-pair ethernet was selected for compatibility with existing telephone cabling, when its most common form (10/100) is in fact not compatible with existing telephone cabling? But before we get there, let's address one other question that the Serial Port video has left with a lot of people. Most office buildings, it is mentioned, had 25-pair wiring installed to each office. Wow, that's a lot of pairs! A telephone line, of course, uses a single pair. UTP ethernet would be designed to use two. Why 25? The answer lies in the key telephone system. The 1A2 key telephone system, and its predecessors and successors, was an extremely common telephone system in the offices of the 1980s. 
Much of the existing communications wiring of the era's commercial buildings had been installed specifically for a 1A2-like system. I have previously explained that key telephone systems, for simplicity of implementation, inverted the architecture we expect from the PBX by connecting many lines to each phone, instead of many phones to each line. This is the first reason: a typical six-button key telephone, with access to five lines plus hold, needed five pairs to deliver those five lines. An eighteen button call director would have, when fully equipped, 17 lines requiring 17 pairs. Already, you will see that we can get to some pretty substantial pair counts. On top of that, though, 1A2 telephones provided features like hold, busy line indication (a line key lighting up to indicate its status), and selective ringing. Later business telephone systems would use a digital connection to control these aspects of the phone, but the 1A2 is completely analog. It uses more pairs. There is an A-lead pair, which controls hold release. There is a lamp pair for each line button, to control the light. There is a pair to control the phone's ringer, and in some installations, another pair to control a buzzer (used to differentiate outside calls from calls on an intercom line). So, a fairly simple desk phone could require eight or more pairs. With the popularity of the 1A2 system, the industry converged on a standard for business telephone wiring: 25-pair cables terminated in Amphenol connectors. A call director could still require two cables, and two Amphenol connectors, and you can imagine how bulky this connection was. 25-pair cable was fairly expensive. These issues all motivated the development of digitally-controlled systems like the Merlin, but as businesses looked to install computer networks, 25-pair cabling was what most of them already had in place. 
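The pair arithmetic above can be made concrete with a rough sketch. This is my own illustrative simplification (one talk pair and one lamp pair per line, plus A-lead, ringer, and optional buzzer pairs; real 1A2 installations varied in how lamps and signaling were wired), and the function name is hypothetical:

```python
# Rough pair accounting for a 1A2-style key telephone.
# Illustrative simplification; real installations varied.
def pairs_needed(lines, buzzer=False):
    talk = lines      # one pair per line appearance
    lamp = lines      # one lamp pair per line button
    a_lead = 1        # hold release
    ringer = 1        # ringer control
    return talk + lamp + a_lead + ringer + (1 if buzzer else 0)

# A five-line desk phone fits in one 25-pair cable; an eighteen-button
# call director with 17 lines does not, hence the two Amphenol connectors.
print(pairs_needed(5))    # 12
print(pairs_needed(17))   # 36
```

Even under this simplified accounting, a fully equipped call director blows past a single 25-pair cable, which matches the bulky two-cable arrangement described above.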
But, there is a key difference between the unshielded twisted-pair cables used for telephones and the unshielded twisted-pair we think of today: the twist rate. We mostly interact with this property through the proxy of "cable categories," which seem to have originated with cable distributors (perhaps Anixter) but were later standardized by TIA-568.

Category 1: up to 1MHz (not included in TIA-568)
Category 2: up to 4MHz (not included in TIA-568)
Category 3: up to 16MHz
Category 4: up to 20MHz (not included in TIA-568)
Category 5: up to 100MHz
Category 6: up to 250MHz
Category 7: up to 600MHz (not included in TIA-568)
Category 8: up to 2GHz

Some of these categories are not, in fact, unshielded twisted-pair (UTP), as shielding is required to achieve the specified bandwidth. The important thing about these cable categories is that they sort of abstract away the physical details of the cable's construction, by basing the definition around a maximum usable bandwidth. That bandwidth is, of course, defined in terms of attenuation and crosstalk parameters that differ between categories. Among the factors that determine the bandwidth capability of a cable is the twist rate, the frequency with which the two wires in a pair switch positions. The idea of twisted pair is very old, dating to the turn of the 20th century and open wire telephone leads that used "transposition brackets" to switch the order of the wires on the telephone pole. More frequent twisting provides protection against crosstalk at higher frequencies, due to the shorter spans of unbalanced wire. As carrier systems used higher frequencies on open wire telephone leads, transposition brackets became more frequent. Telephone cable is much the same, with the frequency of twists referred to as the pitch. The pitch is not actually specified by category standards; cables use whatever pitch is sufficient to meet the performance requirements. 
In practice, it's also typical to use slightly different pitches for different pairs in a cable, to avoid different pairs "interlocking" with each other and inviting other forms of EM coupling. Inside telephone wiring in residential buildings is often completely unrated and may be more or less equivalent to category 1, which is a somewhat informal standard sufficient only for analog voice applications. Of course, commercial buildings were also using their twisted-pair cabling only for analog voice, but the higher number of pairs in a cable and the nature of key systems made crosstalk a more noticeable problem. As a result, category 3 was the most common cable type in 1A2-type installations of the 1980s. This is why category 3 was the first to make it into the standard, and it's why category 3 was the standard physical medium for 10BASE-T. In common parlance, wiring originally installed for voice applications was referred to as "voice grade." This paralleled terminology used within AT&T for services like leased lines. In inside wiring applications, "voice grade" was mostly synonymous with category 3. Indeed, StarLAN, the main predecessor to 10BASE-T, required a bandwidth of 12MHz... beyond the reliable capabilities of category 1 and 2, but perfectly suited for category 3. This brings us to our second part of the twisted-pair story that is frequently elided in histories: the transition from category 3 cabling to category 5 cabling, as is required by the 100BASE-TX "10/100" ethernet. On the one hand, the explanation is simple. 100BASE-TX requires a 100MHz cable, which means it requires category 5. Case closed. On the other hand, remember the whole entire thing about twisted-pair being intended to reuse existing telephone cable? Yes, the move from 10BASE-T to 100BASE-TX, and from category 3 to category 5, was not an entirely straightforward one. 
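The category system really amounts to a simple lookup: given the bandwidth a medium requires, find the lowest category rated for it. A minimal sketch (the ceilings come from the category list above; the names are mine):

```python
# Rated bandwidth ceilings in MHz, per the category list above.
CATEGORY_MHZ = {1: 1, 2: 4, 3: 16, 4: 20, 5: 100, 6: 250, 7: 600, 8: 2000}

def min_category(required_mhz):
    """Lowest cable category whose rated bandwidth covers the requirement."""
    return min(c for c, mhz in CATEGORY_MHZ.items() if mhz >= required_mhz)

print(min_category(12))    # 3 -- StarLAN's 12MHz fits "voice grade" category 3
print(min_category(100))   # 5 -- 100BASE-TX needs category 5
```

This is exactly the pattern the rest of the story follows: StarLAN and 10BASE-T landed on category 3 because the installed base met their bandwidth needs, while 100BASE-TX's 100MHz requirement forced the jump to category 5.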
The desire to reuse existing telephone cabling was still very much alive, and several divergent versions of twisted-pair ethernet were created for this purpose. Ethernet comes with these kinds of odd old conventions for describing physical carriers. The first part is the speed, the second part is the bandwidth/position (mostly obsolete, with BASE for baseband being the only surviving example), and the next part, often after a hyphen, identifies the medium. This medium code was poorly standardized and can be a little confusing. Most probably know that 10BASE5 and 10BASE2 identify 10Mbps Ethernet over two different types of coaxial cable. Perhaps fewer know that StarLAN, over twisted pair, was initially described as 1BASE5 (it was, originally, 1Mbps). The reason for the initial "5" code for twisted pair seems to be lost to history; by the time Ethernet over twisted pair was accepted as part of the IEEE 802.3 standard, the medium designator had changed to "-T" for Twisted Pair: 10BASE-T. And yet, 100Mbps "Fast Ethernet," while often referred to as 100BASE-T, is more properly 100BASE-TX. Why? To differentiate it from the competing standard 100BASE-T4, which was 100Mbps Ethernet over Category 3 twisted pair cable. There were substantial efforts to deploy Fast Ethernet without requiring the installation of new cable in existing buildings, and 100BASE-TX competed directly with both 100BASE-T4 and the somewhat eccentrically designated 100BaseVG. In 1995, all three of these media were set up for a three-way faceoff [1]. For our first contender, let's consider 100BASE-T4, which I'll call "T4" for short. The T4 media designator means Twisted pair, 4 pairs. Recall that, for various reasons, 10BASE-T only used two pairs (one each direction). Doubling the number of required pairs might seem like a bit of a demand, but 10BASE-T was already routinely used with four-pair cable and 8P8C connectors, and years later Gigabit 1000BASE-T would do the same. 
Using these four pairs, T4 could operate over category 3 cable at up to 100 meters. T4 used the pairs in an unusual way, directly extending the 10BASE-T pattern while compromising to achieve the high data rate over lower bandwidth cable. T4 had one pair each direction, and two pairs that dynamically changed directions as required. Yes, this means that 100BASE-T4 was only half duplex, a limitation that was standard for coaxial Ethernet but not typical for twisted pair. T4 was mostly a Broadcom project, who offered chipsets for the standard and brought 3Com on board as the principal (but not only) vendor of network hubs. The other category 3 contender, actually a slightly older one, was Hewlett-Packard's 100BaseVG. The "VG" media designator stood for "voice grade," indicating suitability for category 3 cables. Like T4, VG required four pairs. VG also uses those pairs in an unusual way, but a more interesting one: VG switches between a full-duplex, symmetric "control mode" and a half-duplex "transmission mode" in which all four pairs are used in one direction. Coordinating these transitions required a more complex physical layer protocol, and besides, HP took the opportunity to take on the problem of collisions. In 10BASE-T networks, the use of hubs meant that multiple hosts were in a collision domain, much like with coaxial Ethernet. As network demands increased, collisions became more frequent and the need to retransmit after collisions could appreciably reduce the effective capacity of the network. VG solved both problems at once by introducing, to Ethernet, one of the other great ideas of the local area networking industry: token-passing. The 100BaseVG physical layer incorporated a token-passing scheme in which the hub assigned tokens to nodes, both setting the network operation mode and preventing collisions. 
The standard even added a simple quality of service scheme to the tokens, called demand priority, in which nodes could indicate a priority level when requesting to transmit. The token-passing system made the effective throughput of heavily loaded VG networks appreciably higher than that of other Fast Ethernet networks. Demand priority promised to make VG more suitable for real-time media applications, in which Ethernet had traditionally struggled due to its nondeterministic capacity allocation.

Given that you have probably never heard of either of these standards, you are probably suspecting that they did not achieve widespread success. Indeed, the era of competition was quite short, and very few products were ever offered in either T4 or VG. Considering the enormous advantage of using existing Category 3 cabling, that's kind of a surprise, and it undermines the whole story that twisted pair ethernet succeeded because it eliminated the need to install new cabling. Of course, that doesn't make the story wrong, exactly. Things had changed: 10BASE-T was standardized in 1990, and the three 100Mbps media were adopted in 1994-1995. Years had passed, and the market had changed.

Besides, despite their advantages, T4 and VG were not without downsides. To start, both were half-duplex. I don't think this was actually that big of a limitation at the time; half-duplex 100Mbps was still a huge improvement in real performance over full-duplex 10Mbps in all but the most pathological cases. A period document from a network equipment vendor notes this limitation of T4 but then describes full-duplex as "unneeded for workstations." That might seem like an odd claim today, but I think it was a pretty fair one in the mid-'90s. A bigger problem was that both T4 and VG were meaningfully more complicated than TX. T4 used a complex and expensive DSP chip to recover the complex symbols from the lower-grade cable. VG's token passing scheme required a more elaborate physical layer protocol implementation.
Both standards were meaningfully more expensive, both for adapters and network appliances. The cost benefit of using existing cabling was thus a little fuzzier: buyers would have to trade off the cost of new cabling against the savings of using less complex, less expensive TX equipment. For similar reasons, TX is also often said to have been more reliable than T4 or VG, although it's hard to tell if that's a bona fide advantage of TX or just a result of TX's much more widespread adoption. TX transceivers benefited from generations of improvement that T4 and VG transceivers never would.

Let's think a bit about that tradeoff between new cable and more expensive equipment. T4 and VG both operated on category 3, but they required four pairs. In buildings that had adopted 10BASE-T on existing telephone wiring, they would most likely have only punched down two pairs (out of a larger cable) to their network jacks and equipment. That meant that an upgrade from 10BASE-T to 100BASE-T4, for example, still involved considerable effort by a telecom or network technician. There would often be enough spare pairs to add two more to each network device, but not always. In practice, upgrading an office building would still require the occasional new cable pull. T4 and VG's poor reputation for reliability, or more precisely their poor tolerance of less-than-perfect installations, meant that even existing connections might need time-consuming troubleshooting to bring them up to full category 3 spec (while TX, by spec, requires the full 100MHz of category 5, it is fairly tolerant of underperforming cabling).

There's another consideration as well: the full-duplex nature of TX made it a lot more appealing in the equipment room and data center environment, and for trunk connections (between hubs or switches). These network connections see much higher utilization, and often more symmetric utilization as well, so a full-duplex option offers substantially more usable capacity than a half-duplex one.
Historically, plenty of network architectures have included the use of different media for "end-user" vs trunk connections. Virtually all consumer and SMB internet service providers do so today. It has never really caught on in the LAN environment, where a smaller staff of network technicians are expected to maintain both sides.

Put yourself in the shoes of an IT manager at a midsized business. One option is T4 or VG, with more expensive equipment and some refitting of the cable plant, and probably with TX used in some cases anyway. The other option is TX, with less expensive equipment and more refitting of the cable plant. You can see that the decision is less than obvious, and you could easily be swayed in the all-TX direction, especially considering the benefit of more standardization and fewer architectural and software differences from 10BASE-T.

That seems to be what happened. T4 and VG found little adoption, and as inertia built, the cost and vendor diversity advantage of TX only got bigger. Besides, a widespread industry shift from shared-media networks (with hubs) to switched networks (with, well, switches) followed pretty closely behind 100BASE-TX. A lot of users went straight from 10BASE-T to switched 100BASE-TX, which almost totally eliminated the benefits of VG's token-passing scheme and made the cost advantage of TX even bigger.

And that's the story, right? No, hold on, we need to talk about one other effort to improve upon 10BASE-T. Not because it's important, or influential, or anything, but because it's very weird. We need to talk about IsoEthernet and IsoNetworks.

As I noted, Ethernet is poorly suited to real-time media applications. That was true in 1990, and it's still true today, but network connections have gotten so fast that the sheer amount of spare capacity available mitigates the problem. Still, there's a fundamental limitation: real-time media, like video and audio, requires a consistent amount of delivered bandwidth for the duration of playback.
The Ethernet/IP network stack, for a couple of different reasons, provides only opportunistic or nondeterministic bandwidth to any given application. As a result, achieving smooth playback requires some combination of overprovisioning of the network and buffering of the media. This buffering introduces latency, which is particularly intolerable in real-time applications. You might think this problem has gone away entirely with today's very fast networks, but you can still see Twitch streamers struggling with just how bad the internet is at real-time media.

An alternative approach comes from the telephone industry, which has always had real-time media as its primary concern. The family of digital network technologies developed in the telephone industry, SONET, ISDN, what have you, provide provisioned bandwidth via virtual circuit switching. If you are going to make a telephone call at 64Kbps, the network assigns an end-to-end, deterministic 64Kbps connection. Because this bandwidth allocation is so consistent and reliable, very little or no buffering is required, allowing for much lower latency.

There are ways to address this problem on packet networks, but they're far from perfect. The IP-based voice networks used by modern cellular carriers make extensive use of quality of service protocols but still fail to deliver the latency of the traditional TDM telephone network. Even with QoS, VoIP struggles to reach the reliability of ISDN. For practical reasons, consumers are rarely able to take any advantage of QoS for ubiquitous over-the-top media applications like streaming video.

What if things were different? What if, instead of networks, we had IsoNetworks? IsoEthernet proposed a new type of hybrid network that was capable of both nondeterministic packet switching and deterministic (or, in telephone industry parlance, isochronous) virtual circuit switching. They took 10BASE-T and ISDN and ziptied them together, and then they put Iso in front of the name of everything.
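The latency argument can be made concrete with some back-of-envelope numbers. These figures are illustrative, not from the article: the point is only that a jitter buffer sized to absorb worst-case delay variation is pure added latency, while a provisioned circuit needs essentially none.

```python
# Back-of-envelope sketch (illustrative numbers, not from any standard):
# a provisioned 64 kbps circuit delivers voice frames at a fixed cadence,
# so only one frame of buffering is needed. A packet network with variable
# delay needs a jitter buffer sized to the worst expected delay variation.
FRAME_MS = 20  # a typical voice codec frame duration

def circuit_latency_ms(propagation_ms):
    # Deterministic delivery: propagation plus one frame of packetization.
    return propagation_ms + FRAME_MS

def packet_latency_ms(propagation_ms, jitter_ms):
    # The jitter buffer must absorb worst-case variation to avoid underruns.
    return propagation_ms + FRAME_MS + jitter_ms

print(circuit_latency_ms(5))               # 25 ms end to end
print(packet_latency_ms(5, jitter_ms=60))  # 85 ms with a 60 ms jitter buffer
```

The gap grows with network congestion, since jitter on a shared packet network is exactly what gets worse under load.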
Here's how it works: IsoEthernet takes two pairs of category 3 cabling and runs 16.144 Mbps TDM frames over them at full duplex. This modest 60% increase in overall speed allows for a 10Mbps channel (called a P-channel by IsoEthernet) to be used to carry Ethernet frames, and the remaining 6.144Mbps to be used for 96 64-Kbps B-channels according to the traditional ISDN T2 scheme. An IsoEthernet host (sadly not called an IsoHost, at least not in any documents I've seen) can use both channels simultaneously to communicate with an IsoHub. An IsoHub functions as a standard Ethernet hub for the P-channel, but directs the B-channels to a TDM switching system like a PABX.

The mention of a PABX, of course, illustrates the most likely application: telephone calls over the computer. I know that doesn't sound like that much of a sell: most people just had a computer on their desk, and a phone on their desk, and despite decades of effort by the Unified Communications industry, few have felt a particular need to marry the two devices. But the 1990s saw the birth of telepresence: video conferencing. We're doing Zoom, now!

Videoconferencing over IP over 10Mbps Ethernet with multiple hosts in a collision domain was a very, very ugly thing. Media streaming very quickly caused almost worst-case collision behavior, dropping the real capacity of the medium well below 10Mbps and making even low resolution video infeasible. Telephone protocols were far more suited to videoconferencing, and so naturally, most early videoconferencing equipment operated over ISDN. I had a Tandberg videoconferencing system, for example, which dated to the mid '00s. It still provided four jacks on the back suitable for 4x T1 connections or 4 ISDN PRIs (basically just a software difference), providing a total of around 6Mbps of provisioned bandwidth for silky smooth real-time video. These were widely used in academia and large corporations.
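As a quick sanity check, the IsoEthernet framing arithmetic described above works out exactly:

```python
# Sanity-checking the IsoEthernet channel arithmetic from the text.
P_CHANNEL_MBPS = 10.0  # standard Ethernet payload channel
B_CHANNELS = 96        # ISDN-style channels, per the traditional T2 scheme
B_CHANNEL_KBPS = 64

isdn_mbps = B_CHANNELS * B_CHANNEL_KBPS / 1000  # 6.144 Mbps of B-channels
total_mbps = P_CHANNEL_MBPS + isdn_mbps         # 16.144 Mbps TDM frame rate
increase = (total_mbps - P_CHANNEL_MBPS) / P_CHANNEL_MBPS

print(f"{isdn_mbps} Mbps of B-channels, {total_mbps} Mbps total")
print(f"{increase:.0%} over plain 10BASE-T")  # ~61%, the "modest 60% increase"
```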
If you ever worked somewhere with a Tandberg or Cisco (Cisco bought Tandberg) curved-monitor-wall system, it was most likely running over ISDN using H.320 video and T.120 application sharing ("application sharing" referred to things like virtual whiteboards). Early computer-based videoconferencing systems like Microsoft NetMeeting were designed to use existing computer networks. They used the same protocols, but over IP, with a resulting loss in reliability and increase in latency [2].

With IsoEthernet, there was no need for this compromise. You could use IP for your non-realtime computer applications, but your softphone and videoconferencing client could use ISDN. What a beautiful vision!

As you can imagine, it went nowhere. Despite IEEE acceptance as 802.9 and promotion efforts by developer National Semiconductor, IsoEthernet never got even as far as 100BASE-T4 or 100BaseVG. I can't tell you for sure that it ever had a single customer outside of evaluation environments.

[1] A similar 100Mbps-over-category 3 standard, called 100BASE-T2, also belongs to this series. I am omitting it from this article because it was standardized in 1998, after industry consolidation on 100BASE-TX, so it wasn't really part of the original competition.

[2] The more prominent WebEx has a stranger history, which will probably fill a whole article here one day---but it did also use H.320.
