For more than 50 years, Deep Space Station 43 has been an invaluable tool for space probes as they explore our solar system and push into the beyond. The DSS-43 radio antenna, located at the Canberra Deep Space Communication Complex, near Canberra, Australia, keeps open the line of communication between humans and probes during NASA missions. Today more than 40 percent of all data retrieved by celestial explorers, including the Voyagers, New Horizons, and the Mars Curiosity rover, comes through DSS-43. “As Australia’s largest antenna, DSS-43 has provided two-way communication with dozens of robotic spacecraft,” IEEE President-Elect Kathleen Kramer said during a ceremony where the antenna was recognized as an IEEE Milestone. It has supported missions, Kramer noted, “from the Apollo program and NASA’s Mars exploration rovers such as Spirit and Opportunity to the Voyagers’ grand tour of the solar system.” “In fact,” she said, “it is the only antenna remaining on Earth capable of communicating...
a year ago

More from IEEE Spectrum

The Data Reveals Top Patent Portfolios

Eight years is a long time in the world of patents. When we last published what we then called the Patent Power Scorecard, in 2017, it was a different technological and social landscape—Google had just filed a patent application on the transformer architecture, a momentous advance that spawned the generative AI revolution. China was just beginning to produce quality, affordable electric vehicles at scale. And the COVID pandemic wasn’t on anyone’s dance card. Eight years is also a long time in the world of magazines, where we regularly play around with formats for articles and infographics. We now have more readers online than we do in print, so our art team is leveraging advances in interactive design software to make complex datasets grokkable at a glance, whether you’re on your phone or flipping through the pages of the magazine. The scorecard’s return in this issue follows the return last month of The Data, which ran as our back page for several years; it’s curated by a different editor every month and edited by Editorial Director for Content Development Glenn Zorpette. As we set out to recast the scorecard for this decade, we sought to strike the right balance between comprehensiveness and clarity, especially on a mobile-phone screen. As our Digital Product Designer Erik Vrielink, Assistant Editor Gwendolyn Rak, and Community Manager Kohava Mendelsohn explained to me, they wanted something that would be eye-catching while avoiding information overload. The solution they arrived at—a dynamic sunburst visualization—lets readers grasp the essential takeaways at a glance in print, while the digital version allows readers to dive as deep as they want into the data. Working with sci-tech-focused data-mining company 1790 Analytics, which we partnered with on the original Patent Power Scorecard, the team prioritized three key metrics or characteristics: patent Pipeline Power (which goes beyond mere quantity to assess quality and impact), number of patents, and the country where companies are based. This last characteristic has become increasingly significant as geopolitical tensions reshape the global technology landscape. As 1790 Analytics cofounders Anthony Breitzman and Patrick Thomas note, the next few years could be particularly interesting as organizations adjust their patenting strategies in response to changing market access. Some trends leap out immediately. In consumer electronics, Apple dominates Pipeline Power despite having a patent portfolio one-third the size of Samsung’s—a testament to the Cupertino company’s focus on high-impact innovations. The aerospace sector has seen dramatic consolidation, with RTX (formerly Raytheon Technologies) now encompassing multiple subsidiaries that appear separately on our scorecard. And in the university rankings, Harvard has seized the top spot from traditional tech powerhouses like MIT and Stanford, driven by patents that are more often cited as prior art in other recent patents. And then there are the subtle shifts that become apparent only when you dig deeper into the data. The rise of SEL (Semiconductor Energy Laboratory) over TSMC (Taiwan Semiconductor Manufacturing Co.) in semiconductor design, despite SEL having far fewer patents, suggests again that true innovation isn’t just about filing patents—it’s about creating technologies that others build upon. Looking ahead, the real test will be how these patent portfolios translate into actual products and services.
Patents are promises of innovation; the scorecard helps us see what companies are making those promises and the R&D investments to realize them. As we enter an era when technological leadership increasingly determines economic and strategic power, understanding these patterns is more crucial than ever.
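For a rough sense of how a sunburst like the one described above can be assembled, here is a minimal sketch using Plotly Express. The column names and the handful of sample rows are hypothetical placeholders for illustration, not the scorecard's actual data or methodology.

```python
# A minimal sketch of a patent-scorecard-style sunburst, using hypothetical data.
# Hierarchy: country -> company; wedge size = patent count; color = Pipeline Power score.
import pandas as pd
import plotly.express as px

df = pd.DataFrame(
    {
        "country": ["United States", "United States", "South Korea", "Taiwan"],
        "company": ["Apple", "RTX", "Samsung", "TSMC"],
        "patents": [2500, 4100, 7800, 3900],          # placeholder counts
        "pipeline_power": [95.0, 60.0, 70.0, 55.0],   # placeholder scores
    }
)

fig = px.sunburst(
    df,
    path=["country", "company"],       # inner ring = country, outer ring = company
    values="patents",                  # wedge size
    color="pipeline_power",            # shading encodes the quality/impact metric
    color_continuous_scale="Viridis",
)
fig.show()
```

The path argument defines the ring hierarchy (country inside, company outside), while wedge size and color carry the other two metrics, which is roughly the three-way encoding the editors describe.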

3 days ago 3 votes
This Little Mars Rover Stayed Home

I eagerly followed along as Sojourner sent back photos of the Martian surface during the summer of 1997. I was not alone. The servers at NASA’s Jet Propulsion Lab slowed to a crawl when they got more than 47 million hits (a record number!) from people attempting to download those early images of the Red Planet. To be fair, it was the late 1990s, the Internet was still young, and most people were using dial-up modems. By the end of the 83-day mission, Sojourner had sent back 550 photos and performed more than 15 chemical analyses of Martian rocks and soil. Sojourner, of course, remains on Mars. Pictured here is Marie Curie, its twin. Functionally identical, either one of the rovers could have made the voyage to Mars, but one of them was bound to become the famous face of the mission, while the other was destined to be left behind in obscurity. Did I write this piece because I feel a little bad for Marie Curie? Maybe. But it also gave me a chance to revisit this pioneering Mars mission, which established that robots could effectively explore the surface of planets and captivate the public imagination. Sojourner’s sojourn on Mars On 4 July 1997, the Mars Pathfinder parachuted through the Martian atmosphere and bounced about 15 times on glorified airbags before finally coming to a rest. The lander, renamed the Carl Sagan Memorial Station, carried precious cargo stowed inside. The next day, after the airbags retracted, the solar-powered Sojourner eased its way down the ramp, the first human-made vehicle to roll around on the surface of another planet. (It wasn’t the first rover to explore an extraterrestrial body, though. The Soviet Lunokhod rovers conducted two successful missions on the moon in 1970 and 1973. The Soviets had also landed a rover on Mars back in 1971, but communication was lost before the PROP-M ever deployed.) This giant sandbox at JPL provided Marie Curie with an approximation of Martian terrain. Mike Nelson/AFP/Getty Images Sojourner was equipped with three low-resolution cameras (two on the front for black-and-white images and a color camera on the rear), a laser hazard–avoidance system, an alpha-proton X-ray spectrometer, experiments for testing wheel abrasion and material adherence, and several accelerometers. The robot also demonstrated the value of the six-wheeled “rocker-bogie” suspension system that became NASA’s go-to design for all later Mars rovers. Sojourner never roamed more than about 12 meters from the lander due to the limited range of its radio. Pathfinder had landed in Ares Vallis, a presumed ancient floodplain chosen because of the wide variety of rocks present. Scientists hoped to confirm the past existence of water on the surface of Mars. Sojourner did discover rounded pebbles that suggested running water, and later missions confirmed it. A highlight of Sojourner’s 83-day mission on Mars was its encounter with a rock nicknamed Barnacle Bill [to the rover’s left]. JPL/NASA Sojourner rolled forward 36 centimeters and encountered a rock, dubbed Barnacle Bill due to its rough surface. The rover spent about 10 hours analyzing the rock, using its spectrometer to determine the elemental composition. Over the next few weeks, while the lander collected atmospheric information and took photos, the rover studied rocks in detail and tested the Martian soil. Marie Curie’s sojourn…in a JPL sandbox Meanwhile, back on Earth, engineers at JPL used Marie Curie to mimic Sojourner’s movements in a Mars-like setting. 
During the original design and testing of the rovers, the team had set up giant sandboxes, each holding thousands of kilograms of playground sand, in the Space Flight Operations Facility at JPL. They exhaustively practiced the remote operation of Sojourner, including an 11-minute delay in communications between Mars and Earth. (The actual delay can vary from 7 to 20 minutes.) Even after Sojourner landed, Marie Curie continued to help them strategize. Initially, Sojourner was remotely operated from Earth, which was tricky given the lengthy communication delay. Mike Nelson/AFP/Getty Images Sojourner was maneuvered by an Earth-based operator wearing 3D goggles and using a funky input device called a Spaceball 2003. Images pieced together from both the lander and the rover guided the operator. It was like a very, very slow video game—the rover sometimes moved only a few centimeters a day. NASA then turned on Sojourner’s hazard-avoidance system, which allowed the rover some autonomy to explore its world. A human would suggest a path for that day’s exploration, and then the rover had to autonomously avoid any obstacles in its way, such as a big rock, a cliff, or a steep slope. NASA had expected Sojourner to operate for about a week. But the little rover that could kept chugging along for 83 Martian days before NASA finally lost contact, on 7 October 1997. The lander had conked out on 27 September. In all, the mission collected 1.2 gigabytes of data (which at the time was a lot) and sent back 10,000 images of the planet’s surface. NASA kept Marie Curie with the hopes of sending it on another mission to Mars. For a while, it was slated to be part of the Mars 2001 set of missions, but that didn’t happen. In 2015, JPL transferred the rover to the Smithsonian’s National Air and Space Museum. When NASA Embraced Faster, Better, Cheaper The Pathfinder mission was the second one in NASA administrator Daniel S. Goldin’s Discovery Program, which embodied his “faster, better, cheaper” philosophy of making NASA more nimble and efficient. (The first Discovery mission was to the asteroid Eros.) In the financial climate of the early 1990s, the space agency couldn’t risk a billion-dollar loss if a major mission failed. Goldin opted for smaller projects; the Pathfinder mission’s overall budget, including flight and operations, was capped at US $300 million. RELATED: How NASA Built Its Mars Rovers In his 2014 book Curiosity: An Inside Look at the Mars Rover Mission and the People Who Made It Happen (Prometheus), science writer Rod Pyle interviews Rob Manning, chief engineer for the Pathfinder mission and subsequent Mars rovers. Manning recalled that one of the best things about the mission was its relatively minimal requirements. The team was responsible for landing on Mars, delivering the rover, and transmitting images—technically challenging, to be sure, but beyond that the team had no constraints. Sojourner was succeeded by the rovers Spirit, Opportunity, and Curiosity. Shown here are four mission spares, including Marie Curie [foreground]. JPL-Caltech/NASA The heaters that kept Sojourner’s electronics warm enough to operate were leftover spares from the Galileo mission to Jupiter, so they were “free.” Not only was the Pathfinder mission successful, but it captured the hearts of Americans and reinvigorated an interest in exploring Mars. In the process, it set the foundation for the future missions that allowed the rovers Spirit, Opportunity, and Curiosity (which, incredibly, is still operating nearly 13 years after it landed) to explore even more of the Red Planet. 
How the rovers Sojourner and Marie Curie got their names To name its first Mars rovers, NASA launched a student contest in March 1994, with the specific guidance of choosing a “heroine.” Entry essays were judged on their quality and creativity, the appropriateness of the name for a rover, and the student’s knowledge of the woman to be honored as well as the mission’s goals. Students from all over the world entered. The winning essay honored Sojourner Truth, while 18-year-old Deepti Rohatgi of Rockville, Md., came in second for her essay on Marie Curie. Truth was a Black woman born into slavery at the end of the 18th century. She escaped with her infant daughter and two years later won freedom for her son through legal action. She became a vocal advocate for civil rights, women’s rights, and alcohol temperance. Curie was a Polish-French physicist and chemist famous for her studies of radioactivity, a term she coined. She was the first woman to win a Nobel Prize, as well as the first person to win a second Nobel. NASA has continued to honor pioneering women, such as Nancy Grace Roman, the space agency’s first chief of astronomy. In May 2020, NASA announced it would name the Wide Field Infrared Survey Telescope after Roman; the space telescope is set to launch as early as October 2026, although the Trump administration has repeatedly said it wants to cancel the project. NASA revised its naming policy in December 2022 after allegations came to light that James Webb, for whom the James Webb Space Telescope is named, had fired LGBTQ+ employees at NASA and, before that, the State Department. A NASA investigation couldn’t substantiate the allegations, and so the telescope retained Webb’s name. But the bar is now much higher for NASA projects to memorialize anyone, deserving or otherwise. (The agency did allow the hopping lunar robot IM-2 Micro Nova Hopper, built by Intuitive Machines, to be named for computer-software pioneer Grace Hopper.) Marie Curie and Sojourner will remain part of a rarefied clique. Sojourner, inducted into the Robot Hall of Fame in 2003, will always be the celebrity of the pair. And Marie Curie will always remain on the sidelines. But think about it this way: Marie Curie is now on exhibit at one of the most popular museums in the world, where millions of visitors can see the rover up close. That’s not too shabby a legacy either. Part of a continuing series looking at historical artifacts that embrace the boundless potential of technology. An abridged version of this article appears in the June 2025 print issue. References Curator Matthew Shindell of the National Air and Space Museum first suggested I feature Marie Curie. I found additional information from the museum’s collections website, an article by David Kindy in Smithsonian magazine, and the book After Sputnik: 50 Years of the Space Age (Smithsonian Books/HarperCollins, 2007) by Smithsonian curator Martin Collins. NASA has numerous resources documenting the Mars Pathfinder mission, such as the mission website, fact sheet, and many lovely photos (including some of Barnacle Bill and a composite of Marie Curie during a prelaunch test). Curiosity: An Inside Look at the Mars Rover Mission and the People Who Made It Happen (Prometheus, 2014) by Rod Pyle and Roving Mars: Spirit, Opportunity, and the Exploration of the Red Planet (Hyperion, 2005) by planetary scientist Steve Squyres are both about later Mars missions and their rovers, but they include foundational information about Sojourner.

4 days ago 5 votes
32 Bits That Changed Microprocessor Design

In the late 1970s, a time when 8-bit processors were state of the art and CMOS was the underdog of semiconductor technology, engineers at AT&T’s Bell Labs took a bold leap into the future. They made a high-stakes bet to outpace IBM, Intel, and other competitors in chip performance by combining cutting-edge 3.5-micron CMOS fabrication with a novel 32-bit processor architecture. Although their creation—the Bellmac-32 microprocessor—never achieved the commercial fame of earlier ones such as Intel’s 4004 (released in 1971), its influence has proven far more enduring. Virtually every chip in smartphones, laptops, and tablets today relies on the complementary metal-oxide-semiconductor principles that the Bellmac-32 pioneered. As the 1980s approached, AT&T was grappling with transformation. For decades, the telecom giant—nicknamed “Ma Bell”—had dominated American voice communications, with its Western Electric subsidiary manufacturing nearly every telephone found in U.S. homes and offices. The U.S. federal government was pressing for antitrust-driven divestiture, but AT&T was granted an opening to expand into computing. With computing firms already entrenched in the market, AT&T couldn’t afford to play catch-up; its strategy was to leap ahead, and the Bellmac-32 was its springboard. The Bellmac-32 chip series has now been honored with an IEEE Milestone. Dedication ceremonies are slated to be held this year at the Nokia Bell Labs’ campus in Murray Hill, N.J., and at the Computer History Museum in Mountain View, Calif. A chip like no other Rather than emulate the industry standard of 8-bit chips, AT&T executives challenged their Bell Labs engineers to deliver something revolutionary: the first commercially viable microprocessor capable of moving 32 bits in one clock cycle. It would require not just a new chip but also an entirely novel architecture—one that could handle telecommunications switching and serve as the backbone for future computing systems. “We weren’t just building a faster chip,” says Michael Condry, who led the architecture team at Bell Labs’ Holmdel facility in New Jersey. “We were trying to design something that could carry both voice and computation into the future.” This configuration of the Bellmac-32 microprocessor had an integrated memory management unit optimized for Unix-like operating systems. AT&T Archives and History Center At the time, CMOS technology was seen as a promising—but risky—alternative to the NMOS and PMOS designs then in use. NMOS chips, which relied solely on N-type transistors, were fast but power-hungry. PMOS chips, which depend on the movement of positively charged holes, were too slow. CMOS, with its hybrid design, offered the potential for both speed and energy savings. The benefits were so compelling that the industry soon saw that doubling the number of transistors (an NMOS and a PMOS for each gate) was worth the tradeoff. As transistor sizes shrank along with the rapid advancement of semiconductor technology described by Moore’s Law, the cost of doubling up the transistor density soon became manageable and eventually became negligible. But when Bell Labs took its high-stakes gamble, large-scale CMOS fabrication was still unproven and looked to be comparatively costly. That didn’t deter Bell Labs. By tapping expertise from its campuses in Holmdel and Murray Hill as well as in Naperville, Ill., the company assembled a dream team of semiconductor engineers. 
The team included Condry; Sung-Mo “Steve” Kang, a rising star in chip design; Victor Huang, another microprocessor chip designer, and dozens of AT&T Bell Labs employees. They set out in 1978 to master a new CMOS process and create a 32-bit microprocessor from scratch. Designing the architecture The architecture group led by Condry, an IEEE Life Fellow who would later become Intel’s CTO, focused on building a system that would natively support the Unix operating system and the C programming language. Both were in their infancy but destined for dominance. To cope with the era’s memory limitations—kilobytes were precious—they introduced a complex instruction set that required fewer steps to carry out and could be executed in a single clock cycle. The engineers also built the chip to support the VersaModule Eurocard (VME) parallel bus, enabling distributed computing so several nodes could handle data processing in parallel. Making the chip VME-enabled also allowed it to be used for real-time control. The group wrote its own version of Unix, with real-time capabilities to ensure that the new chip design was compatible with industrial automation and similar applications. The Bell Labs engineers also invented domino logic, which ramped up processing speed by reducing delays in complex logic gates. Additional testing and verification techniques were developed and introduced via the Bellmac-32 Module, a sophisticated multi-chipset verification and testing project led by Huang that allowed the complex chip fabrication to have zero or near-zero errors. This was the first of its kind in VLSI testing. The Bell Labs engineers’ systematic plan for double- and triple-checking their colleagues’ work ultimately made the total design of the multiple chipset family work together seamlessly as a complete microcomputer system. Then came the hardest part: actually building the chip. Floor maps and colored pencils “The technology for layout, testing, and high-yield fabrication just wasn’t there,” recalls Kang, an IEEE Life Fellow who later became president of the Korea Advanced Institute of Science and Technology (KAIST) in Daejeon, South Korea. With no CAD tools available for full-chip verification, Kang says, the team resorted to printing oversize Calcomp plots. The schematics showed how the transistors, circuit lines, and interconnects should be arranged inside the chip to provide the desired outputs. The team assembled them on the floor with adhesive tape to create a massive square map more than 6 meters on a side. Kang and his colleagues traced every circuit by hand with colored pencils, searching for breaks, overlaps, or mishandled interconnects. Getting it made Once the physical design was locked in, the team faced another obstacle: manufacturing. The chips were fabricated at a Western Electric facility in Allentown, Pa., but Kang recalls that the yield rates (the percentage of chips on a silicon wafer that meet performance and quality standards) were dismal. To address that, Kang and his colleagues drove from New Jersey to the plant each day, rolled up their sleeves, and did whatever it took, including sweeping floors and calibrating test equipment, to build camaraderie and instill confidence that the most complicated product the plant workers had ever attempted to produce could indeed be made there. “We weren’t just building a faster chip. 
We were trying to design something that could carry both voice and computation into the future.” —Michael Condry, Bellmac-32 architecture team lead “The team-building worked out well,” Kang says. “After several months, Western Electric was able to produce more than the required number of good chips.” The first version of the Bellmac-32, which was ready by 1980, fell short of expectations. Instead of hitting a 4-megahertz performance target, it ran at just 2 MHz. The engineers discovered that the state-of-the-art Takeda Riken testing equipment they were using was flawed, with transmission-line effects between the probe and the test head leading to inaccurate measurements, so they worked with a Takeda Riken team to develop correction tables that rectified the measurement errors. The second generation of Bellmac chips had clock speeds that exceeded 6.2 MHz, sometimes reaching 9 MHz. That was blazing fast for its time. The 16-bit Intel 8088 processor inside IBM’s original PC, released in 1981, ran at 4.77 MHz. Why Bellmac-32 didn’t go mainstream Despite its technical promise, the Bellmac-32 did not find wide commercial use. According to Condry, AT&T’s pivot toward acquiring equipment manufacturer NCR, which it began eyeing in the late 1980s, meant the company chose to back a different line of chips. But by then, the Bellmac-32’s legacy was already growing. “Before Bellmac-32, NMOS was dominant,” Condry says. “But CMOS changed the market because it was shown to be a more effective implementation in the fab.” In time, that realization reshaped the semiconductor landscape. CMOS would become the foundation for modern microprocessors, powering the digital revolution in desktops, smartphones, and more. The audacity of Bell Labs’ bet—to take an untested fabrication process and leapfrog an entire generation of chip architecture—stands as a landmark moment in technological history. As Kang puts it: “We were on the frontier of what was possible. We didn’t just follow the path—we made a new one.” Huang, an IEEE Life Fellow who later became deputy director of the Institute of Microelectronics, Singapore, adds: “This included not only chip architecture and design, but also large-scale chip verification—with CAD but without today’s digital simulation tools or even breadboarding [which is the standard method for checking whether a circuit design for an electronic system that uses chips works before making permanent connections by soldering the circuit elements together].” Condry, Kang, and Huang look back fondly on that period and express their admiration for the many AT&T employees whose skill and dedication made the Bellmac-32 chip series possible. Administered by the IEEE History Center and supported by donors, the Milestone program recognizes outstanding technical developments around the world. The IEEE North Jersey Section sponsored the nomination.

a week ago 47 votes
Teething Babies and Rainy Days Once Cut Calls Short

Humans are messy. We spill drinks, smudge screens, and bring our electronic devices into countless sticky situations. As anyone who has accidentally dropped their phone into a toilet or pool knows, moisture poses a particular problem. And it’s not a new one: From early telephones to modern cellphones, everyday liquids have frequently conflicted with devices that must stay dry. Consumers often take the blame when leaks and spills inevitably occur. Rachel Plotnick, an associate professor of cinema and media studies at Indiana University Bloomington, studies the relationship between technology and society. Last year, she spoke to IEEE Spectrum about her research on how people interact with buttons and tactile controls. In her new book, License to Spill: Where Dry Devices Meet Liquid Lives (The MIT Press, 2025), Plotnick explores the dynamic between everyday wetness and media devices through historical and contemporary examples, including cameras, vinyl records, and laptops. This adapted excerpt looks back at analog telephones of the 1910s through 1930s, the common practices that interrupted service, and the “trouble men” who were sent to repair phones and reform messy users. The Boston Daily Globe in 1908 recounted, for instance, how a mother only learned her lesson about her baby’s cord chewing when the baby received a shock—or “got stung”—and the phone service went out. These youthful oral fixations rarely caused harm to the chewer, but were “injurious” to the telephone cord. License to Spill is Rachel Plotnick’s second book. Her first, Power Button: A History of Pleasure, Panic, and the Politics of Pushing (The MIT Press, 2018), explores the history and politics of push buttons. The MIT Press The trade journal Telephony catalogued other everyday hazards: Painters washed ceilings, which dripped; telephones sat near windows during storms; phone cords came in contact with moist radiators. A telephone chief operator who handled service complaints recounted that “a frequent combination in interior decoration is the canary bird and desk telephone occupying the same table. The canary bird includes the telephone in his morning bath,” thus leading to out-of-order service calls. Repairers also told of the “housewife” who damaged wiring by scrubbing her telephone with water or cleaning fluid, and of men in offices who dangerously propped their wet umbrellas against the wire. Wetness lurked everywhere in people’s spaces and habits; phone companies argued that one could hardly expect proper service under such circumstances—especially if users didn’t learn to accommodate the phone’s need for dryness. This differing appraisal of liquids caused problems when telephone customers expected service that would not falter and directed outrage at their provider when outages did occur. Consumers even sometimes admitted to swearing at the telephone receiver and haranguing operators. Telephone company employees, meanwhile, faced intense scrutiny and pressure to tend to telephone infrastructures. “Trouble” took two forms, then, in dealing with customers’ frustration over outages and in dealing with the damage from the wetness itself. The Original Troubleshooters Telephone breakdowns required determinations about the outage’s source. “Trouble men” and “trouble departments” hunted down the probable cause of the damage, which meant sussing out babies, sponges, damp locations, spills, and open windows. If customers wanted to lay blame at workers’ feet in these moments, then repairers labeled customers as abusers of the phone cord. 
One author attributed at least 50 percent of telephone trouble to cases where “someone has been careless or neglectful.” Trouble men employed medical metaphors to describe their work, as in “he is a physician, and he makes the ills that the telephone is heir to his life study.” Serge Bloch Even if a consumer knew the cord had gotten wet, they didn’t necessarily blame it as the cause of the outage. The repairer often used this as an opportunity to properly socialize the user about wetness and inappropriate telephone treatment. These conversations didn’t always go well: A 1918 article in Popular Science Monthly described an explosive argument between an infuriated woman and a phone company employee over a baby’s cord habits. The permissive mother and teething child had become emblematic of misuse, a photograph of them appearing in Bell Telephone News in 1917 as evidence of common trouble that a telephone (and its repairer) might encounter. However, no one blamed the baby; telephone workers unfailingly held mothers responsible as “bad” users. Teething babies and the mothers that let them play with phone cords were often blamed for telephone troubles. The Telephone Review/License to Spill Armed with their diagnostic tools, repairers glorified their own expertise. One wire chief was celebrated as the “original ‘find-out artist’” who could determine a telephone’s underlying troubles even in tricky cases. Telephone company employees leveraged themselves as experts who could attribute wetness’s causes to—in their estimation—uneducated (and even dimwitted) customers, who were often female. Women were often the earliest and most engaged phone users, adopting the device as a key mechanism for social relations, and so they became an easy target. Cost of Wet Phone Cord Repairs Though the phone industry and repairers were often framed as heroes, troubleshooting took its toll on overextended phone workers, and companies suffered a financial burden from repairs. One estimate by the American Telephone and Telegraph Company found that each time a company “clear[ed] wet cord trouble,” it cost a dollar. Phone companies portrayed the telephone as a fragile device that could be easily damaged by everyday life, aiming to make the subscriber a proactively “dry” and compliant user. Everyday sources of wetness, including mops and mustard, could cause hours of phone interruption. Telephony/License to Spill Moisture-Proofing Telephone Cords Although telephone companies put significant effort into reforming their subscribers, the increasing pervasiveness of telephony began to conflict with these abstinent aims. Thus, a new technological solution emerged that put the burden on moisture-proofing the wire. The Stromberg-Carlson Telephone Manufacturing Co. of Rochester, N.Y., began producing copper wire that featured an insulating enamel, two layers of silk, the company’s moisture-proof compound, and a layer of cotton. Called Duratex, the cord withstood a test in which the manufacturer submerged it in water for 48 hours. In its advertising, Stromberg-Carlson warned that many traditional cords—even if they seemed to dry out after wetting—had sustained interior damage so “gradual that it is seldom noticed until the subscriber complains of service.” Serge Bloch One advertisement likened the well-wrapped cord to a character from The Pickwick Papers, with his many layers of clothing. The product’s hardiness would allow the desk telephone to “withstand any climate,” even one hostile to communication technology. This subtle change meant that the burden to adapt fell to the device rather than the user. 
As telephone wires began to “penetrate everywhere,” they were imagined as fostering constant and unimpeded connectivity that not even saliva or a spilled drink could interrupt. The move to cord protection was not accompanied by a great deal of fanfare, however. As part of telephone infrastructure, cords faded into the background of conversations. Excerpted from License to Spill by Rachel Plotnick. Reprinted with permission from The MIT Press. Copyright 2025.

2 weeks ago 16 votes

More in science

Why are Smokestacks So Tall?

[Note that this article is a transcript of the video embedded above.] “The big black stacks of the Illium Works of the Federal Apparatus Corporation spewed acid fumes and soot over the hundreds of men and women who were lined up before the red-brick employment office.” That’s the first line of one of my favorite short stories, written by Kurt Vonnegut in 1955. It paints a picture of a dystopian future that, thankfully, didn’t really come to be, in part because of those stacks. In some ways, air pollution is kind of a part of life. I’d love to live in a world where the systems, materials and processes that make my life possible didn’t come with any emissions, but it’s just not the case... From the time that humans discovered fire, we’ve been methodically calculating the benefits of warmth, comfort, and cooking against the disadvantages of carbon monoxide exposure and particulate matter less than 2.5 microns in diameter… Maybe not in that exact framework, but basically, since the dawn of humanity, we’ve had to deal with smoke one way or another. Since we can’t accomplish much without putting unwanted stuff into the air, the next best thing is to manage how and where it happens to try to minimize its impact on public health. Of course, any time you have a balancing act that hinges on technical issues, the engineers get involved, not so much to help decide where to draw the line, but to develop systems that can stay below it. And that’s where the smokestack comes in. Its function probably seems obvious; you might have a chimney in your house that does a similar job. But I want to give you a peek behind the curtain into the Illium Works of the Federal Apparatus Corporation of today and show you what goes into engineering one of these stacks at a large industrial facility. I’m Grady, and this is Practical Engineering. We put a lot of bad stuff in the air, and in a lot of different ways. There are roughly 200 regulated hazardous air pollutants in the United States, many with names I can barely pronounce. In many cases, the industries that would release these contaminants are required to deal with them at the source. A wide range of control technologies are put into place to clean dangerous pollutants from the air before it’s released into the environment. One example is coal-fired power plants. Coal, in particular, releases a plethora of pollutants when combusted, so, in many countries, modern plants are required to install control systems. Catalytic reactors remove nitrogen oxides. Electrostatic precipitators collect particulates. Scrubbers use lime (the mineral, not the fruit) to strip away sulfur dioxide. And I could go on. In some cases, emission control systems can represent a significant proportion of the costs involved in building and operating a plant. But these primary emission controls aren’t always feasible for every pollutant, at least not for 100 percent removal. There’s a very old saying that “the solution to pollution is dilution.” It’s not really true on a global scale. Case in point: There’s no way to dilute the concentration of carbon dioxide in the atmosphere, or rather, it’s already as dilute as it’s going to get. But, it can be true on a local scale. Many pollutants that affect human health and the environment are short-lived; they chemically react or decompose in the atmosphere over time instead of accumulating indefinitely. And, for a lot of chemicals, there are concentration thresholds below which the consequences on human health are negligible. 
In those cases, dilution, or really dispersion, is a sound strategy to reduce their negative impacts, and so, in some cases, that’s what we do, particularly at major point sources like factories and power plants. One of the tricks to dispersion is that many plumes are naturally buoyant. Naturally, I’m going to use my pizza oven to demonstrate this. Not all, but most pollutants we care about are a result of combustion; burning stuff up. So the plume is usually hot. We know hot air is less dense, so it naturally rises. And the hotter it is, the faster that happens. You can see when I first start the fire, there’s not much air movement. But as the fire gets hotter in the oven, the plume speeds up, ultimately rising higher into the air. That’s the whole goal: get the plume high above populated areas where the pollutants can be dispersed to a minimally-harmful concentration. It sounds like a simple solution: just run our boilers and furnaces super hot to get enough buoyancy for the combustion products to disperse. The problem with that solution is that the whole reason we combust things is usually to recover the heat. So if you’re sending a lot of that heat out of the system, just because it makes the plume disperse better, you’re losing thermodynamic efficiency. It’s wasteful. That’s where the stack comes in. Let me put mine on and show you what I mean. I took some readings with the anemometer with the stack on and off. The airspeed with the stack on was around double what it was with the stack off: about one meter per second without the stack compared with two with it. But it’s a little tougher to understand why. It’s intuitive that as you move higher in a column of fluid, the pressure goes down (since there’s less weight of the fluid above). The deeper you dive in a pool, the more pressure you feel. The higher you fly in a plane or climb a mountain, the lower the pressure. The slope of that line is proportional to a fluid’s density. You don’t feel much of a pressure difference climbing a set of stairs because air isn’t very dense. If you travel the same distance in water, you’ll definitely notice the difference. So let’s look at two columns of fluid. One is the ambient air and the other is the air inside a stack. Since it’s hotter, the air inside the stack is less dense. Both columns start at the same pressure at the bottom, but the higher you go, the more the pressure diverges. It’s kind of like deep sea diving in reverse. In water, the deeper you go into the dense water, the greater the pressure you feel. In a stack, the higher you are in a column of hot air, the more buoyant you feel compared to the outside air. This is the genius of a smoke stack. It creates this difference in pressure between the inside and outside that drives greater airflow for a given temperature. Here’s the basic equation for a stack effect. I like to look at equations like this divided into what we can control and what we can’t. We don’t get to adjust the atmospheric pressure, the outside temperature, and this is just a constant. But you can see, with a stack, an engineer now has two knobs to turn: the temperature of the gas inside and the height of the stack. I did my best to keep the temperature constant in my pizza oven and took some airspeed readings. First with no stack. Then with the stock stack. Then with a megastack. By the way, this melted my anemometer; should have seen that coming. Thankfully, I got the measurements before it melted. 
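Before getting to those megastack readings, here is a rough, friction-free sketch of the two knobs just described: the ideal-gas density difference between the warm column and the ambient air sets a draft pressure, and that pressure drives flow. The discharge coefficient and the example heights and temperatures are assumptions for illustration, not values from the video.

```python
# A rough, friction-free sketch of the stack (chimney) effect, with made-up inputs.
# Draft pressure comes from the density difference between ambient air and the
# hotter, less dense gas inside the stack; that pressure then drives airflow.
import math

G = 9.81          # gravitational acceleration, m/s^2
R_AIR = 287.05    # specific gas constant for dry air, J/(kg*K)

def air_density(pressure_pa: float, temp_k: float) -> float:
    """Ideal-gas density of air."""
    return pressure_pa / (R_AIR * temp_k)

def stack_draft(height_m: float, t_inside_c: float, t_outside_c: float,
                pressure_pa: float = 101_325.0, discharge_coeff: float = 0.65):
    """Return (draft pressure in Pa, idealized exit velocity in m/s)."""
    t_in = t_inside_c + 273.15
    t_out = t_outside_c + 273.15
    rho_out = air_density(pressure_pa, t_out)
    rho_in = air_density(pressure_pa, t_in)
    delta_p = G * height_m * (rho_out - rho_in)   # buoyancy-driven pressure difference
    velocity = discharge_coeff * math.sqrt(2 * delta_p / rho_in)
    return delta_p, velocity

# Example: a short pizza-oven stack vs. a tall industrial stack,
# both with 250 C flue gas and 20 C ambient air (illustrative numbers only).
for h in (1.5, 60.0):
    dp, v = stack_draft(h, 250.0, 20.0)
    print(f"height {h:5.1f} m -> draft {dp:6.1f} Pa, ideal exit velocity {v:4.1f} m/s")
```

Even in this idealized form, the two levers from the video are visible: raise the gas temperature or raise the stack, and the draft grows.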
My megastack nearly doubled the airspeed again at around three-and-a-half meters per second versus the two with just the stack that came with the oven. There’s something really satisfying about this stack effect to me. No moving parts or fancy machinery. Just put a longer pipe and you’ve fundamentally changed the physics of the whole situation. And it’s a really important tool in the environmental engineer’s toolbox to increase airflow upward, allowing contaminants to flow higher into the atmosphere where they can disperse. But this is not particularly revolutionary… unless you’re talking about the Industrial Revolution. When you look at all the pictures of the factories in the 19th century, those stacks weren’t there to improve air quality, if you can believe it. The increased airflow generated by a stack just created more efficient combustion for the boilers and furnaces. Any benefits to air quality in the cities were secondary. With the advent of diesel and electric motors, we could use forced drafts, reducing the need for a tall stack to increase airflow. That was the beginning of the decline of the forests of industrial chimneys that marked the landscape in the 19th century. But they’re obviously not all gone, because that secondary benefit of air quality turned into the primary benefit as environmental rules about air pollution became stricter. Of course, there are some practical limits that aren’t taken into account by that equation I showed. The plume cools down as it moves up the stack to the outside, so its density isn’t constant all the way up. I let my fire die down a bit so it wouldn’t melt the thermometer (learned my lesson), and then took readings inside the oven and at the top of the stack. You can see my pizza oven flue gas is around 210 degrees at the top of the megastack, but it’s roughly 250 inside the oven. After the success of the megastack on my pizza oven, I tried the super-megastack with not much improvement in airflow: about 4 meters per second. The warm air just got too cool by the time it reached the top. And I suspect that frictional drag in the longer pipe contributed as well. So, really, depending on how insulating your stack is, our graph of height versus pressure actually ends up looking like this. And this can be its own engineering challenge. Maybe you’ve gotten back drafts in your fireplace at home because the fire wasn’t big or hot enough to create that large difference in pressure. You can see there are a lot of factors at play in designing these structures, but so far, all we’ve done is get the air moving faster. But that’s not the end goal. The purpose is to reduce the concentration of pollutants that we’re exposed to. So engineers also have to consider what happens to the plume once it leaves the stack, and that’s where things really get complicated. In the US, we have National Ambient Air Quality Standards that regulate six so-called “criteria” pollutants that are relatively widespread: carbon monoxide, lead, nitrogen dioxide, ozone, particulates, and sulfur dioxide. We have hard limits on all these compounds with the intention that they are met at all times, in all locations, under all conditions. Unfortunately, that’s not always the case. You can go on EPA’s website and look at the so-called “non-attainment” areas for the various pollutants. But we do strive to meet the standards through a list of measures that is too long to go into here. And that is not an easy thing to do. 
Not every source of pollution comes out of a big stationary smokestack where it’s easy to measure and control. Cars, buses, planes, trucks, trains, and even rockets create lots of contaminants that vary by location, season, and time of day. And there are natural processes that contribute as well. Forests and soil microbes release volatile organic compounds that can lead to ozone formation. Volcanic eruptions and wildfires release carbon monoxide and sulfur dioxide. Even dust storms put particulates in the air that can travel across continents. And hopefully you’re seeing the challenge of designing a smoke stack. The primary controls like scrubbers and precipitators get most of the pollutants out, and hopefully all of the ones that can’t be dispersed. But what’s left over and released has to avoid pushing concentrations above the standards. That design has to work within the very complicated and varying context of air chemistry and atmospheric conditions that a designer has no control over. Let me show you a demo. I have a little fog generator set up in my garage with a small fan simulating the wind. This isn’t a great example because the airflow from the fan is pretty turbulent compared to natural winds. You occasionally get some fog at the surface, but you can see my plume mainly stays above the surface, dispersing as it moves with the wind. But watch what happens when I put a building downstream. The structure changes the airflow, creating a downwash effect and pulling my plume with it. Much more frequently you see the fog at the ground level downstream. And this is just a tiny example of how complex the behavior of these plumes can be. Luckily, there’s a whole field of engineering to characterize it. There are really just two major transport processes for air pollution. Advection describes how contaminants are carried along by the wind. Diffusion describes how those contaminants spread out through turbulence. Gravity also affects air pollution, but it doesn’t have a significant effect except on heavier-than-air particulates. With some math and simplifications of those two processes, you can do a reasonable job predicting the concentration of any pollutant at any point in space as it moves and disperses through the air. Here’s the basic equation for that, and if you’ll join me for the next 2 hours, we’ll derive this and learn the meaning of each term… Actually, it might take longer than that, so let’s just look at a graphic. You can see that as the plume gets carried along by the wind, it spreads out in what’s basically a bell curve, or gaussian distribution, in the planes perpendicular to the wind direction. But even that is a bit too simplified to make any good decisions with, especially when the consequences of getting it wrong are to public health. A big reason for that is atmospheric stability. And this can make things even more complicated, but I want to explain the basics, because the effect on plumes of gas can be really dramatic. You probably know that air expands as it moves upward; there’s less pressure as you go up because there is less air above you. And as any gas expands, it cools down. So there’s this relationship between height and temperature we call the adiabatic lapse rate. It’s about 10 degrees Celsius for every kilometer up or about 28 Fahrenheit for every mile up. But the actual atmosphere doesn’t always follow this relationship. For example, rising air parcels can cool more slowly than the surrounding air. 
This makes them warmer and less dense, so they keep rising, promoting vertical motion in a positive feedback loop called atmospheric instability. You can even get a temperature inversion where you have cooler air below warmer air, something that can happen in the early morning when the ground is cold. And as the environmental lapse rate varies from the adiabatic lapse rate, the plumes from stacks change. In stable conditions, you usually get a coning plume, similar to what our gaussian distribution from before predicts. In unstable conditions, you get a lot of mixing, which leads to a looping plume. And things really get weird for temperature inversions because they basically act like lids for vertical movement. You can get a fanning plume that rises to a point, but then only spreads horizontally. You can also get a trapping plume, where the air gets stuck between two inversions. You can have a lofting plume, where the air is above the inversion with stable conditions below and unstable conditions above. And worst of all, you can have a fumigating plume when there are unstable conditions below an inversion, trapping and mixing the plume toward the ground surface. And if you pay attention to smokestacks, fires, and other types of emissions, you can identify these different types of plumes pretty easily. Hopefully you’re seeing now how much goes into this. Engineers have to keep track of the advection and diffusion, wind speed and direction, atmospheric stability, the effects of terrain and buildings on all those factors, plus the pre-existing concentrations of all the criteria pollutants from other sources, which vary in time and place. All that to demonstrate that your new source of air pollution is not going to push the concentrations at any place, at any time, under any conditions, beyond what the standards allow. That’s a tall order, even for someone who loves gaussian distributions. And often the answer to that tall order is an even taller smokestack. But to make sure, we use software. The EPA has developed models that can take all these factors into account to simulate, essentially, what would happen if you put a new source of pollution into the world and at what height. So why are smokestacks so tall? I hope you’ll agree with me that it turns out to be a pretty complicated question. And it’s important, right? These stacks are expensive to build and maintain. Those costs trickle down to us through the costs of the products and services we buy. They have a generally negative visual impact on the landscape. And they have a lot of other engineering challenges too, like resonance in the wind. And on the other hand, we have public health, arguably one of the most critical design criteria that can exist for an engineer. It’s really important to get this right. I think our air quality regulations do a lot to make sure we strike a good balance here. There are even rules limiting how much credit you can get for building a stack higher for greater dispersion to make sure that we’re not using excessively tall stacks in lieu of more effective, but often more expensive, emission controls and strategies. In a perfect world, none of the materials or industrial processes that we rely on would generate concentrated plumes of hazardous gases. We don’t live in that perfect world, but we are pretty fortunate that, at least in many places on Earth, air quality is something we don’t have to think too much about. 
And to thank for it, we have a relatively small industry of environmental professionals who do think about it, a whole lot. You know, for a lot of people, this is their whole career; what they ponder from 9-5 every day. Something most of us would rather keep out of mind, they face it head-on, developing engineering theories, professional consensus, sensible regulations, modeling software, and more - just so we can breathe easy.
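Two small sketches may help make the dispersion ideas above concrete. First, the advection-plus-diffusion picture is commonly written as the Gaussian plume formula for a continuous elevated point source, with a ground-reflection term. The emission rate, wind speed, stack height, and dispersion coefficients below are illustrative guesses; real assessments use stability-dependent coefficients and regulatory models such as EPA's AERMOD rather than fixed numbers.

```python
# A minimal sketch of the Gaussian plume model for a continuous elevated point source,
# including the ground-reflection (image source) term. All input values are illustrative.
import math

def plume_concentration(q_g_per_s: float, u_m_per_s: float, h_stack_m: float,
                        y_m: float, z_m: float,
                        sigma_y_m: float, sigma_z_m: float) -> float:
    """Concentration in g/m^3 at crosswind offset y and height z, for given
    horizontal/vertical dispersion coefficients (which grow with downwind
    distance and depend on atmospheric stability)."""
    crosswind = math.exp(-y_m**2 / (2 * sigma_y_m**2))
    vertical = (math.exp(-(z_m - h_stack_m)**2 / (2 * sigma_z_m**2))
                + math.exp(-(z_m + h_stack_m)**2 / (2 * sigma_z_m**2)))  # ground reflection
    return q_g_per_s / (2 * math.pi * u_m_per_s * sigma_y_m * sigma_z_m) * crosswind * vertical

# Example: ground-level, centerline concentration about 1 km downwind of a 100 m stack.
# Sigma values are roughly in the range tabulated for neutral conditions at that distance.
c = plume_concentration(q_g_per_s=500.0, u_m_per_s=5.0, h_stack_m=100.0,
                        y_m=0.0, z_m=0.0, sigma_y_m=80.0, sigma_z_m=35.0)
print(f"{c * 1e6:.1f} micrograms per cubic meter")
```

Second, a simplified way to see how atmospheric stability is judged: compare the observed (environmental) lapse rate with the dry adiabatic lapse rate of roughly 9.8 degrees Celsius per kilometer. The example temperatures are made up, and the plume labels follow the qualitative descriptions in the transcript; the three-way classification is itself a simplification.

```python
# A simplified sketch: classify a layer of the atmosphere by comparing the observed
# (environmental) lapse rate with the dry adiabatic lapse rate. Example values only.
DRY_ADIABATIC_LAPSE_C_PER_KM = 9.8  # temperature drop of a rising dry parcel

def classify_stability(temp_low_c: float, temp_high_c: float, dz_km: float) -> str:
    """Label a layer from two temperature readings separated by dz_km of altitude."""
    environmental_lapse = (temp_low_c - temp_high_c) / dz_km  # positive = cooling with height
    if environmental_lapse < 0:
        return "inversion (temperature rises with height; fanning, lofting, or fumigating plumes)"
    if environmental_lapse > DRY_ADIABATIC_LAPSE_C_PER_KM:
        return "unstable (strong mixing; looping plume)"
    return "stable to neutral (coning plume)"

# Example soundings over a 1 km layer: surface temperature vs. temperature 1 km up.
for surface_c, aloft_c in [(25.0, 13.0), (25.0, 20.0), (15.0, 18.0)]:
    print(f"{surface_c:4.1f} C -> {aloft_c:4.1f} C : {classify_stability(surface_c, aloft_c, 1.0)}")
```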

20 hours ago 3 votes
AI Therapists

In the movie Blade Runner 2049 (an excellent film I highly recommend), Ryan Gosling’s character, K, has an AI “wife”, Joi, played by Ana de Armas. K is clearly in love with Joi, who is nothing but software and holograms. In one poignant scene, K is viewing a giant ad for AI companions and sees […] The post AI Therapists first appeared on NeuroLogica Blog.

22 hours ago 2 votes
The beauty of concrete

Why are buildings today austere, while buildings of the past were ornate and elaborately ornamented?

19 hours ago 1 vote
What does Innovaccer actually do? A look under the hood | Out-Of-Pocket

A conversation about EHRs, who their customers actually are, and building apps

yesterday 2 votes
Cambodian Forest Defenders at Risk for Exposing Illegal Logging

The lush forests that have long sustained Cambodia’s Indigenous people have steadily fallen to illicit logging. Now, community members face intimidation and risk arrest as they patrol their forests to document the losses and try to push the government to stop the cutting. Read more on E360 →

2 days ago 2 votes