In 1942, the legendary science fiction author Isaac Asimov introduced his Three Laws of Robotics in his short story “Runaround.” The laws were later popularized in his seminal story collection I, Robot. First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm. Second Law: A robot must obey orders given it by human beings except where such orders would conflict with the First Law. Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law. While drawn from works of fiction, these laws have shaped discussions of robot ethics for decades. And as AI systems—which can be considered virtual robots—have become more sophisticated and pervasive, some technologists have found Asimov’s framework useful for considering the potential safeguards needed for AI that interacts with humans. But the existing three laws are not enough. Today, we are entering an era of unprecedented human-AI...


More from IEEE Spectrum

It’s the End of the Line for AOL’s Dial-Up Service

The last time I used a dial-up modem came sometime around 2001. Within just a few years, dial-up had exited my life, never to return. I haven’t even had a telephone line in my house for most of my adult life. But I still feel a strong tinge of sadness to know that AOL is finally retiring the ol’ hobbyhorse. At the end of September, it’s gone.

The timeline is almost on-the-nose fitting: The widespread access to the Internet AOL’s service brought in the 1990s is associated with a digital phenomenon called the Eternal September. Before AOL allowed broad access to Usenet—a precursor to today’s online discussion forums—most new users appeared each September, when new college students frequently joined the platform. Thanks to AOL, they began showing up daily starting around September 1993. The fact that AOL’s dial-up is still active in the first place highlights a truism of technology: Sometimes, the important stuff sticks around well after it’s obsolete.

Why AOL is ditching dial-up now

It’s no surprise that dial-up has lingered for close to a quarter-century. Despite not having needed a dial-up modem myself since the summer of 2001, I was once so passionate about dial-up that I begged to get a modem for my 13th birthday. Modems are hard to shake, and not just because we fondly remember waiting so long for them to do their thing.

Originally, the telephone modem was a hack. It was pushed into public consciousness partly by Deaf users who worked around the phone industry’s monopolistic regulations to develop the teletypewriter, a system to communicate over phone lines via text. Along the way, the community invented technologies like the acoustic coupler. To make that hack function, modems had to do multiple conversions in real time—from data to audio and back again, in two directions. As I put it in a piece that compared the modem to the telegraph:

The modem, at least in its telephone-based forms, represents a dance between sound and data. By translating information into an aural signal, then into current, then back into an aural signal, then back into data once again, the modulation and demodulation going on is very similar to the process used with the original telegraph, albeit done manually.

Modems like this one from U.S. Robotics work by converting data to audio and back again. Jphill19/Wikimedia Commons

With telegraphs, the information was input by a person, translated into electric pulses, and received by another person. Modems work the same way, just without human translators. The result of all this back and forth was that modems had to give up a hell of a lot of speed to make this all work. The need to connect over a medium built for audio meant that data was at risk of getting lost over the line. (This is why error correction was an essential part of the modem’s evolution; often data needed to be shared more than once to ensure it got through. Without error correction, dial-up modems would be even slower.)

Remember that sound? It marked many users’ first experience getting online. AdventuresinHD/YouTube

Telephone lines were a hugely inefficient system for data because they were built for voice and compressed audio heavily. Voices are still clear and recognizable after being compressed, but audio compression can wreak havoc on data connections. Plus, there was the problem of line access. With a call, you could not easily share a connection. That meant you couldn’t make phone calls while using dial-up, leading to some homes getting a second line.
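That modulation-and-demodulation dance is simple enough to sketch in code. Below is a toy frequency-shift-keying modem in Python: the tone pair follows the Bell 103 “originate” convention (1,070 Hz for a 0, 1,270 Hz for a 1), but the rest (the sample rate, the absence of handshaking and error correction) is simplified for illustration rather than a model of any real modem chipset.

# Toy FSK "modem": data -> audio tones -> data, one tone per bit.
import numpy as np

FS = 48_000                    # audio sample rate, Hz
BAUD = 300                     # bits per second
SPB = FS // BAUD               # samples per bit
F_SPACE, F_MARK = 1070, 1270   # Bell 103-style tones for bit 0 / bit 1

def modulate(bits):
    # Turn a bit sequence into an audio waveform, one sine-wave burst per bit.
    t = np.arange(SPB) / FS
    tones = {0: np.sin(2 * np.pi * F_SPACE * t),
             1: np.sin(2 * np.pi * F_MARK * t)}
    return np.concatenate([tones[b] for b in bits])

def demodulate(audio):
    # Recover bits by asking which tone each bit-long chunk correlates with best.
    t = np.arange(SPB) / FS
    probes = {b: np.exp(-2j * np.pi * f * t)
              for b, f in ((0, F_SPACE), (1, F_MARK))}
    bits = []
    for i in range(0, len(audio) - SPB + 1, SPB):
        chunk = audio[i:i + SPB]
        bits.append(max((abs(np.dot(probes[b], chunk)), b) for b in (0, 1))[1])
    return bits

if __name__ == "__main__":
    message = [1, 0, 1, 1, 0, 0, 1, 0]
    assert demodulate(modulate(message)) == message

A real dial-up modem layered negotiation, compression, and error correction on top of this basic trick, which is exactly where much of the speed went.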
And at the Internet Service Provider level, having multiple lines got very complex, very fast. The phone industry knew this, but its initial solution, ISDN, did not take off among mainstream consumers. (A later one, DSL, had better uptake, and is likely one of the few Internet options rural users currently have.)

So the industry moved to other solutions to get consumers Internet—coaxial cable, which was already widespread because of cable TV, and fiber, which wasn’t. The problem is, coax never reached quite as far as telephone wires did, in part because cable television wasn’t technically a utility in the way electricity or water were. In recent years, many attempts have been made to classify Internet access as a public utility, though the most recent one was struck down by an appeals court earlier this year.

The public utility regulation is important. The telephone had struggled to reach rural communities in the 1930s, and only did so after a series of regulations, including one that led to the creation of the Federal Communications Commission, were put into effect. So too did electricity, which needed a dedicated law to expand its reach. But the reach of broadband is frustratingly incomplete, as highlighted by the fact that many areas of the country are not properly covered by cellular signals. And getting new wires hung can be an immensely difficult task, in part because companies that sell fiber, like Verizon and Google, often stop investing due to the high costs. (Though, to Google’s credit, it started expanding again in 2022 after a six-year rollback.)

So, in some areas of the United States, dial-up remains the best option—the result of decades of poor investment in Internet infrastructure. This, for years, has propped up companies like AOL, which has evolved numerous times since it foolishly merged with Time Warner a quarter-century ago.

The first PC-based client, called America Online, appeared on the graphical operating system GeoWorks. This screenshot shows the DOS AOL client that was distributed with GeoWorks 2.01. Ernie Smith

But AOL is not the company it was. After multiple acquisitions and spin-outs, it is now a mere subsidiary of Yahoo, and it long ago transitioned into a Web-first property. Oh, it still has subscriptions, but they’re effectively fancy analogues for unnecessary security software. And its email client, though long since eclipsed by the likes of Gmail, still has its fans.

When I posted the AOL news on social media, about 90 percent of the responses were jokes or genuine notes of respect. But there was a small contingent, maybe 5 percent, that talked about how much this was going to screw over far-flung communities. I don’t think it’s AOL’s responsibility to keep this model going forever. Instead, it looks like the job is going to fall to two companies: Microsoft, whose MSN Dial-Up Internet Access costs US $179.95 per year, and the company United Online, which still operates the longtime dial-up players Juno and NetZero. Satellite Internet is also an option, with older services like HughesNet and newer ones like Starlink picking up the slack. It’s not AOL’s fault. But AOL is the face of this failing.

AOL dropping dial-up is part of a long fade-out

As technologies go, the dial-up modem has not lasted quite as long as the telegram, which has been active in one form or another for 181 years.
But the modem, which was first used in 1958 as part of an air-defense system, has stuck around for a good 67 years. That makes it one of the oldest pieces of computer-related technology still in modern use. To give you an idea of how old that is: 1958 is also the year that the integrated circuit, an essential building block of any modern computer, was invented. The disk platter, which became the modern hard drive, was invented a year earlier. The floppy disk came a decade later. (It should be noted that the modem itself is not dying—your smartphone has one—but the connection your landline has to your modem, the really loud one, has seen better days.)

The news that AOL is dropping its service might be seen as the end of the line for dial-up, but the story of the telegram hints that this may not be the case. In 2006, much hay was made about Western Union sending its final telegram. But Western Union was never the only company sending telegrams, and another company picked up the business. You can still send a telegram via International Telegram in 2025. (It’s not cheap: A single message, sent the same day, is $34, plus 75 cents per word.)

In many ways, AOL dropping the service is a sign that this already niche use case is going to get more niche. But niche use cases have a way of staying relevant, given the right audience. It’s sort of like why doctors continue to use pagers. As a Planet Money episode from two years ago noted, the additional friction of using pagers worked well with the way doctors functioned, because it ensured that they knew the messages they were getting didn’t compete with anything else. Dial-up is likely never going to totally die, unless the landline phone system itself gets knocked offline, which AT&T has admittedly been itching to do. It remains one of the cheapest options to get online, outside of drinking a single coffee at a Panera and logging onto the Wi-Fi.

But AOL? While dial-up may have been the company’s primary business earlier in its life, it hasn’t really been its focus in quite a long time. AOL is now a highly diversified company, whose primary focus over the past 15 years has been advertising. It still sells subscriptions, but those subscriptions are about to lose their most important legacy feature. AOL is simply too weak to support the next generation of Internet service itself. Its inroad to broadband was supposed to be Time Warner Cable; that didn’t work out, so the company pivoted to something else, but kept around the legacy business while it was still profitable. It’s likely that emerging technologies, like Microsoft’s Airband Initiative, which relies on distributing broadband over unused “white spaces” on the television dial, stand a better shot. 5G connectivity will also likely improve over time (T-Mobile already promotes its 5G home Internet as a rural option), and perhaps more satellite-based options will emerge. Technologies don’t die. They just slowly become so irrelevant that they might as well be dead.

The monoculture of the AOL login experience

When I posted the announcement, hidden in an obscure link on the AOL website that a colleague had sent me, it immediately went viral on Bluesky and Mastodon. That meant I got to see a lot of people react to this news in real time. Most had the same comment: I didn’t even know it was still around. Others made modem jokes, or talked about AOL’s famously terrible customer service. What was interesting was that most people said roughly the same thing about the service.
That is not the case with most online experiences, which usually reflect myriad points of view. I think it speaks to the fact that while the Internet was the ultimate monoculture killer, the experience of getting online for the first time was largely monocultural. Usually, it started with a modem connecting to a phone number and dropping us into a single familiar place. We have lost a lot of Internet Service Providers over the years. Few spark the passion and memories of America Online, a network that somehow beat out more innovative and more established players to become the onramp to the Information Superhighway, for all the good and bad that represents. AOL must be embarrassed by that history. It barely even announced its closure.

The 60-Year-Old Algorithm Underlying Today’s Tech

CT scanning, streaming videos, and sending images over the Internet wouldn’t be possible without the Fast Fourier transform. Commonly known as FFT, the computer algorithm designed by researchers at Princeton University and IBM is found in just about every electronic device, according to an entry in the Engineering and Technology History Wiki. Demonstrated for the first time in 1964 by IEEE Fellows John Tukey and James W. Cooley, the algorithm breaks down a signal—a series of values over time—and converts it into frequencies. FFT was 100 times faster than the existing discrete Fourier transform. The DFT also requires more memory than the FFT because it saves intermediate results while processing. The FFT has become an important tool for manipulating and analyzing signals in many areas including audio processing, telecommunications, digital broadcasting, and image analysis. It helps filter, compress, eliminate noise from, and otherwise modify signals. The 60-year-old ubiquitous computer code also has applications in today’s cutting-edge technologies such as AI, quantum computing, self-driving cars, and 5G communication systems. The FFT was commemorated with an IEEE Milestone during a ceremony held in May at Princeton University. “The Cooley-Tukey algorithm significantly accelerated the calculation of DFTs,” 2024 IEEE President Tom Coughlin said at the ceremony. “Prior methods required significantly more computations, making FFT a revolutionary breakthrough. By leveraging algebraic properties and periodicities, the FFT reduced the number of the operations, making it particularly and practically feasible for everyday tasks, replacing the less efficient analog methods.” A new mathematical tool In 1963 Tukey, a professor of mathematics and statistics at Princeton, participated in a meeting of U.S. President John F. Kennedy’s Science Advisory Committee to discuss ways to detect underground nuclear tests, according to the ETHW entry. Also attending that meeting was Richard Garwin, a physicist and engineer at IBM who played a key role in designing the first hydrogen bomb. He died in May. Read about his fascinating life in this month’s In Memoriam. Tukey told Garwin he was working on speeding up the computation of an existing method—the Fourier transform—thinking it might help with the detection. His algorithm mathematically converted a signal from its original domain, such as time or space, to a frequency domain. Garwin recognized its potential and asked IBM to select a mathematical analyst to collaborate with Tukey. That person was Cooley, a research staff member working on numerical analysis and computation projects. If the Fourier transform could be made faster, Garwin said, seismometers could be planted in the ground in countries surrounding the Soviet Union to detect nuclear explosions from atomic bomb tests, because the Soviets wouldn’t allow on-site tests, according to Cooley’s oral history in the Engineering and Technology History Wiki. A seismometer measures ground vibrations, which are converted into electrical signals and recorded as seismograms. To design sensors for underground nuclear tests, however, “you would have to process all the seismic signals, and a large part of the processing could be done by Fourier transforms,” Cooley said in his oral history. But “the computing power at the time was not enough to process all of the signals you’d need to do this.” The FFT could calculate a seismic sensor’s frequency and produce images, IEEE Life Fellow Harold S. 
Stone said at the Milestone event. He is an image processing researcher and Fellow emeritus at NEC Laboratories America, in Princeton, and a former IBM researcher. Tukey and Cooley led the team that wrote the computer code that demonstrated the FFT’s power. “The demonstration of the Cooley-Tukey algorithm showed that it was 100 times faster,” Stone said. “It was so fast that it could keep up with the seismic data.” Sensors using the algorithm were planted, and they detected nuclear explosions to within 15 kilometers of where they were detonated, according to the ETHW entry.

In 1965 Cooley and Tukey published “An Algorithm for the Machine Calculation of Complex Fourier Series,” describing the FFT process. The seminal paper spurred development of digital signal processing technologies. For his work, Tukey was awarded a U.S. National Medal of Science in 1973. He also received the 1982 IEEE Medal of Honor for “contributions to the spectral analysis of random processes and the fast Fourier transform algorithm.” Cooley, who received the 2002 IEEE Kilby Signal Processing Medal for pioneering the FFT, was a leading figure in the field of digital signal processing. Through his involvement with the IEEE Digital Signal Processing Committee (today known as the IEEE Signal Processing Society), he helped establish terminology and suggested research directions.

Although not one of the inventors, Garwin is credited with recognizing that the algorithm had wider applications, especially in scientific and engineering fields. “In today’s lingo, Garwin helped the FFT ‘go viral’ by getting Cooley and Tukey together,” Stone said. “Garwin and Tukey sought better information to forestall and prevent wars,” added Frank Anscombe, Tukey’s nephew. “The Cooley-Tukey FFT swiftly advanced this cause by giving a practical, simplifying solution for wavy data. Thanks to the FFT, a technological rubicon began to be crossed: analog-to-digital machines.”

A spirit of collaboration between academia and industry

Like so many innovations, the FFT came out of a collaboration between industry and academia, and it should be recognized for that, IEEE Fellow Andrea Goldsmith said at the ceremony. She explained that she regularly works with the FFT in her research projects. At the time of the event, she was Princeton’s dean of engineering and applied sciences. This month she started her new position as president of Stony Brook University, in New York. “Taking the ideas we have from basic research in our university labs, talking to people in industry, and understanding how the research problems we work on can benefit industry either tomorrow or in five years or 20 years from now, is incredibly important,” she said. “Some people think of engineering as boring and dry and something that only nerds do, but there is such beauty and creativity in a lot of the innovations that we have developed, and I think the FFT is a perfect example of that.”

The FFT joins more than 270 other IEEE Milestones. They are more than a marker of achievement, said IEEE Life Senior Member Bala S. Prasanna, director of IEEE Region 1. “They are a testament to human ingenuity, perseverance, and the spirit of collaboration,” Prasanna said.
“These Milestones were more than just breakthroughs; they became catalysts for innovation, enabling progress in ways once thought impossible. Each one ensures that the story behind these innovations is preserved, not just as history but as inspiration for future generations.” Another ceremony was held on 11 June at the IBM Watson Research Center. Milestone plaques recognizing the FFT are on display in the lobby of Princeton’s School of Engineering and Applied Science and in the main lobby at the entrance of the IBM research center. They read: “In 1964 a computer program implementing a highly efficient Fourier analysis algorithm was demonstrated at IBM Research. Jointly developed by Princeton University and IBM collaborators, the Cooley-Tukey technique calculated discrete Fourier transforms orders of magnitude faster than had been previously demonstrated. Known as the Fast Fourier Transform (FFT), its speed impacted numerous applications including computerized tomography, audio and video compression, signal processing, and real-time data streaming.” Administered by the IEEE History Center and supported by donors, the Milestone program recognizes outstanding technical developments around the world. The IEEE Princeton Central Jersey Section sponsored the nomination.
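To make the speedup concrete, here is a minimal sketch of the divide-and-conquer idea behind the Cooley-Tukey algorithm, written in plain Python for power-of-two lengths and checked against a naive DFT. It illustrates the principle only; production code should rely on an optimized library such as numpy.fft or FFTW.

import cmath

def dft(x):
    # Naive discrete Fourier transform: roughly N^2 complex multiply-adds.
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def fft(x):
    # Recursive radix-2 Cooley-Tukey FFT: split into even and odd samples,
    # transform each half, then combine with twiddle factors (about N log N work).
    N = len(x)
    if N == 1:
        return list(x)
    even, odd = fft(x[0::2]), fft(x[1::2])
    twiddled = [cmath.exp(-2j * cmath.pi * k / N) * odd[k] for k in range(N // 2)]
    return ([even[k] + twiddled[k] for k in range(N // 2)] +
            [even[k] - twiddled[k] for k in range(N // 2)])

if __name__ == "__main__":
    signal = [complex(n % 7 - 3) for n in range(1024)]   # any power-of-two length works
    fast, slow = fft(signal), dft(signal)
    assert max(abs(a - b) for a, b in zip(fast, slow)) < 1e-5

Counting operations makes the gap obvious: for the 1,024-point signal above, the naive transform performs on the order of a million complex multiplications, while the recursive version performs on the order of ten thousand.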

A Cold War Kit for Surviving a Nuclear Attack

On 29 August 1949, the Soviet Union successfully tested its first nuclear weapon. Over the next year and a half, U.S. President Harry S. Truman resurrected the Office of Civilian Defense (which had been abolished at the end of World War II) and signed into law the Federal Civil Defense Act of 1950, which mobilized government agencies to plan for the aftermath of a global nuclear war. With the Cold War underway, that act kicked off a decades-long effort to ensure that at least some Americans survived nuclear armageddon. As the largest civilian federal agency with a presence throughout the country, the U.S. Post Office Department was in a unique position to monitor local radiation levels and shelter residents. By the end of 1964, approximately 1,500 postal buildings had been designated as fallout shelters, providing space and emergency supplies for 1.3 million people. Occupants were expected to remain in the shelters until the radioactivity outside was deemed safe. By 1968, about 6,000 postal employees had been trained to use radiological equipment, such as the CD V-700 pictured at top, to monitor beta and gamma radiation. And a group of postal employees organized a volunteer ham radio network to help with communications should the regular networks go down. What was civil defense in the Cold War? The basic premise of civil defense was that many people would die immediately in cities directly targeted by nuclear attacks. (Check out Alex Wellerstein’s interactive Nukemap for an estimate of casualties and impact should your hometown—or any location of your choosing—be hit.) It was the residents of other cities, suburbs, and rural communities outside the blast area that would most benefit from civil defense preparations. With enough warning, they could shelter in a shielded site and wait for the worst of the fallout to decay. Anywhere from a day or two to a few weeks after the attack, they could emerge and aid any survivors in the harder-hit areas. In 1957, a committee of the Office of Defense Mobilization drafted the report Deterrence and Survival in the Nuclear Age, for President Dwight D. Eisenhower. Better known as the Gaither Report, it called for the creation of a nationwide network of fallout shelters to protect civilians. Government publications such as The Family Fallout Shelter encouraged Americans who had the space, the resources, and the will to construct shelters for their homes. City dwellers in apartment buildings warranted only half a page in the booklet, with the suggestion to head to the basement and cooperate with other residents. This model fallout shelter from 1960 was designed for four to six people. Bettmann/Getty Images Ultimately, very few homeowners actually built a fallout shelter. But Rod Serling, creator of the television series “The Twilight Zone,” saw an opportunity for pointed social commentary. Aired in the fall of 1961, the episode “The Shelter” showed how quickly civilization (epitomized by a suburban middle-class family and their friends) broke down over decisions about who would be saved and who would not. Meanwhile, President John F. Kennedy had started to shift the national strategy from individual shelters to community shelters. At his instruction, the U.S. Army Corps of Engineers began surveying existing buildings suitable for public shelters. Post offices, especially ones with basements capable of housing at least 50 people, were a natural fit. 
Each postmaster was designated as the local shelter manager and granted complete authority to operate the shelter, including determining who would be admitted or excluded. The Handbook for Fallout Shelter Management gave guidance for everything from sleeping arrangements to sanitation standards. Shelters were stocked with food and water, medicine, and, of course, radiological survey instruments.

What to do in case of a nuclear attack

These community fallout shelters were issued a standard kit for radiation detection. The kit came in a cardboard box that contained two radiation monitors, the CD V-700 (a Geiger counter, pictured at top) and the CD V-715 (a simple ion chamber survey meter); two cigar-size CD V-742 dosimeters, to measure a person’s total exposure while wearing the device; and a charger for the dosimeters. Also included was the Handbook for Radiological Monitors, which provided instructions on how to use the equipment and report the results.

Post office fallout shelters were issued standard kits for measuring radioactivity after a nuclear attack. National Postal Museum/Smithsonian Institution

The shelter radiation kit included two radiation monitors, two cigar-size dosimeters, and a charger for the dosimeters. Photoquest/Getty Images

In the event of an attack, the operator would take readings with the CD V-715 at selected locations in the shelter. Then, within three minutes of finishing the indoor measurements, he would go outside and take a reading at least 25 feet (7.6 meters) from the building. If the radiation level outside was high, there were procedures for decontamination upon returning to the shelter. The “protection factor” of the shelter was calculated by dividing the outside reading by the inside reading. (Today the Federal Emergency Management Agency, FEMA, recommends a PF of at least 40 for a fallout shelter.) Operators were directed to retake the measurements and recalculate the protection factor at least once every 24 hours, or more frequently if the radiation levels changed rapidly. The CD V-700 was intended for detecting beta and gamma radiation during cleanup and decontamination operations, and also for detecting any radioactive contamination of food, water, and personnel.

RELATED: DIY Gamma-Ray Spectroscopy With a Raspberry Pi Pico

Each station would report its dose rates to a regional control center, so that the civil defense organization could determine when people could leave their shelter, where they could go, what routes to take, and what facilities needed decontamination. But if you’ve lived through a natural or manmade disaster, you’ll know that in the immediate aftermath, communications don’t always work so well. Indeed, the Handbook for Radiological Monitors acknowledged that a nuclear attack might disrupt communications. Luckily, the U.S. Post Office Department had a backup plan.

In May 1958, Postmaster General Arthur E. Summerfield made an appeal to all postal employees who happened to be licensed amateur radio operators to form an informal network that would provide emergency communications in the event of the collapse of telephone and telegraph networks and commercial broadcasting. The result was Post Office Net (PON), a voluntary group of ham radio operators; by 1962, about 1,500 postal employees in 43 states had signed on. That year, PON was opened up to nonemployees who had the necessary license.
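As an aside, the protection-factor arithmetic described above fits in a few lines of Python. The readings below are hypothetical and only show the calculation; the threshold of 40 is the FEMA guidance mentioned earlier.

def protection_factor(outside_reading, inside_reading):
    # PF = outside dose rate divided by inside dose rate.
    return outside_reading / inside_reading

outside_r_per_hr = 20.0   # hypothetical CD V-715 reading outside, roentgens per hour
inside_r_per_hr = 0.4     # hypothetical reading inside the shelter
pf = protection_factor(outside_r_per_hr, inside_r_per_hr)
print(f"PF = {pf:.0f}; {'meets' if pf >= 40 else 'falls short of'} the recommended PF of 40")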
RELATED: The Uncertain Future of Ham Radio Although PON was never activated due to a nuclear threat, it did transmit messages during other emergencies. For example, in January 1967, after an epic blizzard blanketed Illinois and Michigan with heavy snow, the Michigan PON went into action, setting up liaisons with county weather services and relaying emergency requests, such as rescuing people stranded in vehicles on Interstate 94. A 1954 civil defense fair featured a display of amateur radios. The U.S. Post Office recruited about 1,500 employees to operate a ham radio network in the event that regular communications went down. National Archives The post office retired the network on 30 June 1974 as part of its shift away from civil defense preparedness. (A volunteer civil emergency-response ham radio network still exists, under the auspices of the American Radio Relay League.) And by 1977, laboratory tests indicated that most of the food and medicine stockpiled in post office basements was no longer fit for human consumption. In 1972 the Office of Civil Defense was replaced by the Defense Civil Preparedness Agency, which was eventually folded into FEMA. And with the end of the Cold War, the civil defense program officially ended in 1994, fortunately without ever being needed for a nuclear attack. Do we still need civil defense? The idea for this column came to me last fall, when I was doing research at the Linda Hall Library, in Kansas City, Mo., and I kept coming across articles about civil defense in magazines and journals from the 1950s and ’60s. I knew that the Smithsonian’s National Postal Museum, in Washington, D.C., had several civil defense artifacts (including the CD V-700 and a great “In Time of Emergency” public service announcement record album). As a child of the late Cold War, I remember being worried by the prospect of nuclear war. But then the Cold War ended, and so did my fears. I envisioned this month’s column capturing the intriguing history of civil defense and the earnest preparations of the era. That chapter of history, I assumed, was closed. Little did I imagine that by the time I began to write this, the prospect of a nuclear attack, if not an all-out war, would suddenly become much more real. These days, I understand the complexities and nuances of nuclear weapons much better than when I was a child. But I’m just as concerned that a nuclear conflict is imminent. Here’s hoping that history repeats itself, and it does not come to that. Part of a continuing series looking at historical artifacts that embrace the boundless potential of technology. An abridged version of this article appears in the August 2025 print issue. References The November 1951 issue of Electrical Engineering summarized a civil defense conference held at the General Electric Co.’s Electronics Park in Syracuse, N.Y., earlier that year. Two hundred eighty federal, state, county, and city officials from across the United States and Canada attended, which got me thinking about the topic. Many of the government’s civil defense handbooks are available through the Internet Archive. The U.S. Postal Bulletins have also been digitized, and the USPS historian’s office wrote a great account, “The Postal Service’s Role in Civil Defense During the Cold War.” Although I’ve highlighted artifacts from the National Postal Museum, the Smithsonian Institution has many other objects across multiple museums. 
Eric Green has been collecting civil defense material since 1978 and has made much of it available through his virtual Civil Defense Museum. Alex Wellerstein, a historian of nuclear technology at the Stevens Institute of Technology, writes the Substack newsletter Doomsday Machines, where he gives thoughtful commentary on how we think about the end of times, in both fiction and reality. His interactive Nukemap is informative and scary.

This 1945 TV Console Showed Two Programs at Once

As I try to write this article, my friend and I have six different screens attached to three types of devices. We’re working in the same room but on our own projects—separate yet together, a comfortable companionship. I had never really thought of the proliferation of screens as a peacekeeping tool until I stumbled across one of Allen B. DuMont’s 1950s dual-screen television sets. DuMont’s idea was to let two people in the same room watch different programs. It reminded me of my early childhood and my family’s one TV set, and the endless arguments with my sisters and parents over what to watch. Dad always won, and his choice was rarely mine. The DuMont Duoscopic Was 2 TVs in 1 Allen B. DuMont was a pioneer of commercial television in the United States. His eponymous company manufactured cathode-ray tubes and in 1938 introduced one of the earliest electronic TV sets. He understood how human nature and a shortage of TV screens could divide couples, siblings, and friends. Accordingly, he built at least two prototype TVs that could play two shows at once. In the 1945 prototype shown at top, DuMont retrofitted a maple-finished cabinet that originally held a single 15-inch Plymouth TV receiver to house two black-and-white 12-inch receivers. Separate audio could be played with or without earpieces. Viewers used a 10-turn dial to tune into TV channel 1 (which went off the air in 1948) and VHF channels 2 through 13. As radio was still much more popular than television, the dial also included FM from 88 to 108 megahertz, plus a few channels used for weather and aviation. The lower left drawer held a phonograph. It was an all-in-one entertainment center. To view their desired programs on the DuMont Duoscopic TV set, this family wore polarized glasses and listened through earpieces.Allen DuMont/National Museum of American History/Smithsonian In 1954, DuMont introduced a different approach. With the DuMont Duoscopic, two different channels were broadcast on a single screen. To the naked eye, the images appeared superimposed on one another. But a viewer who wore polarized glasses or looked at the screen through a polarized panel saw just one of the images. Duoscopic viewers could use an earpiece to listen to the audio of their choice. You could also use the TV set to watch a single program by selecting only one channel and playing the audio through one speaker. DuMont seemed committed to the idea that family members should spend time together, even if they were engaged in different activities. An image of the Duoscopic sent out by the Associated Press Wirephoto Service heralded “No more lonely nights for the missus.” According to the caption, she could join “Hubby,” who was already relaxing in his comfy armchair enjoying his favorite show, but now watch something of her own choosing. “Would you believe it?” a Duoscopic brochure asks. “While HE sees and hears the fights, SHE sees and hears her play…. Separate viewing and solo sound allows your family a choice.” The technology to separate and isolate the images and audio was key. The Duoscopic had two CRTs, each with its own feed, set at right angles to each other. A half-silvered mirror superimposed the two images onto a single screen, which could then be filtered with polarized glasses or screens. TV pioneer Allen B. DuMont designed and manufactured cathode ray tubes and TV sets and launched an early TV network.Science History Images/Alamy A separate box could be conveniently placed nearby to control the volume of each program. 
Users could toggle between the two programs with the flick of a switch. Each set came with eight earpieces with long cords. A short note in the March 1954 issue of Electrical Engineering praises the engineers who crafted the sound system to eliminate sound bleed from the speakers. It notes that a viewer “very easily could watch one television program and listen to the audio content of a second.” Or, as a United Press piece published in the Panama City News Herald suggested, part of the family could use the earpieces to watch and listen to the TV while others in the room could “read, play bridge, or just sit and brood.” I suspect the brooders were the children who still didn’t get to watch their favorite show. Of course, choice was a relative matter. In the 1950s, many U.S. television markets were lucky to have even two channels. Only in major metropolitan areas were there more programming options. The only known example of DuMont’s side-by-side version resides at the South Carolina State Museum, in Columbia. But sources indicate that DuMont planned to manufacture about 30 Duoscopics for demonstration purposes, although it’s unclear how many were actually made. (The Smithsonian’s National Museum of American History has a Duoscopic in its collections.) Alas, neither version ever went into mainstream production. Perhaps that’s because the economics didn’t make sense: Even in the early 1950s, it would have been easier and cheaper for families to simply purchase two television sets and watch them in different rooms. Who Was Early TV Pioneer Allen DuMont? DuMont is an interesting figure in the history of television because he was actively engaged in the full spectrum of the industry. Not only did he develop and manufacture receivers, he also conducted broadcasting experiments, published papers on transmission and reception, ran a television network, and produced programming. After graduating from Rensselaer Polytechnic Institute in 1924 with a degree in electrical engineering, DuMont worked in a plant that manufactured vacuum tubes. Four years later, he joined the De Forest Radio Co. as chief engineer. With Lee de Forest, DuMont helped design an experimental mechanical television station, but he was unconvinced by the technology and advocated for all-electric TV for its crisper image. RELATED: In 1926, TV Was Mechanical When the Radio Corporation of America acquired De Forest Radio in 1931, DuMont started his own laboratory in his basement, where he worked on improving cathode ray tubes. In 1932 he invented the “magic eye,” a vacuum tube that was a visual tuning aid in radio receivers. He sold the rights to RCA. In 1935, DuMont moved the operation to a former pickle factory in Passaic, N.J., and incorporated it as the Allen B. DuMont Laboratories. The company produced cathode ray oscilloscopes, which helped finance his experiments with television. He debuted the all-electronic DuMont 180 TV set in June 1938. It cost US $395, or almost $9,000 today—so not exactly an everyday purchase for most people. Although DuMont was quick to market, RCA and the Television Corp. of America were right on his tail. RELATED: RCA’s Lucite Phantom Teleceiver Introduced the Idea of TV Of course, if companies were going to sell televisions, consumers had to have programs to watch. So in 1939, DuMont launched his own television network, starting with station W2XWV, broadcasting from Passaic. The Federal Communications Commission licensed W2XWV as an experimental station for television research. 
DuMont received a commercial license and changed its call sign to WABD on 2 May 1944, three years after NBC’s and CBS’s commercial stations went into operation in New York City. Due to wartime restrictions and debates over industry standards, television remained mostly experimental during World War II. As of September 1944, there were only six stations operating—three in New York City and one each in Chicago, Los Angeles, and Philadelphia. There were approximately 7,000 TV sets in personal use. The DuMont Television Network’s variety show hosted by Jackie Gleason [left, hands raised] featured a recurring skit that later gave rise to “The Honeymooners.”Left: CBS/Getty Images; Right: Garry Winogrand/Picture Post/Hulton Archive/Getty Images While other networks focused on sports, movies, or remote broadcasts, the DuMont Television Network made its mark with live studio broadcasts. In April 1946, WABD moved its studios to the Wanamaker Department Store in Manhattan. DuMont converted the 14,200-cubic-meter (500,000-cubic-foot) auditorium into the world’s largest television studio. The network’s notable programming included “The Original Amateur Hour,” which started as a radio program; “The Johns Hopkins Science Review,” which had a surprisingly progressive take on women’s health; “Life Is Worth Living,” a devotional show hosted by Catholic Bishop Fulton Sheen (that garnered DuMont’s only Emmy Award); “Cavalcade of Stars,” a variety show hosted by Jackie Gleason that birthed “The Honeymooners”; and “Captain Video and His Video Rangers,” a children’s science fiction series, the first of its genre. My grandmother, who loved ballroom dancing, was a big fan of “The Arthur Murray Party,” a dance show hosted by Arthur’s wife, Kathryn; my mom fondly recalls Kathryn’s twirling skirts. While NBC, CBS, and the other major television players built their TV networks on their existing radio networks, DuMont was starting fresh. To raise capital for his broadcast station, he sold a half-interest in his company to Paramount Pictures in 1938. The partnership was contentious from the start. There were disputes over money, the direction of the venture, and stock. But perhaps the biggest conflict was when Paramount and some of its subsidiaries began applying for FCC licenses in the same markets as Dumont’s. This ate into the DuMont network’s advertising and revenue and its plans to expand. In August 1955, Paramount gained full control over the DuMont network and proceeded to shut it down. DuMont continued to manufacture television receivers until 1958, when he sold the business to the Emerson Radio & Phonograph Corp. Two years later, the remainder of DuMont Labs merged with the Fairchild Camera and Instrument Corp. (whose founder, Sherman Fairchild, had in 1957 helped a group of ambitious young scientists and engineers known as the “Traitorous Eight” set up Fairchild Semiconductor). Allen DuMont served as general manager of the DuMont division for a year and then became a technical consultant to Fairchild. He died in 1965. One Thing Allen DuMont Missed My family eventually got a second and then a third television, but my dad always had priority. He watched the biggest set from his recliner in the family room, while my mom made do with the smaller sets in the kitchen and bedroom. He was relaxing, while she was usually doing chores. As a family, we would watch different shows in separate places. 
An ad for the DuMont Duoscopic touted it as a device for household harmony: “While HE sees and hears the fights, SHE sees and hears her play.” National Museum of American History/Smithsonian These days, with so many screens on so many devices and so many programming options, we may have finally achieved DuMont’s vision of separate but together. While I was writing this piece, my friend was watching the French Open on the main TV, muted so she didn’t disturb me. She streamed the same channel on her tablet and routed the audio to her headset. We both worked on our respective laptops and procrastinated by checking messages on our phones. But there’s one aspect of human nature that DuMont’s prototypes and promotional materials failed to address—that moment when someone sees something so exciting that they just have to share it. Sarah and I were barely getting any work done in this separate-but-together setting because we kept interrupting each other with questions, comments, and the occasional tennis update. We’ve been friends too long; we can’t help but chitchat. The only way for me to actually finish this article will be to go to a room by myself with no other screens or people to distract me. Part of a continuing series looking at historical artifacts that embrace the boundless potential of technology. An abridged version of this article appears in the July 2025 print issue as “The 2-in-1 TV.” References I first learned about the Duoscopic in a short article in the March 1954 issue of Electrical Engineering, a precursor publication to Spectrum. My online research turned up several brochures and newspaper articles from the Early Television Museum, which surprisingly led me to the dual-screen DuMont at the South Carolina State Museum in my hometown of Columbia, S.C. Museum objects are primary sources, and I was fortunate to be able to visit this amazing artifact and examine it with Director of Collections Robyn Thiesbrummel. I also consulted the museum’s accession file, which gave additional information about the receiver from the time of acquisition. I took a look at Gary Newton Hess’s 1960 dissertation, An Historical Study of the Du Mont Television Network, as well as several of Allen B. DuMont’s papers published in the Proceedings of the IRE and Electrical Engineering.

How the Rubin Observatory Will Reinvent Astronomy

Night is falling on Cerro Pachón. Stray clouds reflect the last few rays of golden light as the sun dips below the horizon. I focus my camera across the summit to the westernmost peak of the mountain. Silhouetted within a dying blaze of red and orange light looms the sphinxlike shape of the Vera C. Rubin Observatory. “Not bad,” says William O’Mullane, the observatory’s deputy project manager, amateur photographer, and master of understatement. We watch as the sky fades through reds and purples to a deep, velvety black. It’s my first night in Chile. For O’Mullane, and hundreds of other astronomers and engineers, it’s the culmination of years of work, as the Rubin Observatory is finally ready to go “on sky.” Rubin is unlike any telescope ever built. Its exceptionally wide field of view, extreme speed, and massive digital camera will soon begin the 10-year Legacy Survey of Space and Time (LSST) across the entire southern sky. The result will be a high-resolution movie of how our solar system, galaxy, and universe change over time, along with hundreds of petabytes of data representing billions of celestial objects that have never been seen before. Stars begin to appear overhead, and O’Mullane and I pack up our cameras. It’s astronomical twilight, and after nearly 30 years, it’s time for Rubin to get to work. On 23 June, the Vera C. Rubin Observatory released the first batch of images to the public. One of them, shown here, features a small section of the Virgo cluster of galaxies. Visible are two prominent spiral galaxies (lower right), three merging galaxies (upper right), several groups of distant galaxies, and many stars in the Milky Way galaxy. Created from over 10 hours of observing data, this image represents less than 2 percent of the field of view of a single Rubin image. NSF-DOE Rubin Observatory A second image reveals clouds of gas and dust in the Trifid and Lagoon nebulae, located several thousand light-years from Earth. It combines 678 images taken by the Rubin Observatory over just seven hours, revealing faint details—like nebular gas and dust—that would otherwise be invisible. NSF-DOE Rubin Observatory Engineering the Simonyi Survey Telescope The top of Cerro Pachón is not a big place. Spanning about 1.5 kilometers at 2,647 meters of elevation, its three peaks are home to the Southern Astrophysical Research Telescope (SOAR), the Gemini South Telescope, and for the last decade, the Vera Rubin Observatory construction site. An hour’s flight north of the Chilean capital of Santiago, these foothills of the Andes offer uniquely stable weather. The Humboldt Current flows just offshore, cooling the surface temperature of the Pacific Ocean enough to minimize atmospheric moisture, resulting in some of the best “seeing,” as astronomers put it, in the world. It’s a complicated but exciting time to be visiting. It’s mid-April of 2025, and I’ve arrived just a few days before “first photon,” when light from the night sky will travel through the completed telescope and into its camera for the first time. In the control room on the second floor, engineers and astronomers make plans for the evening’s tests. O’Mullane and I head up into a high bay that contains the silvering chamber for the telescope’s mirrors and a clean room for the camera and its filters. Increasingly exhausting flights of stairs lead to the massive pier on which the telescope sits, and then up again into the dome. I suddenly feel very, very small. 
The Simonyi Survey Telescope towers above us—350 tonnes of steel and glass, nestled within the 30-meter-wide, 650-tonne dome. One final flight of stairs and we’re standing on the telescope platform. In its parked position, the telescope is pointed at horizon, meaning that it’s looking straight at me as I step in front of it and peer inside. The telescope’s enormous 8.4-meter primary mirror is so flawlessly reflective that it’s essentially invisible. Made of a single piece of low-expansion borosilicate glass covered in a 120-nanometer-thick layer of pure silver, the huge mirror acts as two different mirrors, with a more pronounced curvature toward the center. Standing this close means that different reflections of the mirrors, the camera, and the structure of the telescope all clash with one another in a way that shifts every time I move. I feel like if I can somehow look at it in just the right way, it will all make sense. But I can’t, and it doesn’t. I’m rescued from madness by O’Mullane snapping photos next to me. “Why?” I ask him. “You see this every day, right?” “This has never been seen before,” he tells me. “It’s the first time, ever, that the lens cover has been off the camera since it’s been on the telescope.” Indeed, deep inside the nested reflections I can see a blue circle, the r-band filter within the camera itself. As of today, it’s ready to capture the universe. Rubin’s Wide View Unveils the Universe Back down in the control room, I find director of construction Željko Ivezić. He’s just come up from the summit hotel, which has several dozen rooms for lucky visitors like myself, plus a few even luckier staff members. The rest of the staff commutes daily from the coastal town of La Serena, a 4-hour round trip. To me, the summit hotel seems luxurious for lodgings at the top of a remote mountain. But Ivezić has a slightly different perspective. “The European-funded telescopes,” he grumbles, “have swimming pools at their hotels. And they serve wine with lunch! Up here, there’s no alcohol. It’s an American thing.” He’s referring to the fact that Rubin is primarily funded by the U.S. National Science Foundation and the U.S. Department of Energy’s Office of Science, which have strict safety requirements. Originally, Rubin was intended to be a dark-matter survey telescope, to search for the 85 percent of the mass of the universe that we know exists but can’t identify. In the 1970s, astronomer Vera C. Rubin pioneered a spectroscopic method to measure the speed at which stars orbit around the centers of their galaxies, revealing motion that could be explained only by the presence of a halo of invisible mass at least five times the apparent mass of the galaxies themselves. Dark matter can warp the space around it enough that galaxies act as lenses, bending light from even more distant galaxies as it passes around them. It’s this gravitational lensing that the Rubin observatory was designed to detect on a massive scale. But once astronomers considered what else might be possible with a survey telescope that combined enormous light-collecting ability with a wide field of view, Rubin’s science mission rapidly expanded beyond dark matter. Trading the ability to focus on individual objects for a wide field of view that can see tens of thousands of objects at once provides a critical perspective for understanding our universe, says Ivezić. Rubin will complement other observatories like the Hubble Space Telescope and the James Webb Space Telescope. 
Hubble’s Wide Field Camera 3 and Webb’s Near Infrared Camera have fields of view of less than 0.05 square degrees each, equivalent to just a few percent of the size of a full moon. The upcoming Nancy Grace Roman Space Telescope will see a bit more, with a field of view of about one full moon. Rubin, by contrast, can image 9.6 square degrees at a time—about 45 full moons’ worth of sky. RELATED: A Trillion Rogue Planets and Not One Sun to Shine on Them That ultrawide view offers essential context, Ivezić explains. “My wife is American, but I’m from Croatia,” he says. “Whenever we go to Croatia, she meets many people. I asked her, ‘Did you learn more about Croatia by meeting many people very superficially, or because you know me very well?’ And she said, ‘You need both. I learn a lot from you, but you could be a weirdo, so I need a control sample.’ ” Rubin is providing that control sample, so that astronomers know just how weird whatever they’re looking at in more detail might be. Every night, the telescope will take a thousand images, one every 34 seconds. After three or four nights, it’ll have the entire southern sky covered, and then it’ll start all over again. After a decade, Rubin will have taken more than 2 million images, generated 500 petabytes of data, and visited every object it can see at least 825 times. In addition to identifying an estimated 6 million bodies in our solar system, 17 billion stars in our galaxy, and 20 billion galaxies in our universe, Rubin’s rapid cadence means that it will be able to delve into the time domain, tracking how the entire southern sky changes on an almost daily basis. Cutting-Edge Technology Behind Rubin’s Speed Achieving these science goals meant pushing the technical envelope on nearly every aspect of the observatory. But what drove most of the design decisions is the speed at which Rubin needs to move (3.5 degrees per second)—the phrase most commonly used by the Rubin staff is “crazy fast.” Crazy fast movement is why the telescope looks the way it does. The squat arrangement of the mirrors and camera centralizes as much mass as possible. Rubin’s oversize supporting pier is mostly steel rather than mostly concrete so that the movement of the telescope doesn’t twist the entire pier. And then there’s the megawatt of power required to drive this whole thing, which comes from huge banks of capacitors slung under the telescope to prevent a brownout on the summit every 30 seconds all night long. Rubin is also unique in that it utilizes the largest digital camera ever built. The size of a small car and weighing 2,800 kilograms, the LSST camera captures 3.2-gigapixel images through six swappable color filters ranging from near infrared to near ultraviolet. The camera’s focal plane consists of 189 4K-by-4K charge-coupled devices grouped into 21 “rafts.” Every CCD is backed by 16 amplifiers that each read 1 million pixels, bringing the readout time for the entire sensor down to 2 seconds flat. Astronomy in the Time Domain As humans with tiny eyeballs and short lifespans who are more or less stranded on Earth, we have only the faintest idea of how dynamic our universe is. To us, the night sky seems mostly static and also mostly empty. This is emphatically not the case. In 1995, the Hubble Space Telescope pointed at a small and deliberately unremarkable part of the sky for a cumulative six days. The resulting image, called the Hubble Deep Field, revealed about 3,000 distant galaxies in an area that represented just one twenty-four-millionth of the sky. 
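Deep images like that one come from stacking many exposures of the same patch of sky, which is essentially what Rubin’s coadding, described below, will do at survey scale. Here is a minimal sketch of why stacking goes deeper, assuming simple mean stacking of already-aligned exposures with Gaussian noise; the real Rubin pipeline is far more elaborate (astrometric warping, PSF matching, outlier rejection).

import numpy as np

rng = np.random.default_rng(0)
true_sky = np.zeros((64, 64))
true_sky[32, 32] = 0.5                 # a faint source, well below the per-image noise

def single_exposure():
    # One simulated visit: the true sky plus noise with standard deviation 1.
    return true_sky + rng.normal(0.0, 1.0, true_sky.shape)

for n_visits in (1, 25, 825):          # 825 is roughly the number of visits per field
    coadd = np.mean([single_exposure() for _ in range(n_visits)], axis=0)
    noise = 1.0 / np.sqrt(n_visits)    # noise in the mean shrinks as sqrt(N)
    print(f"{n_visits:4d} visits: source pixel = {coadd[32, 32]:.2f}, "
          f"noise = {noise:.3f}, SNR ~ {coadd[32, 32] / noise:.1f}")

In a single visit the faint source is lost in the noise; across hundreds of visits it stands out clearly, which is the effect Rubin’s decade of repeat imaging is designed to exploit.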
To observatories like Hubble, and now Rubin, the sky is crammed full of so many objects that it becomes a problem. As O’Mullane puts it, “There’s almost nothing not touching something.” One of Rubin’s biggest challenges will be deblending—­identifying and then separating things like stars and galaxies that appear to overlap. This has to be done carefully by using images taken through different filters to estimate how much of the brightness of a given pixel comes from each object. At first, Rubin won’t have this problem. At each location, the camera will capture one 30-second exposure before moving on. As Rubin returns to each location every three or four days, subsequent exposures will be combined in a process called coadding. In a coadded image, each pixel represents all of the data collected from that location in every previous image, which results in a much longer effective exposure time. The camera may record only a few photons from a distant galaxy in each individual image, but a few photons per image added together over 825 images yields much richer data. By the end of Rubin’s 10-year survey, the coadding process will generate images with as much detail as a typical Hubble image, but over the entire southern sky. A few lucky areas called “deep drilling fields” will receive even more attention, with each one getting a staggering 23,000 images or more. Rubin will add every object that it detects to its catalog, and over time, the catalog will provide a baseline of the night sky, which the observatory can then use to identify changes. Some of these changes will be movement—Rubin may see an object in one place, and then spot it in a different place some time later, which is how objects like near-Earth asteroids will be detected. But the vast majority of the changes will be in brightness rather than movement. RELATED: Three Steps to Stopping Killer Asteroids Every image that Rubin collects will be compared with a baseline image, and any change will automatically generate a software alert within 60 seconds of when the image was taken. Rubin’s wide field of view means that there will be a lot of these alerts—on the order of 10,000 per image, or 10 million alerts per night. Other automated systems will manage the alerts. Called alert brokers, they ingest the alert streams and filter them for the scientific community. If you’re an astronomer interested in Type Ia supernovae, for example, you can subscribe to an alert broker and set up a filter so that you’ll get notified when Rubin spots one. Many of these alerts will be triggered by variable stars, which cyclically change in brightness. Rubin is also expected to identify somewhere between 3 million and 4 million supernovae—that works out to over a thousand new supernovae for every night of observing. And the rest of the alerts? Nobody knows for sure, and that’s why the alerts have to go out so quickly, so that other telescopes can react to make deeper observations of what Rubin finds. Managing Rubin’s Vast Data Output After the data leaves Rubin’s camera, most of the processing will take place at the SLAC National Accelerator Laboratory in Menlo Park, Calif., over 9,000 kilometers from Cerro Pachón. It takes less than 10 seconds for an image to travel from the focal plane of the camera to SLAC, thanks to a 600-gigabit fiber connection from the summit to La Serena, and from there, a dedicated 100-gigabit line and a backup 40-gigabit line that connect to the Department of Energy’s science network in the United States. 
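A back-of-envelope calculation, using the camera figures quoted earlier and assuming 2 bytes per pixel and roughly 10 hours of observing per night (assumptions for illustration, not official numbers), gives a feel for what flows over those links:

ccds = 189
pixels_per_ccd = 4096 * 4096            # 4K-by-4K sensors
amplifiers_per_ccd = 16
bytes_per_pixel = 2                     # assumed for illustration
cadence_s = 34                          # one image every 34 seconds
night_hours = 10                        # assumed usable observing time per night

total_pixels = ccds * pixels_per_ccd
image_bytes = total_pixels * bytes_per_pixel
images_per_night = night_hours * 3600 // cadence_s

print(f"Focal plane: {total_pixels / 1e9:.2f} gigapixels")
print(f"Pixels per amplifier: {pixels_per_ccd / amplifiers_per_ccd / 1e6:.2f} million")
print(f"Raw image size: about {image_bytes / 1e9:.1f} GB")
print(f"Images per night: about {images_per_night}")
print(f"Average raw data rate: about {image_bytes * 8 / cadence_s / 1e9:.1f} Gb/s")

Those raw frames alone add up to several terabytes a night before any processing products are counted.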
Managing Rubin’s Vast Data Output

After the data leaves Rubin’s camera, most of the processing will take place at the SLAC National Accelerator Laboratory in Menlo Park, Calif., over 9,000 kilometers from Cerro Pachón. It takes less than 10 seconds for an image to travel from the focal plane of the camera to SLAC, thanks to a 600-gigabit fiber connection from the summit to La Serena, and from there, a dedicated 100-gigabit line and a backup 40-gigabit line that connect to the Department of Energy’s science network in the United States.

The 20 terabytes of data that Rubin will produce nightly makes this bandwidth necessary. “There’s a new image every 34 seconds,” O’Mullane tells me. “If I can’t deal with it fast enough, I start to get behind. So everything has to happen on the cadence of half a minute if I want to keep up with the data flow.” At SLAC, each image will be calibrated and cleaned up, including the removal of satellite trails. Rubin will see a lot of satellites, but since the satellites are unlikely to appear in the same place in every image, the impact on the data is expected to be minimal when the images are coadded. The processed image is compared with a baseline image and any alerts are sent out, by which time processing of the next image has already begun.

As Rubin’s catalog of objects grows, astronomers will be able to query it in all kinds of useful ways. Want every image of a particular patch of sky? No problem. All the galaxies of a certain shape? A little trickier, but sure. Looking for 10,000 objects that are similar in some dimension to 10,000 other objects? That might take a while, but it’s still possible. Astronomers can even run their own code on the raw data. “Pretty much everyone in the astronomy community wants something from Rubin,” O’Mullane explains, “and so they want to make sure that we’re treating the data the right way. All of our code is public. It’s on GitHub. You can see what we’re doing, and if you’ve got a better solution, we’ll take it.”

One better solution may involve AI. “I think as a community we’re struggling with how we do this,” says O’Mullane. “But it’s probably something we ought to do—curating the data in such a way that it’s consumable by machine learning, providing foundation models, that sort of thing.”

The data management system is arguably as much of a critical component of the Rubin observatory as the telescope itself. While most telescopes make targeted observations that get distributed to only a few astronomers at a time, Rubin will make its data available to everyone within just a few days, which is a completely different way of doing astronomy. “We’ve essentially promised that we will take every image of everything that everyone has ever wanted to see,” explains Kevin Reil, Rubin observatory scientist. “If there’s data to be collected, we will try to collect it. And if you’re an astronomer somewhere, and you want an image of something, within three or four days we’ll give you one. It’s a colossal challenge to deliver something on this scale.”

The more time I spend on the summit, the more I start to think that the science that we know Rubin will accomplish may be the least interesting part of its mission. And despite their best efforts, I get the sense that everyone I talk to is wildly understating the impact it will have on astronomy. The sheer volume of objects, the time domain, the 10 years of coadded data—what new science will all of that reveal? Astronomers have no idea, because we’ve never looked at the universe in this way before. To me, that’s the most fascinating part of what’s about to happen.
Reil agrees. “You’ve been here,” he says. “You’ve seen what we’re doing. It’s a paradigm shift, a whole new way of doing things. It’s still a telescope and a camera, but we’re changing the world of astronomy. I don’t know how to capture—I mean, it’s the people, the intensity, the awesomeness of it. I want the world to understand the beauty of it all.”

The Intersection of Science and Engineering

Because nobody has built an observatory like Rubin before, there are a lot of things that aren’t working exactly as they should, and a few things that aren’t working at all. The most obvious of these is the dome. The capacitors that drive it blew a fuse the day before I arrived, and the electricians are off the summit for the weekend. The dome shutter can’t open either. Everyone I talk to takes this sort of thing in stride—they have to, because they’ve been troubleshooting issues like these for years.

I sit down with Yousuke Utsumi, a camera operations scientist who exudes the mixture of excitement and exhaustion that I’m getting used to seeing in the younger staff. “Today is amazingly quiet,” he tells me. “I’m happy about that. But I’m also really tired. I just want to sleep.” Just yesterday, Utsumi says, they managed to finally solve a problem that the camera team had been struggling with for weeks—an intermittent fault in the camera cooling system that only seemed to happen when the telescope was moving. This was potentially a very serious problem, and Utsumi’s phone would alert him every time the fault occurred, over and over again in the middle of the night. The fault was finally traced to a cable within the telescope’s structure that used pins that were slightly too small, leading to a loose connection.

Utsumi’s contract started in 2017 and was supposed to last three years, but he’s still here. “I wanted to see first photon,” he says. “I’m an astronomer. I’ve been working on this camera so that it can observe the universe. And I want to see that light, from those photons from distant galaxies.” This is something I’ve also been thinking about—those lonely photons traveling through space for billions of years, and within the coming days, a lucky few of them will land on the sensors Utsumi has been tending, and we’ll get to see them. He nods, smiling. “I don’t want to lose one, you know?”

Rubin’s commissioning scientists have a unique role, working at the intersection of science and engineering to turn a bunch of custom parts into a functioning science instrument. Commissioning scientist Marina Pavlovic is a postdoc from Serbia with a background in the formation of supermassive black holes created by merging galaxies. “I came here last year as a volunteer,” she tells me. “My plan was to stay for three months, and 11 months later I’m a commissioning scientist. It’s crazy!”

Pavlovic’s job is to help diagnose and troubleshoot whatever isn’t working quite right. And since most things aren’t working quite right, she’s been very busy. “I love when things need to be fixed because I am learning about the system more and more every time there’s a problem—every day is a new experience here.” I ask her what she’ll do next, once Rubin is up and running. “If you love commissioning instruments, that is something that you can do for the rest of your life, because there are always going to be new instruments,” she says.

Before that happens, though, Pavlovic has to survive the next few weeks of going on sky. “It’s going to be so emotional. It’s going to be the beginning of a new era in astronomy, and knowing that you did it, that you made it happen, at least a tiny percent of it, that will be a priceless moment.”
“I had to learn how to calm down to do this job,” she admits, “because sometimes I get too excited about things and I cannot sleep after that. But it’s okay. I started doing yoga, and it’s working.”

From First Photon to First Light

My stay on the summit comes to an end on 14 April, just a day before first photon, so as soon as I get home I check in with some of the engineers and astronomers that I met to see how things went. Guillem Megias Homar manages the adaptive optics system—232 actuators that flex the surfaces of the telescope’s three mirrors a few micrometers at a time to bring the image into perfect focus. Currently working on his Ph.D., he was born in 1997, one year after the Rubin project started.

First photon, for him, went like this: “I was in the control room, sitting next to the camera team. We have a microphone on the camera, so that we can hear when the shutter is moving. And we hear the first click. And then all of a sudden, the image shows up on the screens in the control room, and it was just an explosion of emotions. All that we have been fighting for is finally a reality. We are on sky!” There were toasts (with sparkling apple juice, of course), and enough speeches that Megias Homar started to get impatient: “I was like, when can we start working? But it was only an hour, and then everything became much more quiet.”

Another newly released image showing a small section of the Rubin Observatory’s total view of the Virgo cluster of galaxies. Visible are bright stars in the Milky Way galaxy shining in the foreground, and many distant galaxies in the background. NSF-DOE Rubin Observatory

“It was satisfying to see that everything that we’d been building was finally working,” Victor Krabbendam, project manager for Rubin construction, tells me a few weeks later. “But some of us have been at this for so long that first photon became just one of many firsts.” Krabbendam has been with the observatory full-time for the last 21 years. “And the very moment you succeed with one thing, it’s time to be doing the next thing.”

Since first photon, Rubin has been undergoing calibrations, collecting data for the first images that it’s now sharing with the world, and preparing to scale up to begin its survey. Operations will soon become routine, the commissioning scientists will move on, and eventually, Rubin will largely run itself, with just a few people at the observatory most nights. But for astronomers, the next 10 years will be anything but routine. “It’s going to be wildly different,” says Krabbendam. “Rubin will feed generations of scientists with trillions of data points of billions of objects. Explore the data. Harvest it. Develop your idea, see if it’s there. It’s going to be phenomenal.”

Listen to a Conversation About the Rubin Observatory

As part of an experiment with AI storytelling tools, author Evan Ackerman—who visited the Vera C. Rubin Observatory in Chile for four days this past April—fed over 14 hours of raw audio from his interviews and other reporting notes into NotebookLM, an AI-powered research assistant developed by Google. The result is a podcast-style audio experience that you can listen to here. While the script and voices are AI-generated, the conversation is grounded in Ackerman’s original reporting, and includes many details that did not appear in the article above. Ackerman reviewed and edited the audio to ensure accuracy, and there are minor corrections in the transcript. Let us know what you think of this experiment in AI narration.

Transcript:

0:01: Today we’re taking a deep dive into the engineering marvel that is the Vera C. Rubin Observatory.
0:06: And and it really is a marvel. 0:08: This project pushes the limits, you know, not just for the science itself, like mapping the Milky Way or exploring dark energy, which is amazing, obviously. 0:16: But it’s also pushing the limits in just building the tools, the technical ingenuity, the, the sheer human collaboration needed to make something this complex actually work. 0:28: That’s what’s really fascinating to me. 0:29: Exactly. 0:30: And our mission for this deep dive is to go beyond the headlines, isn’t it? 0:33: We want to uncover those specific Kind of hidden technical details, the stuff from the audio interviews, the internal docs that really define this observatory. 0:41: The clever engineering solutions. 0:43: Yeah, the nuts and bolts, the answers to challenges nobody’s faced before, stuff that anyone who appreciates, you know, complex systems engineering would find really interesting. 0:53: Definitely. 0:54: So let’s start right at the heart of it. 0:57: The Simonyi survey telescope itself. 1:00: It’s this 350 ton machine inside a 600 ton dome, 30 m wide, huge. [The dome is closer to 650 tons.] 1:07: But the really astonishing part is its speed, speed and precision. 1:11: How do you even engineer something that massive to move that quickly while keeping everything stable down to the submicron level? [Micron level is more accurate.] 1:18: Well, that’s, that’s the core challenge, right? 1:20: This telescope, it can hit a top speed of 3.5 degrees per second. 1:24: Wow. 1:24: Yeah, and it can, you know, move to basically any point in the sky. 1:28: In under 20 seconds, 20 seconds, which makes it by far the fastest moving large telescope ever built, and the dome has to keep up. 1:36: So it’s also the fastest moving dome. 1:38: So the whole building is essentially racing along with the telescope. 1:41: Exactly. 1:41: And achieving that meant pretty much every component had to be custom designed like the pier holding the telescope up. 1:47: It’s mostly steel, not concrete. 1:49: Oh, interesting. 1:50: Why steel? 1:51: Specifically to stop it from twisting or vibrating when the telescope makes those incredibly fast moves. 1:56: Concrete just wouldn’t handle the torque the same way. [The pier is more steel than concrete, but it's still substantially concrete.] 1:59: OK, that makes sense. 1:59: And the power needed to accelerate and decelerate, you know, 300 tons, that must be absolutely massive. 2:06: Oh. 2:06: The instantaneous draw would be enormous. 2:09: How did they manage that without like dimming the lights on the whole. 2:12: Mountaintop every 30 seconds. 2:14: Yeah, that was a real concern, constant brownouts. 2:17: The solution was actually pretty elegant, involving these onboard capacitor banks. 2:22: Yep, slung right underneath the telescope structure. 2:24: They can slowly sip power from the grid, store it up over time, and then bam, discharge it really quickly for those big acceleration surges. 2:32: like a giant camera flash, but for moving a telescope, of yeah. 2:36: It smooths out the demand, preventing those grid disruptions. 2:40: Very clever engineering. 2:41: And beyond the movement, the mirrors themselves, equally critical, equally impressive, I imagine. 2:47: How did they tackle designing and making optics that large and precise? 2:51: Right, so the main mirror, the primary mirror, M1M3. 2:55: It’s a single piece of glass, 8.4 m across, low expansion borosilicate glass. 3:01: And that 8.4 m size, was that just like the biggest they could manage? 
3:05: Well, it was a really crucial early decision. 3:07: The science absolutely required something at least 7 or 8 m wide. 3:13: But going much bigger, say 10 or 12 m, the logistics became almost impossible. 3:19: The big one was transport. 3:21: There’s a tunnel on the mountain road up to the summit, and a mirror, much larger than 8.4 m, physically wouldn’t fit through it. 3:28: No way. 3:29: So the tunnel actually set an upper limit on the mirror size. 3:31: Pretty much, yeah. 3:32: Building new road or some other complex transport method. 3:36: It would have added enormous cost and complexity. 3:38: So 8.4 m was that sweet spot between scientific need. 3:42: And, well, physical reality. 3:43: Wow, a real world constraint driving fundamental design. 3:47: And the mirror itself, you said M1 M3, it’s not just one simple mirror surface. 3:52: Correct. 3:52: It’s technically two mirror surfaces ground into that single piece of glass. 3:57: The central part has a more pronounced curvature. 3:59: It’s M1 and M3 combined. 4:00: OK, so fabricating that must have been tricky, especially with what, 10 tons of glass just in the center. 4:07: Oh, absolutely novel and complicated. 4:09: And these mirrors, they don’t support their own weight rigidly. 4:12: So just handling them during manufacturing, polishing, even getting them out of the casting mold, was a huge engineering challenge. 4:18: You can’t just lift it like a dinner plate. 4:20: Not quite, and then there’s maintaining it, re-silvering. 4:24: They hope to do it every 5 years. 4:26: Well, traditionally, big mirrors like this often need it more, like every 1.5 to 2 years, and it’s a risky weeks-long job. 4:34: You have to unbolt this priceless, unique piece of equipment, move it. 4:39: It’s nerve-wracking. 4:40: I bet. 4:40: And the silver coating itself is tiny, right? 4:42: Incredibly thin, just a few nanometers of pure silver. 4:46: It takes about 24 g for the whole giant surface, bonded with the adhesive layers that are measured in Angstroms. [It's closer to 26 grams of silver.] 4:52: It’s amazing precision. 4:54: So tying this together, you have this fast moving telescope, massive mirrors. 4:59: How do they keep everything perfectly focused, especially with multiple optical elements moving relative to each other? 5:04: that’s where these things called hexapods come in. 5:08: Really crucial bits of kit. 5:09: Hexapods, like six feet? 5:12: Sort of. 5:13: They’re mechanical systems with 6 adjustable arms or struts. 5:17: A simpler telescope might just have one maybe on the camera for basic focusing, but Ruben needs more because it’s got the 3 mirrors plus the camera. 5:25: Exactly. 5:26: So there’s a hexapod mounted on the secondary mirror, M2. 5:29: Its job is to keep M2 perfectly positioned relative to M1 and M3, compensating for tiny shifts or flexures. 5:36: And then there’s another hexapod on the camera itself. 5:39: That one adjusts the position and tilt of the entire camera’s sensor plane, the focal plane. 5:43: To get that perfect focus across the whole field of view. 5:46: And these hexapods move in 6 ways. 5:48: Yep, 6 degrees of freedom. 5:50: They can adjust position along the X, Y, and Z axis, and they can adjust rotation or tilt around those 3 axes as well. 5:57: It allows for incredibly fine adjustments, microp precision stuff. 6:00: So they’re constantly making these tiny tweaks as the telescope moves. 6:04: Constantly. 6:05: The active optics system uses them. 
6:07: It calculates the needed corrections based on reference stars in the images, figures out how the mirror might be slightly bending. 6:13: And then tells the hexapods how to compensate. 6:15: It’s controlling like 26 g of silver coating on the mirror surface down to micron precision, using the mirror’s own natural bending modes. 6:24: It’s pretty wild. 6:24: Incredible. 6:25: OK, let’s pivot to the camera itself. 6:28: The LSST camera. 6:29: Biggest digital camera ever built, right? 6:31: Size of a small car, 2800 kg, captures 3.2 gigapixel images, just staggering numbers. 6:38: They really are, and the engineering inside is just as staggering. 6:41: That focal plane where the light actually hits. 6:43: It’s made up of 189 individual CCD sensors. 6:47: Yep, 4K by 4K CCDs grouped into 21 rafts. 6:50: They give them like tiles, and each CCD has 16 amplifiers reading it out. 6:54: Why so many amplifiers? 6:56: Speed. 6:56: Each amplifier reads out about a million pixels. 6:59: By dividing the job up like that, they can read out the entire 3.2 gigapixel sensor in just 2 seconds. 7:04: 2 seconds for that much data. 7:05: Wow. 7:06: It’s essential for the survey’s rapid cadence. 7:09: Getting all those 189 CCDs perfectly flat must have been, I mean, are they delicate? 7:15: Unbelievably delicate. 7:16: They’re silicon wafers only 100 microns thick. 7:18: How thick is that really? 7:19: about the thickness of a human hair. 7:22: You could literally break one by breathing on it wrong, apparently, seriously, yeah. 7:26: And the challenge was aligning all 189 of them across this 650 millimeter wide focal plane, so the entire surface is flat. 7:34: To within just 24 microns, peak to valley. 7:37: 24 microns. 7:39: That sounds impossibly flat. 7:40: It’s like, imagine the entire United States. 7:43: Now imagine the difference between the lowest point and the highest point across the whole country was only 100 ft. 7:49: That’s the kind of relative flatness they achieved on the camera sensor. 7:52: OK, that puts it in perspective. 7:53: And why is that level of flatness so critical? 7:56: Because the telescope focuses light. 7:58: terribly. 7:58: It’s an F1.2 system, which means it has a very shallow depth of field. 8:02: If the sensors aren’t perfectly in that focal plane, even by a few microns, parts of the image go out of focus. 8:08: Gotcha. 8:08: And the pixels themselves, the little light buckets on the CCDs, are they special? 8:14: They’re custom made, definitely. 8:16: They settled on 10 micron pixels. 8:18: They figured anything smaller wouldn’t actually give them more useful scientific information. 8:23: Because you start hitting the limits of what the atmosphere and the telescope optics themselves can resolve. 8:28: So 10 microns was the optimal size, right? 8:31: balancing sensor tech with physical limits. 8:33: Now, keeping something that sensitive cool, that sounds like a nightmare, especially with all those electronics. 8:39: Oh, it’s a huge thermal engineering challenge. 8:42: The camera actually has 3 different cooling zones, 3 distinct temperature levels inside. 8:46: 3. 8:47: OK. 8:47: First, the CCDs themselves. 8:49: They need to be incredibly cold to minimize noise. 8:51: They operate at -125 °C. 8:54: -125C, how do they manage that? 8:57: With a special evaporator plate connected to the CCD rafts by flexible copper braids, which pulls heat away very effectively. 9:04: Then you’ve got the camera’s electronics, the readout boards and stuff. 
9:07: They run cooler than room temp, but not that cold, around -50 °C. 9:12: OK. 9:12: That requires a separate liquid cooling loop delivered through these special vacuum insulated tubes to prevent heat leaks. 9:18: And the third zone. 9:19: That’s for the electronics in the utility trunk at the back of the camera. 9:23: They generate a fair bit of heat, about 3000 watts, like a few hair dryers running constantly. 9:27: Exactly. 9:28: So there’s a third liquid cooling system just for them, keeping them just slightly below the ambient room temperature in the dome. 9:35: And all this cooling, it’s not just to keep the parts from overheating, right? 9:39: It affects the images, absolutely critical for image quality. 9:44: If the outer surface of the camera body itself is even slightly warmer or cooler than the air inside the dome, it creates tiny air currents, turbulence right near the light path. 9:57: And that shows up as little wavy distortions in the images, messing up the precision. 10:02: So even the outside temperature of the camera matters. 10:04: Yep, it’s not just a camera. 10:06: They even have to monitor the heat generated by the motors that move the massive dome, because that heat could potentially cause enough air turbulence inside the dome to affect the image quality too. 10:16: That’s incredible attention to detail, and the camera interior is a vacuum you mentioned. 10:21: Yes, a very strong vacuum. 10:23: They pump it down about once a year, first using turbopumps spinning at like 80,000 RPM to get it down to about 10⁻² torr. 10:32: Then they use other methods to get it down much further. 10:34: To 10⁻⁷ torr, that’s an ultra-high vacuum. 10:37: Why the vacuum? 10:37: Keep frost off the cold parts. 10:39: Exactly. 10:40: Prevents condensation and frost on those negative 125 degree CCDs and generally ensures everything works optimally. 10:47: For normal operation, day to day, they use something called an ion pump. 10:51: How does that work? 10:52: It basically uses a strong electric field to ionize any stray gas molecules, mostly hydrogen, and trap them, effectively removing them from the vacuum space, very efficient for maintaining that ultra-high vacuum. 11:04: OK, so we have this incredible camera taking these massive images every few seconds. 11:08: Once those photons hit the CCDs and become digital signals, what happens next? 11:12: How does Ruben handle this absolute flood of data? 11:15: Yeah, this is where Ruben becomes, you know, almost as much a data processing machine as a telescope. 11:20: It’s designed for the data output. 11:22: So photons hit the CCDs, get converted to electrical signals. 11:27: Then, interestingly, they get converted back into light signals, photonic signals back to light. 11:32: Why? 11:33: To send them over fiber optics. 11:34: There’s about 6 kilometers of fiber optic cable running through the observatory building. 11:39: These signals go to FPGA boards, field programmable gate arrays in the data acquisition system. 11:46: OK. 11:46: And those FPGAs are basically assembling the complete image data packages from all the different CCDs and amplifiers. 11:53: That sounds like a fire hose of data leaving the camera. 11:56: How does it get off the mountain and where does it need to go? 11:58: And what about all the like operational data, temperatures, positions? 12:02: Good question. 
12:03: There are really two main data streams all that telemetry you mentioned, sensor readings, temperatures, actuator positions, command set, everything about the state of the observatory that all gets collected into something called the Engineering facility database or EFD. 12:16: They use Kafka for transmitting that data. 12:18: It’s good for high volume streams, and store it in an influx database, which is great for time series data like sensor readings. 12:26: And astronomers can access that. 12:28: Well, there’s actually a duplicate copy of the EFD down at SLAC, the research center in California. 12:34: So scientists and engineers can query that copy without bogging down the live system running on the mountain. 12:40: Smart. 12:41: How much data are we talking about there? 12:43: For the engineering data, it’s about 20 gigabytes per night, and they plan to keep about a year’s worth online. 12:49: OK. 12:49: And the image data, the actual science pixels. 12:52: That takes a different path. [All of the data from Rubin to SLAC travels over the same network.] 12:53: It travels over dedicated high-speed network links, part of ESET, the research network, all the way from Chile, usually via Boca Raton, Florida, then Atlanta, before finally landing at SLAC. 13:05: And how fast does that need to be? 13:07: The goal is super fast. 13:09: They aim to get every image from the telescope in Chile to the data center at SLAC within 7 seconds of the shutter closing. 13:15: 7 seconds for gigabytes of data. 13:18: Yeah. 13:18: Sometimes network traffic bumps it up to maybe 30 seconds or so, but the target is 7. 13:23: It’s crucial for the next step, which is making sense of it all. 13:27: How do astronomers actually use this, this torrent of images and data? 13:30: Right. 13:31: This really changes how astronomy might be done. 13:33: Because Ruben is designed to generate alerts, real-time notifications about changes in the sky. 13:39: Alerts like, hey, something just exploded over here. 13:42: Pretty much. 13:42: It takes an image compared to the previous images of the same patch of sky and identifies anything that’s changed, appeared, disappeared, moved, gotten brighter, or fainter. 13:53: It expects to generate about 10,000 such alerts per image. 13:57: 10,000 per image, and they take an image every every 20 seconds or so on average, including readouts. [Images are taken every 34 seconds: a 30 second exposure, and then about 4 seconds for the telescope to move and settle.] 14:03: So you’re talking around 10 million alerts every single night. 14:06: 10 million a night. 14:07: Yep. 14:08: And the goal is to get those alerts out to the world within 60 seconds of the image being taken. 14:13: That’s insane. 14:14: What’s in an alert? 14:15: It contains the object’s position, brightness, how it’s changed, and little cut out images, postage stamps in the last 12 months of observations, so astronomers can quickly see the history. 14:24: But surely not all 10 million are real astronomical events satellites, cosmic rays. 14:30: Exactly. 14:31: The observatory itself does a first pass filter, masking out known issues like satellite trails, cosmic ray hits, atmospheric effects, with what they call real bogus stuff. 14:41: OK. 14:42: Then, this filtered stream of potentially real alerts goes out to external alert brokers. 14:49: These are systems run by different scientific groups around the world. 14:52: Yeah, and what did the brokers do? 
14:53: They ingest the huge stream from Ruben and apply their own filters, based on what their particular community is interested in. 15:00: So an astronomer studying supernovae can subscribe to a broker that filters just for likely supernova candidates. 15:06: Another might filter for near Earth asteroids or specific types of variable stars. 15:12: so it makes the fire hose manageable. 15:13: You subscribe to the trickle you care about. 15:15: Precisely. 15:16: It’s a way to distribute the discovery potential across the whole community. 15:19: So it’s not just raw images astronomers get, but these alerts and presumably processed data too. 15:25: Oh yes. 15:26: Rubin provides the raw images, but also fully processed images, corrected for instrument effects, calibrated, called processed visit images. 15:34: And also template images, deep combinations of previous images used for comparison. 15:38: And managing all that data, 15 petabytes you mentioned, how do you query that effectively? 15:44: They use a system called Keyserve. [The system is "QServ."] 15:46: It’s a distributed relational database, custom built basically, designed to handle these enormous astronomical catalogs. 15:53: The goal is to let astronomers run complex searches across maybe 15 petabytes of catalog data and get answers back in minutes, not days or weeks. 16:02: And how do individual astronomers actually interact with it? 16:04: Do they download petabytes? 16:06: No, definitely not. 16:07: For general access, there’s a science platform, the front end of which runs on Google Cloud. 16:11: Users interact mainly through Jupyter notebooks. 16:13: Python notebooks, familiar territory for many scientists. 16:17: Exactly. 16:18: They can write arbitrary Python code, access the catalogs directly, do analysis. 16:27: For really heavy duty stuff like large scale batch processing, they can submit jobs to the big compute cluster at SLAC, which sits right next to the data storage. 16:33: That’s much more efficient. 16:34: Have they tested this? 16:35: Can it handle thousands of astronomers hitting it at once? 16:38: They’ve done extensive testing, yeah, scaled it up with hundreds of users already, and they seem confident they can handle up to maybe 3000 simultaneous users without issues. 16:49: And a key point. 16:51: After an initial proprietary period for the main survey team, all the data and importantly, all the software algorithms used to process it become public. 17:00: Open source algorithms too. 17:01: Yes, the idea is, if the community can improve on their processing pipelines, they’re encouraged to contribute those solutions back. 17:08: It’s meant to be a community resource. 17:10: That open approach is fantastic, and even the way the images are presented visually has some deep thought behind it, doesn’t it? 17:15: You mentioned Robert Lupton’s perspective. 17:17: Yes, this is fascinating. 17:19: It’s about how you assign color to astronomical images, which usually combine data from different filters, like red, green, blue. 17:28: It’s not just about making pretty pictures, though they can be beautiful. 17:31: Right, it should be scientifically meaningful. 17:34: Exactly. 17:35: Lupton’s approach tries to preserve the inherent color information in the data. 17:40: Many methods saturate bright objects, making their centers just white blobs. 17:44: Yeah, you see that a lot. 17:46: His algorithm uses a different mathematical scaling, more like a logarithmic scale, that avoids this saturation. 
17:52: It actually propagates the true color information back into the centers of bright stars and galaxies. 17:57: So, a galaxy that’s genuinely redder, because it’s red shifted, will actually look redder in the image, even in its bright core. 18:04: Precisely, in a scientifically meaningful way. 18:07: Even if our eyes wouldn’t perceive it quite that way directly through a telescope, the image renders the data faithfully. 18:13: It helps astronomers visually interpret the physics. 18:15: It’s a subtle but powerful detail in making the data useful. 18:19: It really is. 18:20: Beyond just taking pictures, I heard Ruben’s wide view is useful for something else entirely: gravitational waves. 18:26: That’s right. 18:26: It’s a really cool synergy. 18:28: Gravitational wave detectors like LIGO and Virgo, they detect ripples in space-time, often from merging black holes or neutron stars, but they usually only narrow down the location to a relatively large patch of sky, maybe 10 square degrees or sometimes much more. 18:41: Ruben’s camera has a field of view of about 9.6 square degrees. 18:45: That’s huge for a telescope. 18:47: It almost perfectly matches the typical LIGO alert area. 18:51: so when LIGO sends an alert, Ruben can quickly scan that whole error box, maybe taking just a few pointings, looking for any new point of light. 19:00: The optical counterpart, the kilonova explosion, or whatever light accompanies the gravitational wave event. 19:05: It’s a fantastic follow-up machine. 19:08: Now, stepping back a bit, this whole thing sounds like a colossal integration challenge. 19:13: A huge system of systems, many parts custom built, pushed to their limits. 19:18: What were some of those big integration hurdles, bringing it all together? 19:22: Yeah, classic system of systems is a good description. 19:25: And because nobody’s built an observatory quite like this before, a lot of the commissioning phase, getting everything working together involves figuring out the procedures as they go. 19:34: Learning by doing on a massive scale. 19:36: Pretty much. 19:37: They’re essentially, you know, teaching the system how to walk. 19:40: And there’s this constant tension, this balancing act. 19:43: Do you push forward, maybe build up some technical debt, things you know you’ll have to fix later, or do you stop and make sure every little issue is 100% perfect before moving on, especially with a huge distributed team? 19:54: I can imagine. 19:55: And you mentioned the dome motors earlier. 19:57: That discovery about heat affecting images sounds like a perfect example of unforeseen integration issues. 20:03: Exactly. 20:03: Marina Pavlovic described that. 20:05: They ran the dome motors at full speed, something maybe nobody had done for extended periods in that exact configuration before, and realized, huh. 20:13: The heat these generate might actually cause enough air turbulence to mess with our image quality. 20:19: That’s the kind of thing you only find when you push the integrated system. 20:23: Lots of unexpected learning then. 20:25: What about interacting with the outside world? 20:27: Other telescopes, the atmosphere itself? 20:30: How does Ruben handle atmospheric distortion, for instance? 20:33: that’s another interesting point. 20:35: Many modern telescopes use lasers. 20:37: They shoot a laser up into the sky to create an artificial guide star, right, to measure. 20:42: Atmospheric turbulence. 20:43: Exactly. 20:44: Then they use deformable mirrors to correct for that turbulence in real time. 
20:48: But Ruben cannot use a laser like that. 20:50: Why? 20:51: Because its field of view is enormous. 20:53: It sees such a wide patch of sky at once. 20:55: A single laser beam, even a pinpoint from another nearby observatory, would contaminate a huge fraction of Ruben’s image. 21:03: It would look like a giant streak across, you know, a quarter of the sky for Ruben. 21:06: Oh, wow. 21:07: OK. 21:08: Too much interference. 21:09: So how does it correct for the atmosphere? 21:11: Software. 21:12: It uses a really clever approach called forward modeling. 21:16: It looks at the shapes of hundreds of stars across its wide field of view in each image. 21:21: It knows what those stars should look like, theoretically. 21:25: Then it builds a complex mathematical model of the atmosphere’s distorting effect across the entire field of view that would explain the observed star shapes. 21:33: It iterates this model hundreds of times per image until it finds the best fit. [The model is created by iterating on the image data, but iteration is not necessary for every image.] 21:38: Then it uses that model to correct the image, removing the atmospheric blurring. 21:43: So it calculates the distortion instead of measuring it directly with a laser. 21:46: Essentially, yes. 21:48: Now, interestingly, there is an auxiliary telescope built alongside Ruben, specifically designed to measure atmospheric properties independently. 21:55: Oh, so they could use that data. 21:57: They could, but currently, they’re finding their software modeling approach using the science images themselves, works so well that they aren’t actively incorporating the data from the auxiliary telescope for that correction right now. 22:08: The software solution is proving powerful enough on its own. 22:11: Fascinating. 22:12: And they still have to coordinate with other telescopes about their lasers, right? 22:15: Oh yeah. 22:15: They have agreements about when nearby observatories can point their lasers, and sometimes Ruben might have to switch to a specific filter like the Iband, which is less sensitive to the laser. 22:25: Light if one is active nearby while they’re trying to focus. 22:28: So many interacting systems. 22:30: What an incredible journey through the engineering of Ruben. 22:33: Just the sheer ingenuity from the custom steel pier and the capacitor banks, the hexapods, that incredibly flat camera, the data systems. 22:43: It’s truly a machine built to push boundaries. 22:45: It really is. 22:46: And it’s important to remember, this isn’t just, you know, a bigger version of existing telescopes. 22:51: It’s a fundamentally different kind of machine. 22:53: How so? 22:54: By creating this massive all-purpose data set, imaging the entire southern sky over 800 times, cataloging maybe 40 billion objects, it shifts the paradigm. 23:07: Astronomy becomes less about individual scientists applying for time to point a telescope at one specific thing and more about statistical analysis, about mining this unprecedented ocean of data that Rubin provides to everyone. 23:21: So what does this all mean for us, for science? 23:24: Well, it’s a generational investment in fundamental discovery. 23:27: They’ve optimized this whole system, the telescope, the camera, the data pipeline. 23:31: For finding, quote, exactly the stuff we don’t know we’ll find. 23:34: Optimized for the unknown, I like that. 23:36: Yeah, we’re basically generating this incredible resource that will feed generations of astronomers and astrophysicists. 
23:42: They’ll explore it, they’ll harvest discoveries from it, they’ll find patterns and objects and phenomena within billions and billions of data points that we can’t even conceive of yet. 23:50: And that really is the ultimate excitement, isn’t it? 23:53: Knowing that this monumental feat of engineering isn’t just answering old questions, but it’s poised to open up entirely new questions about the universe, questions we literally don’t know how to ask today. 24:04: Exactly. 24:05: So, for you, the listener, just think about that. 24:08: Consider the immense, the completely unknown discoveries that are waiting out there just waiting to be found when an entire universe of data becomes accessible like this. 24:16: What might we find?
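The hexapod corrections described in the conversation above amount to applying a small six-degree-of-freedom rigid-body adjustment—three translations and three tilts—to an optical element. The NumPy sketch below builds such a transform and applies it to a point on a mirror; the specific offsets are invented for illustration and are not real Rubin corrections.

import numpy as np

def pose_matrix(dx, dy, dz, rx, ry, rz):
    """4x4 homogeneous transform from three translations and three small
    rotations (radians) about the x, y, and z axes -- the six degrees of
    freedom a hexapod can adjust."""
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx
    T[:3, 3] = [dx, dy, dz]
    return T

# Invented example: nudge a mirror 5 micrometers in x, 2 in z, and tilt it
# 10 microradians about y. Position units are micrometers.
correction = pose_matrix(5.0, 0.0, 2.0, 0.0, 10e-6, 0.0)
point = np.array([1_000_000.0, 0.0, 0.0, 1.0])  # a point 1 meter off-axis
moved = correction @ point
print("displacement (micrometers):", np.round(moved[:3] - point[:3], 2))

Even a 10-microradian tilt moves a point a meter from the axis by about 10 micrometers, which is why the active optics loop and the camera hexapod have to work at this level of precision.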
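The color scaling attributed to Robert Lupton in the conversation is based on an inverse-hyperbolic-sine (asinh) stretch: all three filter channels are scaled by the same factor, so the ratio between them—the color—survives in the cores of bright objects instead of saturating to white. Below is a minimal NumPy sketch of that idea; the stretch and softening parameters are arbitrary choices, and production code (astropy’s make_lupton_rgb, for example) handles calibration and clipping far more carefully.

import numpy as np

def asinh_rgb(r, g, b, stretch=5.0, q=8.0):
    """Map three aligned filter images to RGB with an asinh stretch, scaling
    all channels by the same factor so their ratio (the color) is preserved."""
    intensity = (r + g + b) / 3.0
    # asinh is roughly linear for faint pixels and logarithmic for bright ones.
    scale = np.arcsinh(q * intensity / stretch) / (q * np.maximum(intensity, 1e-12))
    rgb = np.stack([r * scale, g * scale, b * scale], axis=-1)
    return np.clip(rgb, 0.0, 1.0)

# Toy field: faint background plus one bright, intrinsically red source.
r = np.full((4, 4), 0.01)
g = np.full((4, 4), 0.01)
b = np.full((4, 4), 0.01)
r[2, 2], g[2, 2], b[2, 2] = 10.0, 5.0, 2.5

image = asinh_rgb(r, g, b)
print("faint pixel  (R, G, B):", np.round(image[0, 0], 3))
print("bright pixel (R, G, B):", np.round(image[2, 2], 3))  # bright but still red, not white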


More in AI

Pluralistic: By all means, tread on those people (26 Aug 2025)

Today's links By all means, tread on those people: We know you love freedom, we just wish you'd share. Hey look at this: Delights to delectate. Object permanence: The right to bear cameras; GOP wants slavery for undocumented migrants; Telepresence Nazi-punching. Upcoming appearances: Where to find me. Recent appearances: Where I've been. Latest books: You keep readin' em, I'll keep writin' 'em. Upcoming books: Like I said, I'll keep writin' 'em. Colophon: All the rest. By all means, tread on those people (permalink) Just as Martin Niemöller's "First They Came" has become our framework for understanding the rise of fascism in Nazi Germany, so, too is Wilhoit's Law the best way to understand America's decline into fascism: https://en.wikipedia.org/wiki/First_They_Came In case you're not familiar with Frank Wilhoit's amazing law, here it is: Conservatism consists of exactly one proposition, to wit: There must be in-groups whom the law protects but does not bind, alongside out-groups whom the law binds but does not protect. https://crookedtimber.org/2018/03/21/liberals-against-progressives/#comment-729288 The thing that makes Wilhoit's Law so apt to this moment – and to our understanding of the recent history that produced this moment – is how it connects the petty with the terrifying, the trivial with the radical, the micro with the macro. It's a way to join the dots between fascists' business dealings, their interpersonal relationships, and their political views. It describes a continuum that ranges from minor commercial grifts to martial law, and shows how tolerance for the former creates the conditions for the latter. The gross ways in which Wilhoit's Law applies are easy to understand. The dollar value of corporate wage-theft far outstrips the total dollars lost to all other forms of property crime, and yet there is virtually no enforcement against bosses who steal their workers' paychecks, while petty property crimes can result in long prison sentences (depending on your skin color and/or bank balance): https://www.opportunityinstitute.org/blog/post/organized-retail-theft-wage-theft/ Elon Musk values "free speech" and insists on his right to brand innocent people as "pedos," but he also wants the courts to destroy organizations that publish their opinions about his shitty business practices: https://www.mediamatters.org/elon-musk Fascists turn crybaby when they're imprisoned for attempting a murderous coup, but buy merch celebrating the construction of domestic concentration camps where people are locked up without trial: https://officialalligatoralcatraz.com/shop That stuff is all easy to see, but I want to draw a line between these gross violations of Wilhoit's Law and pettier practices that have been creating the conditions for the present day Wilhoit Dystopia. Take terms of service. The Federalist Society – whose law library could save a lot of space by throwing away all its books and replacing them with a framed copy of Wilhoit's Law – has long held that merely glancing at a web-page or traversing the doorway of a shop is all it takes for you to enter into a "contract" by which you surrender all of your rights. Every major corporation – and many smaller ones – now routinely seek to bind both workers and customers to garbage-novellas of onerous, unreadable legal conditions. If we accept that this is how contracts work, then this should be perfectly valid, right? 
By reading these words, you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies ("BOGUS AGREEMENTS") that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer. This indemnity will survive the termination of your relationship with your employer. I mean, why not? What principle – other than "in-groups whom the law protects but does not bind, alongside out-groups whom the law binds but does not protect" – makes terms of service valid, and this invalid? Then there's binding arbitration. Corporations routinely bind their workers and customers to terms that force them to surrender their right to sue, no matter how badly they are injured through malice or gross negligence. This practice used to be illegal, until Antonin Scalia opened the hellmouth and unleashed binding arbitration on the world: https://brooklynworks.brooklaw.edu/cgi/viewcontent.cgi?article=1443&&context=blr There's a pretty clever hack around binding arbitration: mass arbitration, whereby lots of wronged people coordinate to file claims, which can cost a dirty corporation more than a plain old class-action suit: https://pluralistic.net/2021/06/02/arbitrary-arbitration/#petard Of course, Wilhoit's Law provides corporations with a way around this: they can reserve the right not to arbitrate and to force you into a class action suit if that's advantageous to them: https://pluralistic.net/2025/08/15/dogs-breakfast/#by-clicking-this-you-agree-on-behalf-of-your-employer-to-release-me-from-all-obligations-and-waivers-arising-from-any-and-all-NON-NEGOTIATED-agreements Heads they win, tails you lose. Or take the nature of property rights themselves. Conservatives say they revere property rights above all else, claiming that every other human right stems from the vigorous enforcement of property relations. What is private property? For that, we turn to the key grifter thinkfluencer Sir William Blackstone, and his 1768 "Commentaries on the Laws of England": That sole and despotic dominion which one man claims and exercises over the external things of the world, in total exclusion of the right of any other individual in the universe. https://oll.libertyfund.org/pages/blackstone-on-property-1753 Corporations love the idea of their property rights, but they're not so keen on your property rights. Think of the practice of locking down digital devices – from phones to cars to tractors – so that they can't be repaired by third parties, use generic ink or parts, or load third-party apps except via an "app store": https://memex.craphound.com/2012/01/10/lockdown-the-coming-war-on-general-purpose-computing/ A device you own, but can only use in ways that its manufacturer approves of, sure doesn't sound like "sole and despotic dominion" to me. Some corporations (and their weird apologists) like to claim that, by buying their product, you've agreed not to use it except in ways that benefit their shareholders, even when that is to your own detriment: https://pluralistic.net/2024/01/12/youre-holding-it-wrong/#if-dishwashers-were-iphones Apple will say, "We've been selling iPhones for nearly 20 years now. 
It can't possibly come as a surprise to you that you're not allowed to install apps that we haven't approved. If that's important to you, you shouldn't have bought an iPhone." But the obvious rejoinder to this is, "People have been given sole and despotic dominion over the things they purchased since time immemorial. If the thought of your customers using their property in ways that displease you causes you to become emotionally disregulated, perhaps you shouldn't have gotten into the manufacturing business." But as indefensibly wilhoitian as Apple's behavior might be, Google has just achieved new depths of wilhoitian depravity, with a rule that says that starting soon, you will no longer be able to install apps of your choosing on your Android device unless Google first approves of them: https://9to5google.com/2025/08/25/android-apps-developer-verification/ Like Apple, Google says that this is to prevent you from accidentally installing malicious software. Like Apple, Google does put a lot of effort into preventing its customers from being remotely attacked. And, like Apple, Google will not protect you from itself: https://pluralistic.net/2023/02/05/battery-vampire/#drained When it comes to vetoing your decisions about which programs your Android device can run, Google has an irreconcilable conflict of interest. Google, after all, is a thrice-convicted monopolist who have an interest in blocking you from installing programs that interfere with its profits, under the pretense of preventing you from coming to harm. And – like Apple – Google has a track record of selling its users out to oppressive governments. Apple blocked all working privacy tools for its Chinese users at the behest of the Chinese government, while Google secretly planned to release a version of its search engine that would enforce Chinese censorship edicts and help the Chinese government spy on its people: https://en.wikipedia.org/wiki/Dragonfly_(search_engine) Google's CEO Sundar Pichai, personally gave one million dollars to Donald Trump for a seat on the dais at this year's inauguration (so did Apple CEO Tim Cook). Both men are in a position to help the self-described dictator make good on his promise to spy on and arrest Americans who disagree with his totalitarian edicts. All of this makes Google's announcement extraordinarily reckless, but also very, very wilhoitian. After all, Google jealously guards its property rights from you, but insists that your property rights need to be subordinated to its corporate priorities: "in-groups whom the law protects but does not bind, alongside out-groups whom the law binds but does not protect." We can see this at work in the way that Google treats open source software and free software. Google's software is "open source" – for us. We have the right to look at the code and do free work for Google to identify and fix bugs in the code. But only Google gets a say in how that code is deployed on its cloud servers. They have software freedom, while we merely have software transparency: https://pluralistic.net/2025/07/14/pole-star/#gnus-not-utilitarian Big companies love to both assert their own property rights while denying you yours. Take the music industry: they are required to pay different royalties to musicians depending on whether they're "selling" music, or "licensing" music. Sales pay a fraction of the royalties of a licensing deal, so it's far better for musicians when their label licenses their music than when they sell it. 
When you or I click the "buy" button in an online music store, we are confronted with a "licensing agreement," that limits what we may do with our digital purchase. Things that you get automatically when you buy music in physical form – on a CD, say – are withheld through these agreements. You can't re-sell your digital purchases as used goods. You can't give them away. You can't lend them out. You can't divide them up in a divorce. You can't leave them to your kids in your will. It's not a sale, so the file isn't your property. But when the label accounts for that licensing deal to a musician, the transaction is booked as a sale, which entitles the creative worker to a fraction of the royalties that they'd get from a license. Somehow, digital media exists in quantum superposition: it is a licensing deal when we click the buy button, but it is a sale when it shows up on a royalty statement. It's Schroedinger's download: https://pluralistic.net/2022/06/21/early-adopters/#heads-i-win Now, a class action suit against Amazon over this very issue has been given leave to progress to trial: https://www.hollywoodreporter.com/business/business-news/prime-video-lawsuit-movie-license-ownership-1236353127/ The plaintiffs insist that because Amazon showed them a button that said, "Buy this video" but then slapped it with licensing conditions that take away all kinds of rights (Amazon can even remotely delete your videos after you "buy" them) that they have been ripped off in a bait-and-switch. Amazon's defense is amazing. They've done what any ill-prepared fifth grader would do when called on the carpet; they quoted Webster's: Quoting Webster’s Dictionary, it said that the term means “rights to the use or services of payment” rather than perpetual ownership and that its disclosures properly warn people that they may lose access. People are increasingly pissed off with this bullshit, whereby things that you "buy" are not yours, and your access to them can be terminated at any time. The Stop Killing Games campaign is pushing for the rights of gamers to own the games they buy forever, even if the company decides to shut down its servers: https://www.stopkillinggames.com/ I've been pissed off about this bullshit since forever. It's one of the main reasons I convinced my publishers to let me sell my own ebooks and audiobooks, out of my own digital storefront. All of those books are sold, not licensed, and come without any terms or conditions: https://craphound.com/shop/ The ability to change the terms after the sale is a major source of enshittification. I call it the "Darth Vader MBA," as in "I am altering the deal. Pray I do not alter it any further": https://pluralistic.net/2023/10/26/hit-with-a-brick/#graceful-failure Naturally the ebooks and audiobooks in the Kickstarter for pre-sales of my next book, Enshittification are also sold without any terms and conditions: https://www.kickstarter.com/projects/doctorow/enshittification-the-drm-free-audiobook/ Look, I don't think that personal consumption choices can fix systemic problems. You're not going to fix enshittification – let alone tyranny – by shopping, even if you're very careful: https://pluralistic.net/2025/07/31/unsatisfying-answers/#systemic-problems But that doesn't mean that there isn't a connection between the unfair bullshit that monopolies cram down our throat and the rise of fascism. It's not just that the worst enshittifiers also the biggest Trump donors, it's that Wilhoit's Law powers enshittification. 
Wilhoitism is shot through the MAGA movement. The Flu Klux Klan wants to ban you from wearing a mask for health reasons, but they will defend to the death the right of ICE brownshirts to run around in gaiters and Oakleys as they kidnap our neighbors off the streets. Conservative bedwetters will donate six figures to a GiveSendGo set up by some crybaby with a viral Rumble video about getting 86'ed from a restaurant for wearing a MAGA hat, but they literally want to imprison trans people for wearing clothes that don't conform to their assigned-at-birth genders. They'll piss and moan about being "canceled" because of hecklers at the speeches they give for the campus chapter of the Hitler Youth, but they experience life-threatening priapism when students who object to the Israeli genocide of Palestinians are expelled, arrested and deported. Then there's their abortion policies, which hold that personhood begins at conception, but ends at birth, and can only be re-established by forming an LLC. It's "in-groups whom the law protects but does not bind, alongside out-groups whom the law binds but does not protect" all the way down. I'm not saying that bullshit terms of service, wage theft, binding arbitration gotchas, or victim complexes about your kids going no-contact because you won't shut the fuck up about "the illegals" at Thanksgiving are the same as the actual fascist dictatorship being born around us right now or the genocide taking place in Gaza. But I am saying that they come from the same place. The ideology of "in-groups whom the law protects but does not bind, alongside out-groups whom the law binds but does not protect" underpins the whole ugly mess. After we defeat these fucking fascists, after the next installment of the Nuremberg trials, after these eichmenn and eichwomenn get their turns in the dock, we're going to have to figure out how to keep them firmly stuck to the scrapheap of history. For this, I propose a form of broken windows policing: zero-tolerance for any activity or conduct that implies that there are "in-groups whom the law protects but does not bind, alongside out-groups whom the law binds but does not protect." We should treat every attempt to pull any of these scams as an inch (or a yard, or a mile) down the road to fascist collapse. We shouldn't suffer practitioners of this ideology to be in our company, to run our institutions, or to work alongside of us. We should recognize them for the monsters they are. Hey look at this (permalink) Citizen Is Using AI to Generate Crime Alerts With No Human Review. 
It’s Making a Lot of Mistakes https://www.404media.co/citizen-is-using-ai-to-generate-crime-alerts-with-no-human-review-its-making-a-lot-of-mistakes/ How To Argue With An AI Booster https://www.wheresyoured.at/how-to-argue-with-an-ai-booster/ We must fight age verification with all we have https://www.usermag.co/p/we-must-fight-age-verification-with Sqinks: A Transreal Cyberpunk Love Story https://www.kickstarter.com/projects/rudyrucker/sqinks LibreOffice 25.8: a Strategic Asset for Governments and Enterprises Focused on Digital Sovereignty and Privacy https://blog.documentfoundation.org/blog/2025/08/25/libreoffice-25-8-backgrounder/ Object permanence (permalink) #20yrsago Oakland sheriffs detain people for carrying cameras https://thomashawk.com/2005/08/right-to-bear-cameras.html #10yrsago New Zealand gov’t promises secret courts for accused terrorists https://www.nzherald.co.nz/nz/attorney-general-says-law-society-got-it-wrong-over-secret-courts/E5JHYBTMVSIBZ62UNGEWB4DPEA/?c_id=1&objectid=11503094 #10yrsago Platform Cooperativism: a worker-owned Uber for everything https://platformcoop.net/ #10yrsago GOP “kingmaker” proposes enslavement as an answer to undocumented migrants https://www.thedailybeast.com/iowa-gop-kingmaker-has-a-slavery-proposal-for-immigration/ #10yrsago Six years after unprovoked beating, Denver cop finally fired https://kdvr.com/news/video-evidence-determined-fate-of-denver-officer-in-excessive-force-dispute-fired-after-6-years/ #10yrsago Samsung fridges can leak your Gmail logins https://web.archive.org/web/20150825014450/https://www.pentestpartners.com/blog/hacking-defcon-23s-iot-village-samsung-fridge/ #10yrsago German student ditches apartment, buys an unlimited train pass https://www.washingtonpost.com/news/worldviews/wp/2015/08/22/how-one-german-millennial-chose-to-live-on-trains-rather-than-pay-rent/ #10yrsago Ashley Madison’s founding CTO claimed he hacked competing dating site https://www.wired.com/2015/08/ashley-madison-leak-reveals-ex-cto-hacked-competing-site/ #5yrsago Telepresence Nazi-punching https://pluralistic.net/2020/08/25/anxietypunk/#smartibots #5yrsago Ballistic Kiss https://pluralistic.net/2020/08/25/anxietypunk/#bk Upcoming appearances (permalink) Ithaca: AD White keynote (Cornell), Sep 12 https://deanoffaculty.cornell.edu/events/keynote-cory-doctorow-professor-at-large/ DC: Enshittification at Politics and Prose, Oct 8 https://politics-prose.com/cory-doctorow-10825 New Orleans: DeepSouthCon63, Oct 10-12 http://www.contraflowscifi.org/ Chicago: Enshittification with Kara Swisher (Chicago Humanities), Oct 15 https://www.oldtownschool.org/concerts/2025/10-15-2025-kara-swisher-and-cory-doctorow-on-enshittification/ San Francisco: Enshittification at Public Works (The Booksmith), Oct 20 https://app.gopassage.com/events/doctorow25 Miami: Enshittification at Books & Books, Nov 5 https://www.eventbrite.com/e/an-evening-with-cory-doctorow-tickets-1504647263469 Recent appearances (permalink) Divesting from Amazon’s Audible and the Fight for Digital Rights (Libro.fm) https://pocketcasts.com/podcasts/9349e8d0-a87f-013a-d8af-0acc26574db2/00e6cbcf-7f27-4589-a11e-93e4ab59c04b The Utopias Podcast https://www.buzzsprout.com/2272465/episodes/17650124 Tariffs vs IP Law (Firewalls Don't Stop Dragons) https://www.youtube.com/watch?v=LFABFe-5-uQ Latest books (permalink) "Picks and Shovels": a sequel to "Red Team Blues," about the heroic era of the PC, Tor Books (US), Head of Zeus (UK), February 2025 (https://us.macmillan.com/books/9781250865908/picksandshovels). 
"The Bezzle": a sequel to "Red Team Blues," about prison-tech and other grifts, Tor Books (US), Head of Zeus (UK), February 2024 (the-bezzle.org). "The Lost Cause:" a solarpunk novel of hope in the climate emergency, Tor Books (US), Head of Zeus (UK), November 2023 (http://lost-cause.org). "The Internet Con": A nonfiction book about interoperability and Big Tech (Verso) September 2023 (http://seizethemeansofcomputation.org). Signed copies at Book Soup (https://www.booksoup.com/book/9781804291245). "Red Team Blues": "A grabby, compulsive thriller that will leave you knowing more about how the world works than you did before." Tor Books http://redteamblues.com. "Chokepoint Capitalism: How to Beat Big Tech, Tame Big Content, and Get Artists Paid, with Rebecca Giblin", on how to unrig the markets for creative labor, Beacon Press/Scribe 2022 https://chokepointcapitalism.com Upcoming books (permalink) "Canny Valley": A limited edition collection of the collages I create for Pluralistic, self-published, September 2025 "Enshittification: Why Everything Suddenly Got Worse and What to Do About It," Farrar, Straus, Giroux, October 7 2025 https://us.macmillan.com/books/9780374619329/enshittification/ "Unauthorized Bread": a middle-grades graphic novel adapted from my novella about refugees, toasters and DRM, FirstSecond, 2026 "Enshittification, Why Everything Suddenly Got Worse and What to Do About It" (the graphic novel), Firstsecond, 2026 "The Memex Method," Farrar, Straus, Giroux, 2026 "The Reverse-Centaur's Guide to AI," a short book about being a better AI critic, Farrar, Straus and Giroux, 2026 Colophon (permalink) Today's top sources: Currently writing: "The Reverse Centaur's Guide to AI," a short book for Farrar, Straus and Giroux about being an effective AI critic. (1019 words yesterday, 42282 words total). A Little Brother short story about DIY insulin PLANNING This work – excluding any serialized fiction – is licensed under a Creative Commons Attribution 4.0 license. That means you can use it any way you like, including commercially, provided that you attribute it to me, Cory Doctorow, and include a link to pluralistic.net. https://creativecommons.org/licenses/by/4.0/ Quotations and images are not included in this license; they are included either under a limitation or exception to copyright, or on the basis of a separate license. Please exercise caution. How to get Pluralistic: Blog (no ads, tracking, or data-collection): Pluralistic.net Newsletter (no ads, tracking, or data-collection): https://pluralistic.net/plura-list Mastodon (no ads, tracking, or data-collection): https://mamot.fr/@pluralistic Medium (no ads, paywalled): https://doctorow.medium.com/ Twitter (mass-scale, unrestricted, third-party surveillance and advertising): https://twitter.com/doctorow Tumblr (mass-scale, unrestricted, third-party surveillance and advertising): https://mostlysignssomeportents.tumblr.com/tagged/pluralistic "When life gives you SARS, you make sarsaparilla" -Joey "Accordion Guy" DeVilla READ CAREFULLY: By reading this, you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies ("BOGUS AGREEMENTS") that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. 
You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer.

ISSN: 3066-764X

Pluralistic: Friction cannot be reduced, it can only be redistributed (23 Aug 2025)

Today's links

Friction cannot be reduced, it can only be redistributed: The better things are for your boss, the worse they are for you.
Hey look at this: Delights to delectate.
Object permanence: Dr Bruce Sterling; Buddy Holly's overnight bag; Zombie postcapitalism.
Upcoming appearances: Where to find me.
Recent appearances: Where I've been.
Latest books: You keep readin' em, I'll keep writin' 'em.
Upcoming books: Like I said, I'll keep writin' 'em.
Colophon: All the rest.

Friction cannot be reduced, it can only be redistributed (permalink)

Despite the pretensions of certain well-paid economists, political economy is not a "physics of human behavior," through which human interactions and outcomes can be quantized and precisely captured through mathematical models. For one thing, in physics, it's possible to reduce friction, whereas in political economy, friction isn't something you reduce, it's something you redistribute, typically downward, to people with less political power than you.

Think about your job. If you are on a salary, your boss has to pay you even when there's no work to be done, which means that your wages keep going out the door during stretches when no money is coming in, and a long slow patch could kill the business. But if your boss can eliminate or reduce your wages when there's no work, the friction of keeping the business a going concern is shifted to you.

Take the "tipped minimum wage," which is the minimum that a restaurateur can pay a server. The federal tipped minimum wage is $2.13/hour, which is substantially less than you can survive on. If your boss fucks up and can't fill the tables in his restaurant, he has to pay you $7.25/hour (the federal minimum wage). But if you get just one table in eight hours, where you bust your hump and earn a $41 tip, your boss gets to credit $40.96 of that money against your wages and pay you the grand sum of $58.04.

That certainly relieves some of your boss's friction – but now you have to endure the friction of figuring out how to survive on $58. Maybe you don't fix your car and instead spend an extra hour at the start and end of your shift on a city bus. That's a lot of friction, but it's your friction. Same for the time you spend lining up at the food bank, the sleepless nights you endure because you can't see a dentist about your rotten tooth, the diabetes test-strips you do without.

Of course, there are plenty of workers who don't even get the tipped minimum wage: in most of the country, "gig economy" workers aren't guaranteed any wages. If your boss – the company that made your app – fucks up by charging too much or skimping on ads or having piss-poor customer service, you can clock on for an eight-hour shift and get zero dollars, all the while being available to your boss, just in case they do get a customer. If you're a driver, you only get paid for the time when you're on a delivery or have a passenger, and you bear the expense of the rest of the hours you spend prowling the streets, waiting for a call-out. This allows gig companies to build up a giant workforce that can absorb orders when they come in, while shifting the friction of living on half-wages to the workers, who only get paid on the way out to a delivery, but not on the way back.
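A minimal sketch of the tip-credit arithmetic above, assuming the current federal rates ($2.13/hour cash floor, $7.25/hour minimum wage); the function name, the cents-based bookkeeping and the two scenarios are illustrative, not any official payroll formula:

```python
# Sketch of the federal tip-credit arithmetic, in cents to avoid floating-point dust.
# The hourly rates are the current federal figures; the scenarios are the ones from the text.

FEDERAL_MIN_WAGE = 725    # cents/hour: what the worker must end up with
TIPPED_CASH_FLOOR = 213   # cents/hour: the "tipped minimum wage" the boss must pay in cash

def shift_pay(tips: int, hours: int = 8) -> dict:
    """Who pays what for one shift, assuming the boss claims the maximum tip credit."""
    guaranteed = FEDERAL_MIN_WAGE * hours   # minimum-wage floor for the whole shift
    cash_floor = TIPPED_CASH_FLOOR * hours  # the least the boss can pay out of pocket
    # Tips count toward the guarantee; the boss only tops the worker up if tips fall short.
    employer_pays = max(cash_floor, guaranteed - tips)
    return {
        "employer_pays": employer_pays / 100,
        "worker_takes_home": (employer_pays + tips) / 100,
        "tip_credit_claimed": (guaranteed - employer_pays) / 100,
    }

print(shift_pay(tips=4100))  # one $41 tip in an eight-hour shift
print(shift_pay(tips=0))     # empty restaurant: no tips at all
```

Run with a $41 tip, it shows the boss paying $17.04 while the server takes home $58.04; run with no tips, the boss has to cover the full $58.00, which is exactly the friction the tip credit lets him shove back onto the server.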
Return to office? An exercise in pure friction-shifting. The friction your boss experiences from furiously fantasizing about how lazy you're being at home is swapped for the friction of your commute, the friction of having to reschedule deliveries that you weren't home to sign for, the friction of having to eat a packed lunch or waste your pay on overpriced, additive/grease/salt/sugar-laden quick-service food.

The airline that fires most of its customer service staff shifts operational frictions onto passengers, from the friction of arriving two hours early to see one of the few check-in clerks to the friction of waiting for three hours on hold to rebook a canceled flight or find a lost bag.

Southwest really takes the cake here. Remember a couple of years ago when Southwest stranded one million passengers over Christmas week because its computers had all crashed? Turns out that the main thing SWA was doing with those computers was running a friction-shifting shell-game with its airplanes, pilots, flight attendants and passengers. SWA would sell tickets for more flights than it had planes, and then cancel the flights that had sold the fewest tickets:

https://pluralistic.net/2023/01/10/the-courage-to-govern/#whos-in-charge

That's quite a magnificent piece of friction-shifting. SWA is relieved of the friction of buying and maintaining enough planes to fly all the flights it sells. They don't have to bear the friction of guessing which planes will and won't be full in advance. But SWA passengers get all the friction and more, when their flight is canceled because other people – whom they have no control over – failed to buy enough tickets for it.

Southwest "reduced friction" for its shareholders at the expense of its employees and customers. Other businesses "reduce friction" for one favored group at the expense of another, like Google, whose YouTube Content ID system makes it trivial to file a copyright takedown notice but hard-to-impossible to get your work reinstated when you are falsely accused:

https://pluralistic.net/2024/06/27/nuke-first/#ask-questions-never

That's shifting friction from large rightsholders (who can get infringing work removed without a trial) to creators (who don't get a day in court before their work is censored).

Meanwhile, food delivery platforms shift friction onto restaurants, conscripting them into delivery services without their permission:

https://pluralistic.net/2020/05/18/code-is-speech/#schadenpizza

And onto drivers, who don't even rate the tipped minimum wage. For all that these companies come up with names for themselves like "Seamless," they are 100 percent seam, but those seams are shifted onto people without political or economic power.

The MBA mind-virus turns its victims into "optimization"-obsessed zombies, but what they mean by "optimization" is that you will optimize your life to their benefit. HP uses software locks to "optimize" its printer business, forcing you to buy ink at $10,000/gallon:

https://pluralistic.net/2024/09/30/life-finds-a-way/#ink-stained-wretches

And Uber "optimizes" its drivers by spying on them and paying them less when the algorithm infers that they are more economically desperate:

https://len-sherman.medium.com/how-uber-became-a-cash-generating-machine-ef78e7a97230

A better world is one in which the people optimize corporations and billionaires – by cutting them down to size and shattering their power. It's a world in which amassing obscene amounts of money and market power creates friction, in the form of endless regulatory and tax scrutiny.
It's a world where public transit has priority and private cars are taxed for slowing the rest of us down as we go about our days. It's a world where workers are frictionless: protected from noncompete agreements and baroque wage theft schemes like those used to impoverish service and gig workers. It's a world where bosses experience friction, in the form of obligations to the workers whose labor generates their wealth.

I really believe that – politically speaking – friction can't be destroyed, only redistributed. And I'm fine with that, really – provided we're redistributing it upwards.

Hey look at this (permalink)

I reported from an ICE action on Sansome and all I got was a face full of pepper spray https://sf.gazetteer.co/i-reported-from-an-ice-action-on-sansome-and-all-i-got-was-a-face-full-of-pepper-spray
When Trump's Brain Broke https://www.unpopularfront.news/p/when-trumps-brain-broke
Bank forced to rehire workers after lying about chatbot productivity, union says https://arstechnica.com/tech-policy/2025/08/bank-forced-to-rehire-workers-after-lying-about-chatbot-productivity-union-says/
I Made a Floppy Disk from Scratch https://www.youtube.com/watch?v=TBiFGhnXsh8
America's Kryptonite https://orphansandempires.substack.com/p/americas-kryptonite

Object permanence (permalink)

#20yrsago Bruce Sterling gets an honorary doctorate https://web.archive.org/web/20051226234102/http://wiredblogs.tripod.com/sterling/index.blog?entry_id=1200350
#20yrsago Beastie Boys release vocals-only tracks to encourage remixers https://web.archive.org/web/20050930220256/http://www.beastieboys.com/remixers.php
#20yrsago What was in Buddy Holly's plane-crash overnight bag? https://web.archive.org/web/20051023162927/http://www.rockin50s.com/bag.htm
#20yrsago Warner Music CEO calls for iPod taxes, levies — twirls moustache and cackles, clatters away on tiny, ebony hooves https://web.archive.org/web/20050910183217/http://news.com.com/Warner+Music+readies+CD-free+e-label/2100-1027_3-5841355.html
#20yrsago Customers of new UK ISP get to share all Sony music on P2P https://www.theguardian.com/technology/2005/aug/22/media.newmedia
#10yrsago Greece's creditors demand casino rights, archaeological sites, selloff of EUR50B of national assets https://web.archive.org/web/20150824062007/https://www.globaljustice.org.uk/blog/2015/aug/19/greece-sale-–-and-everything-must-go
#5yrsago Zombie postcapitalism https://pluralistic.net/2020/08/22/this-machine-kills-fascists/#varoufakis

Upcoming appearances (permalink)

Ithaca: AD White keynote (Cornell), Sep 12 https://deanoffaculty.cornell.edu/events/keynote-cory-doctorow-professor-at-large/
DC: Enshittification at Politics and Prose, Oct 8 https://politics-prose.com/cory-doctorow-10825
New Orleans: DeepSouthCon63, Oct 10-12 http://www.contraflowscifi.org/
Chicago: Enshittification with Kara Swisher (Chicago Humanities), Oct 15 https://www.oldtownschool.org/concerts/2025/10-15-2025-kara-swisher-and-cory-doctorow-on-enshittification/
San Francisco: Enshittification at Public Works (The Booksmith), Oct 20 https://app.gopassage.com/events/doctorow25
Miami: Enshittification at Books & Books, Nov 5 https://www.eventbrite.com/e/an-evening-with-cory-doctorow-tickets-1504647263469

Recent appearances (permalink)

Divesting from Amazon's Audible and the Fight for Digital Rights (Libro.fm) https://pocketcasts.com/podcasts/9349e8d0-a87f-013a-d8af-0acc26574db2/00e6cbcf-7f27-4589-a11e-93e4ab59c04b
The Utopias Podcast https://www.buzzsprout.com/2272465/episodes/17650124
Tariffs vs IP Law (Firewalls Don't Stop Dragons) https://www.youtube.com/watch?v=LFABFe-5-uQ

Latest books (permalink)

"Picks and Shovels": a sequel to "Red Team Blues," about the heroic era of the PC, Tor Books (US), Head of Zeus (UK), February 2025 (https://us.macmillan.com/books/9781250865908/picksandshovels).
"The Bezzle": a sequel to "Red Team Blues," about prison-tech and other grifts, Tor Books (US), Head of Zeus (UK), February 2024 (the-bezzle.org).
"The Lost Cause": a solarpunk novel of hope in the climate emergency, Tor Books (US), Head of Zeus (UK), November 2023 (http://lost-cause.org).
"The Internet Con": a nonfiction book about interoperability and Big Tech, Verso, September 2023 (http://seizethemeansofcomputation.org). Signed copies at Book Soup (https://www.booksoup.com/book/9781804291245).
"Red Team Blues": "A grabby, compulsive thriller that will leave you knowing more about how the world works than you did before." Tor Books, http://redteamblues.com.
"Chokepoint Capitalism: How to Beat Big Tech, Tame Big Content, and Get Artists Paid," with Rebecca Giblin, on how to unrig the markets for creative labor, Beacon Press/Scribe 2022, https://chokepointcapitalism.com

Upcoming books (permalink)

"Canny Valley": a limited-edition collection of the collages I create for Pluralistic, self-published, September 2025
"Enshittification: Why Everything Suddenly Got Worse and What to Do About It," Farrar, Straus and Giroux, October 7, 2025 https://us.macmillan.com/books/9780374619329/enshittification/
"Unauthorized Bread": a middle-grades graphic novel adapted from my novella about refugees, toasters and DRM, FirstSecond, 2026
"Enshittification: Why Everything Suddenly Got Worse and What to Do About It" (the graphic novel), FirstSecond, 2026
"The Memex Method," Farrar, Straus and Giroux, 2026
"The Reverse-Centaur's Guide to AI," a short book about being a better AI critic, Farrar, Straus and Giroux, 2026

Colophon (permalink)

Today's top sources:

Currently writing: "The Reverse-Centaur's Guide to AI," a short book for Farrar, Straus and Giroux about being an effective AI critic. (1025 words yesterday, 40200 words total)
A Little Brother short story about DIY insulin (PLANNING)

This work – excluding any serialized fiction – is licensed under a Creative Commons Attribution 4.0 license. That means you can use it any way you like, including commercially, provided that you attribute it to me, Cory Doctorow, and include a link to pluralistic.net. https://creativecommons.org/licenses/by/4.0/

Quotations and images are not included in this license; they are included either under a limitation or exception to copyright, or on the basis of a separate license. Please exercise caution.
How to get Pluralistic:

Blog (no ads, tracking, or data-collection): Pluralistic.net
Newsletter (no ads, tracking, or data-collection): https://pluralistic.net/plura-list
Mastodon (no ads, tracking, or data-collection): https://mamot.fr/@pluralistic
Medium (no ads, paywalled): https://doctorow.medium.com/
Twitter (mass-scale, unrestricted, third-party surveillance and advertising): https://twitter.com/doctorow
Tumblr (mass-scale, unrestricted, third-party surveillance and advertising): https://mostlysignssomeportents.tumblr.com/tagged/pluralistic

"When life gives you SARS, you make sarsaparilla" -Joey "Accordion Guy" DeVilla

READ CAREFULLY: By reading this, you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies ("BOGUS AGREEMENTS") that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer.

ISSN: 3066-764X

AI Roundup 132: The B-word

August 22, 2025.
