This article is excerpted from Every American an Innovator: How Innovation Became a Way of Life, by Matthew Wisnioski (The MIT Press, 2025).

Imagine a point-to-point transportation service in which two parties communicate at a distance. A passenger in need of a ride contacts the service via phone. A complex algorithm based on time, distance, and volume informs both passenger and driver of the journey’s cost before it begins. This novel business plan promises efficient service and lower costs. It has the potential to disrupt an overregulated taxi monopoly in cities across the country. Its enhanced transparency may even reduce racial discrimination by preestablishing pickups regardless of race.

The proposal, however, dates from the 1970s, and it came out of the Center for Entrepreneurial Development (CED) at Carnegie Mellon University. The dial-a-ride service was designed to resurrect a defunct cab company that had once served Pittsburgh’s African American neighborhoods. Funded by the National Science Foundation, the CED was envisioned as an innovation “hatchery,” intended to challenge the norms of research science and higher education, foster risk-taking, birth campus startups focused on market-based technological solutions to social problems, and remake American science to serve national needs.

Are innovators born or made? During the Cold War, the model for training scientists and engineers in the United States was one of manpower in service to a linear model of innovation: Scientists pursued “basic” discovery in universities and federal laboratories; engineer–scientists conducted “applied” research elsewhere on campus; engineers developed those ideas in giant teams for companies such as Lockheed and Boeing; and research managers oversaw the whole process. This model dictated national science policy, elevated the scientist as a national hero in pursuit of truth beyond politics, and pumped hundreds of millions of dollars into higher education. In practice, the lines between basic and applied research were blurred, but the perceived hierarchy was integral to the NSF and the university research culture that it helped to foster.

If innovators could be made rather than simply born, the question was, how? And would the universities be willing to remake themselves to support innovation?

The NSF experiments with innovation

At the Utah Innovation Center, engineering students John DeJong and Douglas Kihm worked on a programmable electronics breadboard. Photo: Special Collections, J. Willard Marriott Library, The University of Utah

In 1972, NSF director H. Guyford Stever established the Office of Experimental R&D Incentives to “incentivize” innovation for national needs by supporting research on “how the government [could] most effectively accelerate the transfer of new technology into productive enterprise.” Stever stressed the experimental nature of the program because many in the NSF and the scientific community resisted the idea of goal-directed research. Innovation, with its connotations of profit and social change, was even more suspect.

To lead the initiative, Stever appointed C.B. Smith, a research manager at United Aircraft Corp., who in turn brought in engineers with industrial experience, including Robert Colton, an automotive engineer. Colton led the university Innovation Center experiment that gave rise to Carnegie Mellon’s CED. The NSF chose four universities that captured a range of approaches to innovation incubation.
MIT targeted undergrads through formal coursework and an innovation “co-op” that assisted in turning ideas into products. The University of Oregon evaluated the ideas of garage inventors from across the country. The University of Utah emphasized an ecosystem of biotech and computer graphics startups coming out of its research labs. And Carnegie Mellon established a nonprofit corporation to support graduate student ventures, including the dial-a-ride service.

Grad student Fritz Faulhaber holds one of the radio-coupled taxi meters that Carnegie Mellon students installed in Pittsburgh cabs in the 1970s. Photo: Ralph Guggenheim; Jerome McCavitt/Carnegie-Mellon Alumni News

Carnegie Mellon got one of the first university incubators

Carnegie Mellon had all the components that experts believed were necessary for innovation: strong engineering, a world-class business school, novel approaches to urban planning with a focus on community needs, and a tradition of industrial design and the practical arts. CMU leaders claimed that the school was smaller, younger, more interdisciplinary, and more agile than MIT.

Leading the CED was Dwight Baumann, who exemplified a new kind of educator-entrepreneur. The son of North Dakota farmers, he had graduated from North Dakota State University, then headed to MIT for a Ph.D. in mechanical engineering, where he discovered a love of teaching. He also garnered a reputation as an unusually creative engineer with an interest in solving problems that addressed human needs. In the 1950s and 1960s, first as a student and then as an MIT professor, Baumann helped develop one of the first computer-aided-design programs, as well as computer interfaces for the blind and the nation’s first dial-a-ride paratransit system.

Dwight Baumann, director of Carnegie Mellon’s Center for Entrepreneurial Development, believed that a modern university should provide entrepreneurial education. Photo: Carnegie Mellon University Archives

The CED’s mission was to support entrepreneurs in the earliest stages of the innovation process, when they needed space and seed funding. It created an environment for students to make a “sequence of nonfatal mistakes,” so they could fail and develop self-confidence for navigating the risks and uncertainties of entrepreneurial life. It targeted graduate students who already had advanced scientific and engineering training and a viable idea for a business.

Carnegie Mellon’s dial-a-ride service replicated the Peoples Cab Co., which had provided taxi service to Black communities in Pittsburgh. Photo: Charles “Teenie” Harris/Carnegie Museum of Art/Getty Images

A few CED students did create successful startups. The breakout hit was Compuguard, founded by electrical engineering Ph.D. students Romesh Wadhwani and Krishnahadi Pribad, who hailed from India and Indonesia, respectively. The pair spent 18 months developing a security bracelet that used wireless signals to protect vulnerable people in dangerous work environments. But after failing to convert their prototype into a working design, they pivoted to a security- and energy-monitoring system for schools, prisons, and warehouses. Wadhwani went on to a long entrepreneurial career; today his Wadhwani Foundation supports innovation and entrepreneurship education worldwide, particularly in emerging economies.

Entrepreneurial education was also taking hold at the Wharton School and elsewhere. In 1983, Baumann’s onetime partner Jack Thorne took the lead of the new Enterprise Corp., which aimed to help Pittsburgh’s entrepreneurs raise venture capital. Baumann was kicked out of his garage to make room for the initiative.

Was the NSF’s experiment in innovation a success?
As the university Innovation Center experiment wrapped up in the late 1970s, the NSF patted itself on the back in a series of reports, conferences, and articles. “The ultimate effect of the Innovation Centers,” it stated, would be “the regrowth of invention, innovation, and entrepreneurship in the American economic system.” The NSF claimed that the experiment produced dozens of new ventures with US $20 million in gross revenue, employed nearly 800 people, and yielded $4 million in tax revenue. Yet, by 1979, license returns from intellectual property had generated only $100,000.

Critics included Senator William Proxmire of Wisconsin, who pointed to the banana peelers, video games, and sports equipment pursued in the centers to lambast them as “wasteful federal spending” of “questionable benefit to the American taxpayer.”

And so the impacts of the NSF’s Innovation Center experiment weren’t immediately obvious. Many faculty and administrators of that era were still apt to view such programs as frivolous, nonacademic, or not worth the investment. Today, however, the legacies of the NSF experiment are visible on nearly every college campus.
Eight years is a long time in the world of patents. When we last published what we then called the Patent Power Scorecard, in 2017, it was a different technological and social landscape—Google had just filed a patent application on the transformer architecture, a momentous advance that spawned the generative AI revolution. China was just beginning to produce quality, affordable electric vehicles at scale. And the COVID pandemic wasn’t on anyone’s dance card.

Eight years is also a long time in the world of magazines, where we regularly play around with formats for articles and infographics. We now have more readers online than we do in print, so our art team is leveraging advances in interactive design software to make complex datasets grokkable at a glance, whether you’re on your phone or flipping through the pages of the magazine. The scorecard’s return in this issue follows the return last month of The Data, which ran as our back page for several years; it’s curated by a different editor every month and edited by Editorial Director for Content Development Glenn Zorpette.

As we set out to recast the scorecard for this decade, we sought to strike the right balance between comprehensiveness and clarity, especially on a mobile-phone screen. As our Digital Product Designer Erik Vrielink, Assistant Editor Gwendolyn Rak, and Community Manager Kohava Mendelsohn explained to me, they wanted something that would be eye-catching while avoiding information overload. The solution they arrived at—a dynamic sunburst visualization—lets readers grasp the essential takeaways at a glance in print, while the digital version allows readers to dive as deep as they want into the data.

Working with sci-tech-focused data-mining company 1790 Analytics, which we partnered with on the original Patent Power Scorecard, the team prioritized three key metrics or characteristics: patent Pipeline Power (which goes beyond mere quantity to assess quality and impact), number of patents, and the country where companies are based. This last characteristic has become increasingly significant as geopolitical tensions reshape the global technology landscape. As 1790 Analytics cofounders Anthony Breitzman and Patrick Thomas note, the next few years could be particularly interesting as organizations adjust their patenting strategies in response to changing market access.

Some trends leap out immediately. In consumer electronics, Apple dominates Pipeline Power despite having a patent portfolio one-third the size of Samsung’s—a testament to the Cupertino company’s focus on high-impact innovations. The aerospace sector has seen dramatic consolidation, with RTX (formerly Raytheon Technologies) now encompassing multiple subsidiaries that appear separately on our scorecard. And in the university rankings, Harvard has seized the top spot from traditional tech powerhouses like MIT and Stanford, driven by patents that are more often cited as prior art in other recent patents.

And then there are the subtle shifts that become apparent only when you dig deeper into the data. SEL (Semiconductor Energy Laboratory) rose above TSMC (Taiwan Semiconductor Manufacturing Co.) in semiconductor design despite having far fewer patents, which suggests again that true innovation isn’t just about filing patents—it’s about creating technologies that others build upon. Looking ahead, the real test will be how these patent portfolios translate into actual products and services.
Patents are promises of innovation; the scorecard helps us see which companies are making those promises and backing them with the R&D investments to realize them. As we enter an era when technological leadership increasingly determines economic and strategic power, understanding these patterns is more crucial than ever.
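The Pipeline Power metric itself is 1790 Analytics’ proprietary methodology, but the basic idea, weighting a portfolio by how often its patents are cited rather than by raw count, can be illustrated with a toy calculation. The sketch below is only that: the companies, citation counts, and weighting scheme are invented for illustration and are not the scorecard’s actual formula.

```python
# Toy illustration of a citation-weighted patent score.
# NOT 1790 Analytics' Pipeline Power methodology; all numbers are invented.

def citation_weighted_score(portfolio, cohort_avg_citations):
    """Score a portfolio by impact rather than raw count.

    Each patent contributes its forward-citation count normalized by the
    cohort average, so a small portfolio of highly cited patents can
    outscore a much larger, lightly cited one.
    """
    return sum(cites / cohort_avg_citations for cites in portfolio)

# Hypothetical portfolios: forward-citation counts per patent (illustrative only).
company_a = [30, 25, 22, 18]                       # few patents, heavily cited
company_b = [3, 2, 4, 1, 2, 3, 5, 2, 1, 3, 2, 4]   # many patents, lightly cited

all_patents = company_a + company_b
cohort_avg = sum(all_patents) / len(all_patents)

for name, portfolio in [("Company A", company_a), ("Company B", company_b)]:
    score = citation_weighted_score(portfolio, cohort_avg)
    print(f"{name}: {len(portfolio)} patents, impact score {score:.1f}")
```

With these made-up numbers, the four-patent portfolio outscores the twelve-patent one, the same dynamic that puts Apple ahead of Samsung in Pipeline Power despite a far smaller portfolio.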
I remember eagerly downloading the photos that Sojourner sent back of the Martian surface during the summer of 1997. I was not alone. The servers at NASA’s Jet Propulsion Lab slowed to a crawl when they got more than 47 million hits (a record number!) from people attempting to download those early images of the Red Planet. To be fair, it was the late 1990s, the Internet was still young, and most people were using dial-up modems. By the end of the 83-day mission, Sojourner had sent back 550 photos and performed more than 15 chemical analyses of Martian rocks and soil.

Sojourner, of course, remains on Mars. Pictured here is Marie Curie, its twin. Functionally identical, either one of the rovers could have made the voyage to Mars, but one of them was bound to become the famous face of the mission, while the other was destined to be left behind in obscurity. Did I write this piece because I feel a little bad for Marie Curie? Maybe. But it also gave me a chance to revisit this pioneering Mars mission, which established that robots could effectively explore the surface of planets and captivate the public imagination.

Sojourner’s sojourn on Mars

On 4 July 1997, the Mars Pathfinder parachuted through the Martian atmosphere and bounced about 15 times on glorified airbags before finally coming to a rest. The lander, renamed the Carl Sagan Memorial Station, carried precious cargo stowed inside. The next day, after the airbags retracted, the solar-powered Sojourner eased its way down the ramp, the first human-made vehicle to roll around on the surface of another planet. (Mars wasn’t the first extraterrestrial body to host a rover, though. The Soviet Lunokhod rovers conducted two successful missions on the moon, in 1970 and 1973. The Soviets had also landed a rover on Mars back in 1971, but communication was lost before the PROP-M ever deployed.)

This giant sandbox at JPL provided Marie Curie with an approximation of Martian terrain. Photo: Mike Nelson/AFP/Getty Images

Sojourner was equipped with three low-resolution cameras (two on the front for black-and-white images and a color camera on the rear), a laser hazard–avoidance system, an alpha-proton X-ray spectrometer, experiments for testing wheel abrasion and material adherence, and several accelerometers. The robot also demonstrated the value of the six-wheeled “rocker-bogie” suspension system that became NASA’s go-to design for all later Mars rovers. Sojourner never roamed more than about 12 meters from the lander due to the limited range of its radio.

Pathfinder had landed in Ares Vallis, a presumed ancient floodplain chosen because of the wide variety of rocks present. Scientists hoped to confirm the past existence of water on the surface of Mars. Sojourner did discover rounded pebbles that suggested running water, and later missions confirmed it.

A highlight of Sojourner’s 83-day mission on Mars was its encounter with a rock nicknamed Barnacle Bill [to the rover’s left]. Photo: JPL/NASA

Sojourner rolled forward 36 centimeters and encountered a rock, dubbed Barnacle Bill due to its rough surface. The rover spent about 10 hours analyzing the rock, using its spectrometer to determine the elemental composition. Over the next few weeks, while the lander collected atmospheric information and took photos, the rover studied rocks in detail and tested the Martian soil.

Marie Curie’s sojourn…in a JPL sandbox

Meanwhile, back on Earth, engineers at JPL used Marie Curie to mimic Sojourner’s movements in a Mars-like setting.
During the original design and testing of the rovers, the team had set up giant sandboxes, each holding thousands of kilograms of playground sand, in the Space Flight Operations Facility at JPL. They exhaustively practiced the remote operation of Sojourner, including the 11-minute delay in communications between Mars and Earth. (The actual delay can vary from 7 to 20 minutes.) Even after Sojourner landed, Marie Curie continued to help them strategize.

Initially, Sojourner was remotely operated from Earth, which was tricky given the lengthy communication delay. Photo: Mike Nelson/AFP/Getty Images

Sojourner was maneuvered by an Earth-based operator wearing 3D goggles and using a funky input device called a Spaceball 2003. Images pieced together from both the lander and the rover guided the operator. It was like a very, very slow video game—the rover sometimes moved only a few centimeters a day. NASA then turned on Sojourner’s hazard-avoidance system, which allowed the rover some autonomy to explore its world. A human would suggest a path for that day’s exploration, and then the rover had to autonomously avoid any obstacles in its way, such as a big rock, a cliff, or a steep slope.

NASA had expected Sojourner to operate for a week. But the little rover that could kept chugging along for 83 Martian days before NASA finally lost contact, on 7 October 1997. The lander had conked out on 27 September. In all, the mission collected 1.2 gigabytes of data (which at the time was a lot) and sent back 10,000 images of the planet’s surface.

NASA held on to Marie Curie with the hope of sending it on another mission to Mars. For a while, it was slated to be part of the Mars 2001 set of missions, but that didn’t happen. In 2015, JPL transferred the rover to the Smithsonian’s National Air and Space Museum.

When NASA Embraced Faster, Better, Cheaper

The Pathfinder mission was the second one in NASA administrator Daniel S. Goldin’s Discovery Program, which embodied his “faster, better, cheaper” philosophy of making NASA more nimble and efficient. (The first Discovery mission was to the asteroid Eros.) In the financial climate of the early 1990s, the space agency couldn’t risk a billion-dollar loss if a major mission failed. Goldin opted for smaller projects; the Pathfinder mission’s overall budget, including flight and operations, was capped at US $300 million.

In his 2014 book Curiosity: An Inside Look at the Mars Rover Mission and the People Who Made It Happen (Prometheus), science writer Rod Pyle interviews Rob Manning, chief engineer for the Pathfinder mission and subsequent Mars rovers. Manning recalled that one of the best things about the mission was its relatively minimal requirements. The team was responsible for landing on Mars, delivering the rover, and transmitting images—technically challenging, to be sure, but beyond that the team had no constraints.

Sojourner was succeeded by the rovers Spirit, Opportunity, and Curiosity. Shown here are four mission spares, including Marie Curie [foreground]. Photo: JPL-Caltech/NASA

The heaters that kept Sojourner’s electronics warm enough to operate were leftover spares from the Galileo mission to Jupiter, so they were “free.” Not only was the Pathfinder mission successful, but it captured the hearts of Americans and reinvigorated an interest in exploring Mars. In the process, it set the foundation for the future missions that allowed the rovers Spirit, Opportunity, and Curiosity (which, incredibly, is still operating nearly 13 years after it landed) to explore even more of the Red Planet.
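As an aside on the communication delays mentioned above: radio signals travel at the speed of light, so the one-way delay is simply the Earth–Mars distance divided by c. Here is a minimal sketch, using rough, round-number distances rather than actual ephemeris data.

```python
# One-way radio-signal delay between Earth and Mars at a few
# approximate separations (distances are rough illustrative values).

SPEED_OF_LIGHT_KM_S = 299_792.458

def one_way_delay_minutes(distance_km):
    """Return the light-travel time in minutes for a given distance."""
    return distance_km / SPEED_OF_LIGHT_KM_S / 60

separations_km = {
    "close approach (~55 million km)": 55e6,
    "mid-range (~200 million km)": 200e6,
    "near maximum (~400 million km)": 400e6,
}

for label, distance in separations_km.items():
    print(f"{label}: about {one_way_delay_minutes(distance):.1f} minutes one way")
```

At roughly 200 million kilometers of separation, the one-way delay works out to about 11 minutes, which matches the delay JPL built into its sandbox rehearsals.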
How the rovers Sojourner and Marie Curie got their names

To name its first Mars rovers, NASA launched a student contest in March 1994, with the specific guidance of choosing a “heroine.” Entry essays were judged on their quality and creativity, the appropriateness of the name for a rover, and the student’s knowledge of the woman to be honored as well as the mission’s goals. Students from all over the world entered. The winning entry proposed naming the rover for Sojourner Truth, while 18-year-old Deepti Rohatgi of Rockville, Md., came in second for her essay on Marie Curie.

Truth was a Black woman born into slavery at the end of the 18th century. She escaped with her infant daughter and two years later won freedom for her son through legal action. She became a vocal advocate for civil rights, women’s rights, and alcohol temperance. Curie was a Polish-French physicist and chemist famous for her studies of radioactivity, a term she coined. She was the first woman to win a Nobel Prize, as well as the first person to win a second Nobel.

NASA has continued to honor pioneering women, including Nancy Grace Roman, the space agency’s first chief of astronomy. In May 2020, NASA announced it would name the Wide Field Infrared Survey Telescope after Roman; the space telescope is set to launch as early as October 2026, although the Trump administration has repeatedly said it wants to cancel the project.

NASA revised its naming policy in December 2022 after allegations came to light that James Webb, for whom the James Webb Space Telescope is named, had fired LGBTQ+ employees at NASA and, before that, the State Department. A NASA investigation couldn’t substantiate the allegations, and so the telescope retained Webb’s name. But the bar is now much higher for NASA projects to memorialize anyone, deserving or otherwise. (The agency did allow the hopping lunar robot IM-2 Micro Nova Hopper, built by Intuitive Machines, to be named for computer-software pioneer Grace Hopper.)

Marie Curie and Sojourner will remain part of a rarefied clique. Sojourner, inducted into the Robot Hall of Fame in 2003, will always be the celebrity of the pair. And Marie Curie will always remain on the sidelines. But think about it this way: Marie Curie is now on exhibit at one of the most popular museums in the world, where millions of visitors can see the rover up close. That’s not too shabby a legacy either.

Part of a continuing series looking at historical artifacts that embrace the boundless potential of technology.

An abridged version of this article appears in the June 2025 print issue.

References

Curator Matthew Shindell of the National Air and Space Museum first suggested I feature Marie Curie. I found additional information from the museum’s collections website, an article by David Kindy in Smithsonian magazine, and the book After Sputnik: 50 Years of the Space Age (Smithsonian Books/HarperCollins, 2007) by Smithsonian curator Martin Collins.

NASA has numerous resources documenting the Mars Pathfinder mission, such as the mission website, fact sheet, and many lovely photos (including some of Barnacle Bill and a composite of Marie Curie during a prelaunch test).

Curiosity: An Inside Look at the Mars Rover Mission and the People Who Made It Happen (Prometheus, 2014) by Rod Pyle and Roving Mars: Spirit, Opportunity, and the Exploration of the Red Planet (Hyperion, 2005) by planetary scientist Steve Squyres are both about later Mars missions and their rovers, but they include foundational information about Sojourner.
[Note that this article is a transcript of the video embedded above.] Wichita Falls, Texas, went through the worst drought in its history in 2011 and 2012. For two years in a row, the area saw its average annual rainfall roughly cut in half, decimating the levels in the three reservoirs used for the city’s water supply. Looking ahead, the city realized that if the hot, dry weather continued, they would be completely out of water by 2015. Three years sounds like a long runway, but when it comes to major public infrastructure projects, it might as well be overnight. Between permitting, funding, design, and construction, three years barely gets you to the starting line. So the city started looking for other options. And they realized there was one source of water nearby that was just being wasted - millions of gallons per day just being flushed down the Wichita River. I’m sure you can guess where I’m going with this. It was the effluent from their sewage treatment plant. The city asked the state regulators if they could try something that had never been done before at such a scale: take the discharge pipe from the wastewater treatment plant and run it directly into the purification plant that produces most of the city’s drinking water. And the state said no. So they did some more research and testing and asked again. By then, the situation had become an emergency. This time, the state said yes. And what happened next would completely change the way cities think about water. I’m Grady and this is Practical Engineering. You know what they say, wastewater happens. It wasn’t that long ago that raw sewage was simply routed into rivers, streams, or the ocean to be carried away. Thankfully, environmental regulations put a stop to that, or at least significantly curbed the amount of wastewater being set loose without treatment. Wastewater plants across the world do a pretty good job of removing pollutants these days. In fact, I have a series of videos that go through some of the major processes if you want to dive deeper after this. In most places, the permits that allow these plants to discharge set strict limits on contaminants like organics, suspended solids, nutrients, and bacteria. And in most cases, they’re individualized. The permit limits are based on where the effluent will go, how that water body is used, and how well it can tolerate added nutrients or pollutants. And here’s where you start to see the issue with reusing that water: “clean enough” is a sliding scale. Depending on how water is going to be used or what or who it’s going to interact with, our standards for cleanliness vary. If you have a dog, you probably know this. They should drink clean water, but a few sips of a mud puddle in a dirty street, and they’re usually just fine. For you, that might be a trip to the hospital. Natural systems can tolerate a pretty wide range of water quality, but when it comes to drinking water for humans, it should be VERY clean. So the easiest way to recycle treated wastewater is to use it in ways that don’t involve people. That idea’s been around for a while. A lot of wastewater treatment plants apply effluent to land as a disposal method, avoiding the need for discharge to a natural water body. Water soaks into the ground, kind of like a giant septic system. But that comes with some challenges. It only works if you’ve got a lot of land with no public access, and a way to keep the spray from drifting into neighboring properties. 
Easy at a small scale, but for larger plants, it just isn’t practical engineering. Plus, the only benefits a utility gets from the effluent are some groundwater recharge and maybe a few hay harvests per season. So, why not send the effluent to someone else who can actually put it to beneficial use? If only it were that simple. As soon as a utility starts supplying water to someone else, things get complicated because you lose a lot of control over how the effluent is used. Once it's out of your hands, so to speak, it’s a lot harder to make sure it doesn’t end up somewhere it shouldn’t, like someone’s mouth. So, naturally, the permitting requirements become stricter. Treatment processes get more complicated and expensive. You need regular monitoring, sampling, and laboratory testing. In many places in the world, reclaimed water runs in purple pipes so that someone doesn’t inadvertently connect to the lines thinking they’re potable water. In many cases, you need an agreement in place with the end user, making sure they’re putting up signs, fences, and other means of keeping people from drinking the water. And then you need to plan for emergencies - what to do if a pipe breaks, if the effluent quality falls below the standards, or if a cross-connection is made accidentally. It’s a lot of work - time, effort, and cost - to do it safely and follow the rules. And those costs have to be weighed against the savings that reusing water creates. In places that get a lot of rain or snow, it’s usually not worth it. But in many US states, particularly those in the southwest, this is a major strategy to reduce the demand on fresh water supplies. Think about all the things we use water for where its cleanliness isn’t that important. Irrigation is a big one - crops, pastures, parks, highway landscaping, cemeteries - but that’s not all. Power plants use huge amounts of water for cooling. Street sweeping, dust control. In nearly the entire developed world, we use drinking-quality water to flush toilets! You can see where there might be cases where it makes good sense to reclaim wastewater, and despite all the extra challenges, its use is fairly widespread. One of the first plants was built in 1926 at Grand Canyon Village which supplied reclaimed water to a power plant and for use in steam locomotives. Today, these systems can be massive, with miles and miles of purple pipes run entirely separate from the freshwater piping. I’ve talked about this a bit on the channel before. I used to live near a pair of water towers in San Antonio that were at two different heights above ground. That just didn’t make any sense until I realized they weren’t connected; one of them was for the reclaimed water system that didn’t need as much pressure in the lines. Places like Phoenix, Austin, San Antonio, Orange County, Irvine, and Tampa all have major water reclamation programs. And it’s not just a US thing. Abu Dhabi, Beijing, and Tel Aviv all have infrastructure to make beneficial use of treated municipal wastewater, just to name a few. Because of the extra treatment and requirements, many places put reclaimed water in categories based on how it gets used. The higher the risk of human contact, the tighter the pollutant limits get. For example, if a utility is just selling effluent to farmers, ranchers, or for use in construction, exposure to the public is minimal. Disinfecting the effluent with UV or chlorine may be enough to meet requirements. And often that’s something that can be added pretty simply to an existing plant. 
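To make that tiering concrete, here is a minimal sketch of how a utility might encode risk-based reuse categories in software. The category names, exposure labels, and required treatment steps are illustrative assumptions for this sketch, not any particular state’s actual rules.

```python
# Illustrative sketch of risk-based reclaimed-water categories.
# Category names and required treatment steps are invented for illustration,
# not taken from any actual regulation.

RECLAIMED_WATER_TIERS = {
    "restricted": {          # e.g., farm irrigation, construction dust control
        "public_exposure": "minimal",
        "required_treatment": ["secondary treatment",
                               "disinfection (UV or chlorine)"],
    },
    "unrestricted": {        # e.g., golf courses, schoolyards, cooling towers
        "public_exposure": "likely",
        "required_treatment": ["secondary treatment", "filtration",
                               "nutrient removal", "enhanced disinfection"],
    },
}

def required_treatment(use_category):
    """Look up the treatment train an end use would call for in this sketch."""
    return RECLAIMED_WATER_TIERS[use_category]["required_treatment"]

print(required_treatment("restricted"))
print(required_treatment("unrestricted"))
```

Real programs define more categories and far more detailed criteria; the point is simply that the required treatment train scales with the expected degree of human contact.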
But many reclaimed water users are things like golf courses, schoolyards, sports fields, and industrial cooling towers, where people are more likely to be exposed. In those cases, you often need a sewage plant specifically designed for the purpose or at least major upgrades to include what the pros call tertiary treatment processes - ways to target pollutants we usually don’t worry about and improve the removal rates of the ones we do. These can include filters to remove suspended solids, chemicals that bind to nutrients, and stronger disinfection to more effectively kill pathogens. This creates a conundrum, though. In many cases, we treat wastewater effluent to higher standards than we normally would in order to reclaim it, but only for nonpotable uses, with strict regulations about human contact. But if it’s not being reclaimed, the quality standards are lower, and we send it downstream. If you know how rivers work, you probably see the inconsistency here. Because in many places, down the river, is the next city with its water purification plant whose intakes, in effect, reclaim that treated sewage from the people upstream. This isn’t theoretical - it’s just the reality of how humans interact with the water cycle. We’ve struggled with the problems it causes for ages. In 1906, Missouri sued Illinois in the Supreme Court when Chicago reversed their river, redirecting its water (and all the city’s sewage) toward the Mississippi River. If you live in Houston, I hate to break it to you, but a big portion of your drinking water comes from the flushes and showers in Dallas. There have been times when wastewater effluent makes up half of the flow in the Trinity River. But the question is: if they can do it, why can’t we? If our wastewater effluent is already being reused by the city downstream to purify into drinking water, why can’t we just keep the effluent for ourselves and do the same thing? And the answer again is complicated. It starts with what’s called an environmental buffer. Natural systems offer time to detect failures, dilute contaminants, and even clean the water a bit—sunlight disinfects, bacteria consume organic matter. That’s the big difference in one city, in effect, reclaiming water from another upstream. There’s nature in between. So a lot of water reclamation systems, called indirect potable reuse, do the same thing: you discharge the effluent into a river, lake, or aquifer, then pull it out again later for purification into drinking water. By then, it’s been diluted and treated somewhat by the natural systems. Direct potable reuse projects skip the buffer and pipe straight from one treatment plant to the next. There’s no margin for error provided by the environmental buffer. So, you have to engineer those same protections into the system: real-time monitoring, alarms, automatic shutdowns, and redundant treatment processes. Then there’s the issue of contaminants of emerging concern: pharmaceuticals, PFAS [P-FAS], personal care products - things that pass through people or households and end up in wastewater in tiny amounts. Individually, they’re in parts per billion or trillion. But when you close the loop and reuse water over and over, those trace compounds can accumulate. Many of these aren’t regulated because they’ve never reached concentrations high enough to cause concern, or there just isn’t enough knowledge about their effects yet. That’s slowly changing, and it presents a big challenge for reuse projects. 
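The accumulation concern can be made concrete with a simple mass balance. The sketch below assumes each use adds a fixed increment of a trace compound, treatment removes a fixed fraction of it, and a fixed share of the supply is recycled effluent; every number is an illustrative assumption, not data from any real system.

```python
# Toy mass balance for a trace contaminant in a water-reuse loop.
# All parameters are illustrative assumptions, not measurements.

def simulate_reuse_loop(added_per_use=1.0, removal_fraction=0.5,
                        recycle_fraction=0.9, cycles=30):
    """Track a trace compound through repeated reuse cycles.

    Each cycle: tap water picks up `added_per_use` during use, treatment
    removes `removal_fraction` of what's there, and the next batch of tap
    water is a blend of `recycle_fraction` recycled effluent and fresh
    water assumed to contain none of the compound.
    Returns the effluent concentration after each cycle.
    """
    tap = 0.0
    effluent_history = []
    for _ in range(cycles):
        effluent = (tap + added_per_use) * (1 - removal_fraction)
        effluent_history.append(effluent)
        tap = recycle_fraction * effluent
    return effluent_history

history = simulate_reuse_loop()
print(f"first pass effluent:      {history[0]:.2f} units")
print(f"after {len(history)} reuse cycles: {history[-1]:.2f} units")
```

With these made-up numbers, a compound that treatment removes only half of ends up nearly twice as concentrated in the effluent once 90 percent of the supply is recycled. That is the accumulation effect, and it is why source control and tighter removal matter more in a closed loop.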
They can be dealt with at the source by regulating consumer products, encouraging proper disposal of pharmaceuticals (instead of flushing them), and imposing pretreatment requirements for industries. It can also happen at the treatment plant with advanced technologies like reverse osmosis, activated carbon, advanced oxidation, and bioreactors that break down micro-contaminants. Either way, it adds cost and complexity to a reuse program. But really, the biggest problem with wastewater reuse isn’t technical - it’s psychological. The so-called “yuck factor” is real. People don’t want to drink sewage. Indirect reuse projects have a big benefit here. With some nature in between, it’s not just treated wastewater; it’s a natural source of water with treated wastewater in it. It’s kind of a story we tell ourselves, but we lose the benefit of that with direct reuse: Knowing your water came from a toilet—even if it’s been purified beyond drinking water standards—makes people uneasy. You might not think about it, but turning the tap on, putting that water in a glass, and taking a drink is an enormous act of trust. Most of us don’t understand water treatment and how it happens at a city scale. So that trust that it’s safe to drink largely comes from seeing other people do it and past experience of doing it over and over and not getting sick. The issue is that, when you add one bit of knowledge to that relative void of understanding - this water came directly from sewage - it throws that trust off balance. It forces you to rely not on past experience but on the people and processes in place, most of which you don’t understand deeply, and generally none of which you can actually see. It’s not as simple as just revulsion. It shakes up your entire belief system. And there’s no engineering fix for that. Especially for direct potable reuse, public trust is critical. So on top of the infrastructure, these programs also involve major public awareness campaigns. Utilities have to put themselves out there, gather feedback, respond to questions, be empathetic to a community’s values, and try to help people understand how we ensure water quality, no matter what the source is. But also, like I said, a lot of that trust comes from past experience. Not everyone can be an environmental engineer or licensed treatment plant operator. And let’s be honest - utilities can’t reach everyone. How many public meetings about water treatment have you ever attended? So, in many places, that trust is just going to have to be built by doing it right, doing it well, and doing it for a long time. But someone has to be first. In the U.S., at least on the city scale, that drinking water guinea pig was Wichita Falls. They launched a massive outreach campaign, invited experts for tours, and worked to build public support. But at the end of the day, they didn’t really have a choice. The drought really was that severe. They spent nearly four years under intense water restrictions. Usage dropped to a third of normal demand, but it still wasn’t enough. So, in collaboration with state regulators, they designed an emergency direct potable reuse system. They literally helped write the rules as they went, since no one had ever done it before. After two months of testing and verification, they turned on the system in July 2014. It made national headlines. The project ran for exactly one year. Then, in 2015, a massive flood ended the drought and filled the reservoirs in just three weeks.
The emergency system was always meant to be temporary. Water essentially went through three treatment plants: the wastewater plant, a reverse osmosis plant, and then the regular water purification plant. That’s a lot of treatment, which is a lot of expense, but they needed to have the failsafe and redundancy to get the state on board with the project. The pipe connecting the two plants was above ground and later repurposed for the city’s indirect potable reuse system, which is still in use today. In the end, they reclaimed nearly two billion gallons of wastewater as drinking water. And they did it with 100% compliance with the standards. But more importantly, they showed that it could be done, essentially unlocking a new branch on the skill tree of engineering that other cities can emulate and build on.