[Note that this article is a transcript of the video embedded above.] The original plan to get I-95 over the Baltimore Harbor was a double-deck bridge from Fort McHenry to Lazaretto Point. The problem with the plan was this: the bridge would have to be extremely high so that large ships could pass underneath, dwarfing and overshadowing one of the US’s most important historical landmarks. Fort McHenry famously repelled a massive barrage and attack from the British Navy in the War of 1812, and inspired what would later become the national anthem. An ugly bridge would detract from its character, and a beautiful one would compete with it. So they took the high road by building a low road and decided to go underneath the harbor instead. Rather than bore a tunnel through the soil and rock below like the Channel Tunnel, the entire thing was prefabricated in sections and installed from the water surface above - a construction technique called immersed tube tunneling. This seems kind of simple...


More from Blog - Practical Engineering

How Sewage Recycling Works

[Note that this article is a transcript of the video embedded above.] Wichita Falls, Texas, went through the worst drought in its history in 2011 and 2012. For two years in a row, the area saw its average annual rainfall roughly cut in half, decimating the levels in the three reservoirs used for the city’s water supply. Looking ahead, the city realized that if the hot, dry weather continued, they would be completely out of water by 2015. Three years sounds like a long runway, but when it comes to major public infrastructure projects, it might as well be overnight. Between permitting, funding, design, and construction, three years barely gets you to the starting line. So the city started looking for other options. And they realized there was one source of water nearby that was just being wasted - millions of gallons per day just being flushed down the Wichita River. I’m sure you can guess where I’m going with this. It was the effluent from their sewage treatment plant. The city asked the state regulators if they could try something that had never been done before at such a scale: take the discharge pipe from the wastewater treatment plant and run it directly into the purification plant that produces most of the city’s drinking water. And the state said no. So they did some more research and testing and asked again. By then, the situation had become an emergency. This time, the state said yes. And what happened next would completely change the way cities think about water. I’m Grady and this is Practical Engineering. You know what they say, wastewater happens. It wasn’t that long ago that raw sewage was simply routed into rivers, streams, or the ocean to be carried away. Thankfully, environmental regulations put a stop to that, or at least significantly curbed the amount of wastewater being set loose without treatment. Wastewater plants across the world do a pretty good job of removing pollutants these days. In fact, I have a series of videos that go through some of the major processes if you want to dive deeper after this. In most places, the permits that allow these plants to discharge set strict limits on contaminants like organics, suspended solids, nutrients, and bacteria. And in most cases, they’re individualized. The permit limits are based on where the effluent will go, how that water body is used, and how well it can tolerate added nutrients or pollutants. And here’s where you start to see the issue with reusing that water: “clean enough” is a sliding scale. Depending on how water is going to be used or what or who it’s going to interact with, our standards for cleanliness vary. If you have a dog, you probably know this. They should drink clean water, but a few sips of a mud puddle in a dirty street, and they’re usually just fine. For you, that might be a trip to the hospital. Natural systems can tolerate a pretty wide range of water quality, but when it comes to drinking water for humans, it should be VERY clean. So the easiest way to recycle treated wastewater is to use it in ways that don’t involve people. That idea’s been around for a while. A lot of wastewater treatment plants apply effluent to land as a disposal method, avoiding the need for discharge to a natural water body. Water soaks into the ground, kind of like a giant septic system. But that comes with some challenges. It only works if you’ve got a lot of land with no public access, and a way to keep the spray from drifting into neighboring properties. 
Easy at a small scale, but for larger plants, it just isn’t practical engineering. Plus, the only benefits a utility gets from the effluent are some groundwater recharge and maybe a few hay harvests per season. So, why not send the effluent to someone else who can actually put it to beneficial use? If only it were that simple. As soon as a utility starts supplying water to someone else, things get complicated because you lose a lot of control over how the effluent is used. Once it's out of your hands, so to speak, it’s a lot harder to make sure it doesn’t end up somewhere it shouldn’t, like someone’s mouth. So, naturally, the permitting requirements become stricter. Treatment processes get more complicated and expensive. You need regular monitoring, sampling, and laboratory testing. In many places in the world, reclaimed water runs in purple pipes so that someone doesn’t inadvertently connect to the lines thinking they’re potable water. In many cases, you need an agreement in place with the end user, making sure they’re putting up signs, fences, and other means of keeping people from drinking the water. And then you need to plan for emergencies - what to do if a pipe breaks, if the effluent quality falls below the standards, or if a cross-connection is made accidentally. It’s a lot of work - time, effort, and cost - to do it safely and follow the rules. And those costs have to be weighed against the savings that reusing water creates. In places that get a lot of rain or snow, it’s usually not worth it. But in many US states, particularly those in the southwest, this is a major strategy to reduce the demand on fresh water supplies. Think about all the things we use water for where its cleanliness isn’t that important. Irrigation is a big one - crops, pastures, parks, highway landscaping, cemeteries - but that’s not all. Power plants use huge amounts of water for cooling. Street sweeping, dust control. In nearly the entire developed world, we use drinking-quality water to flush toilets! You can see where there might be cases where it makes good sense to reclaim wastewater, and despite all the extra challenges, its use is fairly widespread. One of the first plants was built in 1926 at Grand Canyon Village which supplied reclaimed water to a power plant and for use in steam locomotives. Today, these systems can be massive, with miles and miles of purple pipes run entirely separate from the freshwater piping. I’ve talked about this a bit on the channel before. I used to live near a pair of water towers in San Antonio that were at two different heights above ground. That just didn’t make any sense until I realized they weren’t connected; one of them was for the reclaimed water system that didn’t need as much pressure in the lines. Places like Phoenix, Austin, San Antonio, Orange County, Irvine, and Tampa all have major water reclamation programs. And it’s not just a US thing. Abu Dhabi, Beijing, and Tel Aviv all have infrastructure to make beneficial use of treated municipal wastewater, just to name a few. Because of the extra treatment and requirements, many places put reclaimed water in categories based on how it gets used. The higher the risk of human contact, the tighter the pollutant limits get. For example, if a utility is just selling effluent to farmers, ranchers, or for use in construction, exposure to the public is minimal. Disinfecting the effluent with UV or chlorine may be enough to meet requirements. And often that’s something that can be added pretty simply to an existing plant. 
But many reclaimed water users are things like golf courses, schoolyards, sports fields, and industrial cooling towers, where people are more likely to be exposed. In those cases, you often need a sewage plant specifically designed for the purpose or at least major upgrades to include what the pros call tertiary treatment processes - ways to target pollutants we usually don’t worry about and improve the removal rates of the ones we do. These can include filters to remove suspended solids, chemicals that bind to nutrients, and stronger disinfection to more effectively kill pathogens. This creates a conundrum, though. In many cases, we treat wastewater effluent to higher standards than we normally would in order to reclaim it, but only for nonpotable uses, with strict regulations about human contact. But if it’s not being reclaimed, the quality standards are lower, and we send it downstream. If you know how rivers work, you probably see the inconsistency here. Because in many places, down the river, is the next city with its water purification plant whose intakes, in effect, reclaim that treated sewage from the people upstream. This isn’t theoretical - it’s just the reality of how humans interact with the water cycle. We’ve struggled with the problems it causes for ages. In 1906, Missouri sued Illinois in the Supreme Court when Chicago reversed their river, redirecting its water (and all the city’s sewage) toward the Mississippi River. If you live in Houston, I hate to break it to you, but a big portion of your drinking water comes from the flushes and showers in Dallas. There have been times when wastewater effluent makes up half of the flow in the Trinity River. But the question is: if they can do it, why can’t we? If our wastewater effluent is already being reused by the city downstream to purify into drinking water, why can’t we just keep the effluent for ourselves and do the same thing? And the answer again is complicated. It starts with what’s called an environmental buffer. Natural systems offer time to detect failures, dilute contaminants, and even clean the water a bit—sunlight disinfects, bacteria consume organic matter. That’s the big difference in one city, in effect, reclaiming water from another upstream. There’s nature in between. So a lot of water reclamation systems, called indirect potable reuse, do the same thing: you discharge the effluent into a river, lake, or aquifer, then pull it out again later for purification into drinking water. By then, it’s been diluted and treated somewhat by the natural systems. Direct potable reuse projects skip the buffer and pipe straight from one treatment plant to the next. There’s no margin for error provided by the environmental buffer. So, you have to engineer those same protections into the system: real-time monitoring, alarms, automatic shutdowns, and redundant treatment processes. Then there’s the issue of contaminants of emerging concern: pharmaceuticals, PFAS [P-FAS], personal care products - things that pass through people or households and end up in wastewater in tiny amounts. Individually, they’re in parts per billion or trillion. But when you close the loop and reuse water over and over, those trace compounds can accumulate. Many of these aren’t regulated because they’ve never reached concentrations high enough to cause concern, or there just isn’t enough knowledge about their effects yet. That’s slowly changing, and it presents a big challenge for reuse projects. 
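To put rough numbers on that accumulation idea, here's a minimal mass-balance sketch in Python. Every value in it is made up for illustration - the dose, the removal rates, and the reuse fractions aren't from any real utility - but it captures the pattern: the more of the loop you close, and the less effective the treatment is against a particular trace compound, the higher that compound's steady-state concentration creeps.

# Toy mass balance for a trace contaminant in a water reuse loop.
# All numbers are illustrative, not from any real utility or regulation.

def steady_state(dose_ugL, removal, reuse_fraction, source_ugL=0.0):
    """Closed-form steady-state tap concentration in micrograms per liter.

    dose_ugL        how much each pass through homes and businesses adds
    removal         fraction of the compound the treatment train removes (0-1)
    reuse_fraction  fraction of the supply that comes from reclaimed effluent (0-1)
    source_ugL      background concentration in the fresh source
    """
    r, e = reuse_fraction, removal
    return (r * (1 - e) * dose_ugL + (1 - r) * source_ugL) / (1 - r * (1 - e))

if __name__ == "__main__":
    dose = 1.0  # micrograms per liter added per use cycle (made-up value)
    for removal in (0.50, 0.90, 0.99):
        for reuse in (0.1, 0.5, 0.9):
            c = steady_state(dose, removal, reuse)
            print(f"removal {removal:.0%}, reuse {reuse:.0%}: "
                  f"tap concentration ~ {c:.3f} ug/L")

Run it and the trend is clear: the compound builds up fastest when removal is modest and the loop is mostly closed, which is exactly why these trace contaminants matter more for reuse projects than for a conventional once-through system.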
These contaminants can be dealt with at the source by regulating consumer products, encouraging proper disposal of pharmaceuticals (instead of flushing them), and imposing pretreatment requirements for industries. It can also happen at the treatment plant with advanced technologies like reverse osmosis, activated carbon, advanced oxidation, and bio-reactors that break down micro-contaminants. Either way, it adds cost and complexity to a reuse program. But really, the biggest problem with wastewater reuse isn’t technical - it’s psychological. The so-called “yuck factor” is real. People don’t want to drink sewage. Indirect reuse projects have a big benefit here. With some nature in between, it’s not just treated wastewater; it’s a natural source of water with treated wastewater in it. It’s kind of a story we tell ourselves, but we lose the benefit of that with direct reuse: Knowing your water came from a toilet—even if it’s been purified beyond drinking water standards—makes people uneasy. You might not think about it, but turning the tap on, putting that water in a glass, and taking a drink is an enormous act of trust. Most of us don’t understand water treatment and how it happens at a city scale. So that trust that it’s safe to drink largely comes from seeing other people do it and past experience of doing it over and over and not getting sick. The issue is that, when you add one bit of knowledge to that relative void of understanding - this water came directly from sewage - it throws that trust off balance. It forces you to rely not on past experience but on the people and processes in place, most of which you don’t understand deeply, and generally none of which you can actually see. It’s not as simple as just revulsion. It shakes up your entire belief system. And there’s no engineering fix for that. Especially for direct potable reuse, public trust is critical. So on top of the infrastructure, these programs also involve major public awareness campaigns. Utilities have to put themselves out there, gather feedback, respond to questions, be empathetic to a community’s values, and try to help people understand how we ensure water quality, no matter what the source is. But also, like I said, a lot of that trust comes from past experience. Not everyone can be an environmental engineer or licensed treatment plant operator. And let’s be honest - utilities can’t reach everyone. How many public meetings about water treatment have you ever attended? So, in many places, that trust is just going to have to be built by doing it right, doing it well, and doing it for a long time. But, someone has to be first. In the U.S., at least on the city scale, that drinking water guinea pig was Wichita Falls. They launched a massive outreach campaign, invited experts for tours, and worked to build public support. But at the end of the day, they didn’t really have a choice. The drought really was that severe. They spent nearly four years under intense water restrictions. Usage dropped to a third of normal demand, but it still wasn’t enough. So, in collaboration with state regulators, they designed an emergency direct potable reuse system. They literally helped write the rules as they went, since no one had ever done it before. After two months of testing and verification, they turned on the system in July 2014. It made national headlines. The project ran for exactly one year. Then, in 2015, a massive flood ended the drought and filled the reservoirs in just three weeks.
The emergency system was always meant to be temporary. Water essentially went through three treatment plants: the wastewater plant, a reverse osmosis plant, and then the regular water purification plant. That’s a lot of treatment, which is a lot of expense, but they needed to have the failsafe and redundancy to get the state on board with the project. The pipe connecting the two plants was above ground and later repurposed for the city’s indirect potable reuse system, which is still in use today. In the end, they reclaimed nearly two billion gallons of wastewater as drinking water. And they did it with 100% compliance with the standards. But more importantly, they showed that it could be done, essentially unlocking a new branch on the skill tree of engineering that other cities can emulate and build on.
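As a footnote on all that redundancy: one way regulators keep score on a treatment train like this is with “log removal credits” - each barrier gets credit for knocking a pathogen down by some power of ten, and the credits add up across the train. Here's a rough sketch of that bookkeeping in Python. The credit values and the 12-log target below are illustrative placeholders, not the actual numbers from Wichita Falls' permit or any state rule.

# Rough sketch of "log removal credit" bookkeeping across treatment barriers.
# Credits and the target are illustrative, not from any actual permit.

train = {
    "wastewater treatment plant": 2.0,   # hypothetical credits, in log10 units
    "reverse osmosis": 2.0,
    "UV / advanced oxidation": 6.0,
    "conventional purification plant": 2.0,
}

target_logs = 12.0  # e.g., a 12-log (one-in-a-trillion) pathogen target

total = sum(train.values())
print(f"Total credit: {total:.1f} logs, "
      f"leaving about 1 in {10 ** total:,.0f} organisms")
print("Meets target" if total >= target_logs else "Needs another barrier")

The point isn't the specific numbers - it's that no single process has to be perfect. Each barrier only has to do its share, and the monitoring is there to prove that every barrier is actually doing it.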

Why are Smokestacks So Tall?

[Note that this article is a transcript of the video embedded above.] “The big black stacks of the Illium Works of the Federal Apparatus Corporation spewed acid fumes and soot over the hundreds of men and women who were lined up before the red-brick employment office.” That’s the first line of one of my favorite short stories, written by Kurt Vonnegut in 1955. It paints a picture of a dystopian future that, thankfully, didn’t really come to be, in part because of those stacks. In some ways, air pollution is kind of a part of life. I’d love to live in a world where the systems, materials and processes that make my life possible didn’t come with any emissions, but it’s just not the case... From the time that humans discovered fire, we’ve been methodically calculating the benefits of warmth, comfort, and cooking against the disadvantages of carbon monoxide exposure and particulate matter less than 2.5 microns in diameter… Maybe not in that exact framework, but basically, since the dawn of humanity, we’ve had to deal with smoke one way or another. Since we can’t accomplish much without putting unwanted stuff into the air, the next best thing is to manage how and where it happens to try and minimize its impact on public health. Of course, any time you have a balancing act involving technical issues, the engineers get involved, not so much to help decide where to draw the line, but to develop systems that can stay below it. And that’s where the smokestack comes in. Its function probably seems obvious; you might have a chimney in your house that does a similar job. But I want to give you a peek behind the curtain into the Illium Works of the Federal Apparatus Corporation of today and show you what goes into engineering one of these stacks at a large industrial facility. I’m Grady, and this is Practical Engineering. We put a lot of bad stuff in the air, and in a lot of different ways. There are roughly 200 regulated hazardous air pollutants in the United States, many with names I can barely pronounce. In many cases, the industries that would release these contaminants are required to deal with them at the source. A wide range of control technologies are put into place to clean dangerous pollutants from the air before it’s released into the environment. One example is coal-fired power plants. Coal, in particular, releases a plethora of pollutants when combusted, so, in many countries, modern plants are required to install control systems. Catalytic reactors remove nitrogen oxides. Electrostatic precipitators collect particulates. Scrubbers use lime (the mineral, not the fruit) to strip away sulfur dioxide. And I could go on. In some cases, emission control systems can represent a significant proportion of the costs involved in building and operating a plant. But these primary emission controls aren’t always feasible for every pollutant, at least not for 100 percent removal. There’s a very old saying that “the solution to pollution is dilution.” It’s not really true on a global scale. Case in point: There’s no way to dilute the concentration of carbon dioxide in the atmosphere, or rather, it’s already as dilute as it’s going to get. But, it can be true on a local scale. Many pollutants that affect human health and the environment are short-lived; they chemically react or decompose in the atmosphere over time instead of accumulating indefinitely. And, for a lot of chemicals, there are concentration thresholds below which the consequences on human health are negligible.
In those cases, dilution, or really dispersion, is a sound strategy to reduce their negative impacts, and so, in some cases, that’s what we do, particularly at major point sources like factories and power plants. One of the tricks to dispersion is that many plumes are naturally buoyant. Naturally, I’m going to use my pizza oven to demonstrate this. Not all, but most pollutants we care about are a result of combustion; burning stuff up. So the plume is usually hot. We know hot air is less dense, so it naturally rises. And the hotter it is, the faster that happens. You can see when I first start the fire, there’s not much air movement. But as the fire gets hotter in the oven, the plume speeds up, ultimately rising higher into the air. That’s the whole goal: get the plume high above populated areas where the pollutants can be dispersed to a minimally-harmful concentration. It sounds like a simple solution - just run our boilers and furnaces super hot to get enough buoyancy for the combustion products to disperse. The problem with the solution is that the whole reason we combust things is usually to recover the heat. So if you’re sending a lot of that heat out of the system, just because it makes the plume disperse better, you’re losing thermodynamic efficiency. It’s wasteful. That’s where the stack comes in. Let me put mine on and show you what I mean. I took some readings with the anemometer with the stack on and off. The airspeed with the stack on was around double what it was with it off: about two meters per second compared with one. But it’s a little tougher to understand why. It’s intuitive that as you move higher in a column of fluid, the pressure goes down (since there’s less weight of the fluid above). The deeper you dive in a pool, the more pressure you feel. The higher you fly in a plane or climb a mountain, the lower the pressure. The slope of that line is proportional to a fluid’s density. You don’t feel much of a pressure difference climbing a set of stairs because air isn’t very dense. If you travel the same distance in water, you’ll definitely notice the difference. So let’s look at two columns of fluid. One is the ambient air and the other is the air inside a stack. Since it’s hotter, the air inside the stack is less dense. Both columns start at the same pressure at the bottom, but the higher you go, the more the pressure diverges. It’s kind of like deep sea diving in reverse. In water, the deeper you go into the dense water, the greater the pressure you feel. In a stack, the higher you are in a column of hot air, the more buoyant you feel compared to the outside air. This is the genius of a smoke stack. It creates this difference in pressure between the inside and outside that drives greater airflow for a given temperature. Here’s the basic equation for a stack effect. I like to look at equations like this divided into what we can control and what we can’t. We don’t get to adjust the atmospheric pressure, the outside temperature, and this is just a constant. But you can see, with a stack, an engineer now has two knobs to turn: the temperature of the gas inside and the height of the stack. I did my best to keep the temperature constant in my pizza oven and took some airspeed readings. First with no stack. Then with the stock stack. Then with a megastack. By the way, this melted my anemometer; should have seen that coming. Thankfully, I got the measurements before it melted.
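Before we get to the megastack results, here's what that stack-effect equation looks like with numbers plugged in - a minimal sketch in Python using the idealized textbook form (no friction, no cooling along the stack), so treat the outputs as upper bounds. The heights and temperatures at the bottom are just my rough guesses for the pizza oven, not careful measurements.

import math

# Idealized stack effect: no friction losses, no heat loss up the stack.
# Draft pressure: dP = g * h * P * (M/R) * (1/T_out - 1/T_in), in pascals.

G = 9.81         # gravity, m/s^2
M_AIR = 0.02896  # molar mass of air, kg/mol
R = 8.314        # universal gas constant, J/(mol*K)

def draft_pressure(height_m, t_in_c, t_out_c, pressure_pa=101_325):
    """Pressure difference driving flow up the stack, in Pa."""
    t_in, t_out = t_in_c + 273.15, t_out_c + 273.15
    return G * height_m * pressure_pa * (M_AIR / R) * (1 / t_out - 1 / t_in)

def ideal_velocity(height_m, t_in_c, t_out_c):
    """Frictionless flue velocity from the same assumptions, m/s."""
    t_in, t_out = t_in_c + 273.15, t_out_c + 273.15
    return math.sqrt(2 * G * height_m * (t_in - t_out) / t_out)

if __name__ == "__main__":
    # Rough pizza-oven guesses: about 250 C inside, 25 C outside.
    for label, h in (("stock stack", 0.5), ("megastack", 1.5), ("super-mega", 2.5)):
        print(f"{label:>12} ({h} m): draft ~ {draft_pressure(h, 250, 25):.1f} Pa, "
              f"ideal velocity ~ {ideal_velocity(h, 250, 25):.1f} m/s")

Both knobs show up right in the function arguments: make the gas hotter or the stack taller and the draft goes up. The real measurements come in lower than these ideal numbers, which is exactly the friction and cooling we'll get to in a minute.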
My megastack nearly doubled the airspeed again at around three-and-a-half meters per second versus the two with just the stack that came with the oven. There’s something really satisfying about this stack effect to me. No moving parts or fancy machinery. Just put a longer pipe and you’ve fundamentally changed the physics of the whole situation. And it’s a really important tool in the environmental engineer’s toolbox to increase airflow upward, allowing contaminants to flow higher into the atmosphere where they can disperse. But this is not particularly revolutionary… unless you’re talking about the Industrial Revolution. When you look at all the pictures of the factories in the 19th century, those stacks weren’t there to improve air quality, if you can believe it. The increased airflow generated by a stack just created more efficient combustion for the boilers and furnaces. Any benefits to air quality in the cities were secondary. With the advent of diesel and electric motors, we could use forced drafts, reducing the need for a tall stack to increase airflow. That was kind of the decline of the forests of industrial chimneys that marked the landscape in the 19th century. But they’re obviously not all gone, because that secondary benefit of air quality turned into the primary benefit as environmental rules about air pollution became stricter. Of course, there are some practical limits that aren’t taken into account by that equation I showed. The plume cools down as it moves up the stack to the outside, so its density isn’t constant all the way up. I let my fire die down a bit so it wouldn’t melt the thermometer (learned my lesson), and then took readings inside the oven and at the top of the stack. You can see my pizza oven flue gas is around 210 degrees at the top of the mega-stack, but it’s roughly 250 inside the oven. After the success of the mega stack on my pizza oven, I tried the super-mega stack with not much improvement in airflow: about 4 meters per second. The warm air just got too cool by the time it reached the top. And I suspect that frictional drag in the longer pipe also contributed to that as well. So, really, depending on how insulating your stack is, our graph of height versus pressure actually ends up looking like this. And this can be its own engineering challenge. Maybe you’ve gotten back drafts in your fireplace at home because the fire wasn’t big or hot enough to create that large difference in pressure. You can see there are a lot of factors at play in designing these structures, but so far, all we’ve done is get the air moving faster. But that’s not the end goal. The purpose is to reduce the concentration of pollutants that we’re exposed to. So engineers also have to consider what happens to the plume once it leaves the stack, and that’s where things really get complicated. In the US, we have National Ambient Air Quality Standards that regulate six so-called “criteria” pollutants that are relatively widespread: carbon monoxide, lead, nitrogen dioxide, ozone, particulates, and sulfur dioxide. We have hard limits on all these compounds with the intention that they are met at all times, in all locations, under all conditions. Unfortunately, that’s not always the case. You can go on EPA’s website and look at the so-called “non-attainment” areas for the various pollutants. But we do strive to meet the standards through a list of measures that is too long to go into here. And that is not an easy thing to do. 
Not every source of pollution comes out of a big stationary smokestack where it’s easy to measure and control. Cars, buses, planes, trucks, trains, and even rockets create lots of contaminants that vary by location, season, and time of day. And there are natural processes that contribute as well. Forests and soil microbes release volatile organic compounds that can lead to ozone formation. Volcanic eruptions and wildfires release carbon monoxide and sulfur dioxide. Even dust storms put particulates in the air that can travel across continents. And hopefully you’re seeing the challenge of designing a smoke stack. The primary controls like scrubbers and precipitators get most of the pollutants out, and hopefully all of the ones that can’t be dispersed. But what’s left over and released has to avoid pushing concentrations above the standards. That design has to work within the very complicated and varying context of air chemistry and atmospheric conditions that a designer has no control over. Let me show you a demo. I have a little fog generator set up in my garage with a small fan simulating the wind. This isn’t a great example because the airflow from the fan is pretty turbulent compared to natural winds. You occasionally get some fog at the surface, but you can see my plume mainly stays above the surface, dispersing as it moves with the wind. But watch what happens when I put a building downstream. The structure changes the airflow, creating a downwash effect and pulling my plume with it. Much more frequently you see the fog at the ground level downstream. And this is just a tiny example of how complex the behavior of these plumes can be. Luckily, there’s a whole field of engineering to characterize it. There are really just two major transport processes for air pollution. Advection describes how contaminants are carried along by the wind. Diffusion describes how those contaminants spread out through turbulence. Gravity also affects air pollution, but it doesn’t have a significant effect except on heavier-than-air particulates. With some math and simplifications of those two processes, you can do a reasonable job predicting the concentration of any pollutant at any point in space as it moves and disperses through the air. Here’s the basic equation for that, and if you’ll join me for the next 2 hours, we’ll derive this and learn the meaning of each term… Actually, it might take longer than that, so let’s just look at a graphic. You can see that as the plume gets carried along by the wind, it spreads out in what’s basically a bell curve, or gaussian distribution, in the planes perpendicular to the wind direction. But even that is a bit too simplified to make any good decisions with, especially when the consequences of getting it wrong are to public health. A big reason for that is atmospheric stability. And this can make things even more complicated, but I want to explain the basics, because the effect on plumes of gas can be really dramatic. You probably know that air expands as it moves upward; there’s less pressure as you go up because there is less air above you. And as any gas expands, it cools down. So there’s this relationship between height and temperature we call the adiabatic lapse rate. It’s about 10 degrees Celsius for every kilometer up or about 28 Fahrenheit for every mile up. But the actual atmosphere doesn’t always follow this relationship. For example, rising air parcels can cool more slowly than the surrounding air. 
This makes them warmer and less dense, so they keep rising, promoting vertical motion in a positive feedback loop called atmospheric instability. You can even get a temperature inversion where you have cooler air below warmer air, something that can happen in the early morning when the ground is cold. And as the environmental lapse rate varies from the adiabatic lapse rate, the plumes from stacks change. In stable conditions, you usually get a coning plume, similar to what our gaussian distribution from before predicts. In unstable conditions, you get a lot of mixing, which leads to a looping plume. And things really get weird for temperature inversions because they basically act like lids for vertical movement. You can get a fanning plume that rises to a point, but then only spreads horizontally. You can also get a trapping plume, where the air gets stuck between two inversions. You can have a lofting plume, where the air is above the inversion with stable conditions below and unstable conditions above. And worst of all, you can have a fumigating plume when there are unstable conditions below an inversion, trapping and mixing the plume toward the ground surface. And if you pay attention to smokestacks, fires, and other types of emissions, you can identify these different types of plumes pretty easily. Hopefully you’re seeing now how much goes into this. Engineers have to keep track of the advection and diffusion, wind speed and direction, atmospheric stability, the effects of terrain and buildings on all those factors, plus the pre-existing concentrations of all the criteria pollutants from other sources, which vary in time and place. All that to demonstrate that your new source of air pollution is not going to push the concentrations at any place, at any time, under any conditions, beyond what the standards allow. That’s a tall order, even for someone who loves gaussian distributions. And often the answer to that tall order is an even taller smokestack. But to make sure, we use software. The EPA has developed models that can take all these factors into account to simulate, essentially, what would happen if you put a new source of pollution into the world and at what height. So why are smokestacks so tall? I hope you’ll agree with me that it turns out to be a pretty complicated question. And it’s important, right? These stacks are expensive to build and maintain. Those costs trickle down to us through the costs of the products and services we buy. They have a generally negative visual impact on the landscape. And they have a lot of other engineering challenges too, like resonance in the wind. And on the other hand, we have public health, arguably one of the most critical design criteria that can exist for an engineer. It’s really important to get this right. I think our air quality regulations do a lot to make sure we strike a good balance here. There are even rules limiting how much credit you can get for building a stack higher for greater dispersion to make sure that we’re not using excessively tall stacks in lieu of more effective, but often more expensive, emission controls and strategies. In a perfect world, none of the materials or industrial processes that we rely on would generate concentrated plumes of hazardous gases. We don’t live in that perfect world, but we are pretty fortunate that, at least in many places on Earth, air quality is something we don’t have to think too much about. 
And to thank for it, we have a relatively small industry of environmental professionals who do think about it, a whole lot. You know, for a lot of people, this is their whole career; what they ponder from 9-5 every day. Something most of us would rather keep out of mind, they face it head-on, developing engineering theories, professional consensus, sensible regulations, modeling software, and more - just so we can breathe easy.
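And for anyone who wants to poke at that bell curve from earlier, here's a bare-bones version of the Gaussian plume calculation in Python. The dispersion coefficients are simple power-law placeholders standing in for the published Pasquill-Gifford curves, so this is a toy for building intuition, not a regulatory tool - real permitting work relies on EPA models like AERMOD that fold in terrain, buildings, and weather data.

import math

def gaussian_plume(x, y, z, q_gs, u_ms, h_stack, sigma_y, sigma_z):
    """Steady-state Gaussian plume concentration in g/m^3.

    x, y, z   receptor location: downwind, crosswind, height above ground (m)
    q_gs      emission rate (g/s)
    u_ms      wind speed at release height (m/s)
    h_stack   effective release height (m)
    sigma_y, sigma_z   dispersion coefficients at distance x (m)
    """
    crosswind = math.exp(-y**2 / (2 * sigma_y**2))
    # Two vertical terms: the plume itself plus its "reflection" off the ground.
    vertical = (math.exp(-(z - h_stack)**2 / (2 * sigma_z**2)) +
                math.exp(-(z + h_stack)**2 / (2 * sigma_z**2)))
    return q_gs / (2 * math.pi * u_ms * sigma_y * sigma_z) * crosswind * vertical

def sigmas(x):
    """Placeholder power-law spread with distance (illustrative only)."""
    return 0.08 * x**0.9, 0.06 * x**0.85

if __name__ == "__main__":
    for x in (200, 500, 1000, 2000, 5000):  # meters downwind
        sy, sz = sigmas(x)
        c = gaussian_plume(x, y=0, z=1.5, q_gs=100, u_ms=5, h_stack=50,
                           sigma_y=sy, sigma_z=sz)
        print(f"{x:>5} m downwind: ~{c * 1e6:.0f} ug/m^3 at breathing height")

Even this stripped-down version shows the behavior described above: close to the stack, the plume hasn't reached the ground yet, so the concentration at breathing height is nearly zero; it peaks somewhere downwind and then falls off as the plume keeps spreading. Raise the stack height in the arguments and the peak drops and moves farther away - which is the whole argument for a taller stack.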

When Abandoned Mines Collapse

[Note that this article is a transcript of the video embedded above.] In December of 2024, a huge sinkhole opened up on I-80 near Wharton, New Jersey, creating massive traffic delays as crews worked to figure out what happened and get it fixed. Since then, it happened again in February 2025 and then again in March. Each time, the highway had to be shut down, creating a nightmare for commuters who had to find alternate routes. And it’s a nightmare for the DOT, too, trying to make sure this highway is safe to drive on despite it literally collapsing into the earth. From what we know so far, this is not a natural phenomenon, but one that’s human-made. It looks like all these issues were set in motion more than a century ago when the area had numerous underground iron mines. This is a really complex issue that causes problems around the world, and I built a little model mine in my garage to show you why it’s such a big deal. I’m Grady and this is Practical Engineering. We’ve been extracting material and minerals from the earth since way before anyone was writing things down. It’s probably safe to say that things started at the surface. You notice something shiny or differently colored on the side of a hill or cliff and you take it out. Over time, we built up knowledge about what materials were valuable, where they existed, and how to efficiently extract them from the earth. But, of course, there’s only so much earth at the surface. Eventually, you have to start digging. Maybe you follow a vein of gold, silver, copper, coal or sulfur down below the surface. And things start to get more complicated because now you’re in a hole. And holes are kind of dangerous. They’re dark, they fill with water, they can collapse, and they collect dangerous gases. So, in many cases, even today, it makes sense to remove the overburden - the soil and rock above the mineral or material you’re after. Mining on the surface has a lot of advantages when it comes to cost and safety. But there are situations where surface mining isn’t practical. Removing overburden is expensive, and it gets more expensive the deeper you go. It also has environmental impacts like habitat destruction and pollution of air and water. So, as technology, safety, and our understanding of soil and rock mechanics grew, so did our ability to go straight to the source and extract minerals underground. One of the major materials that drove the move to underground mining was coal. It’s usually found in horizontal formations called seams that formed when vast volumes of Paleozoic plants were buried and then crushed and heated over geologic time. At the start of the Industrial Revolution, coal quickly became a primary source of energy for steam engines, steel refining, and electricity generation. Those coal seams vary in thickness, and they vary in depth below the surface too, so many early coal mines were underground. In the early days of underground mining, there was not a lot of foresight. Some might argue that’s still true, but it was a lot more so a couple hundred years ago. Coal mining companies weren’t creating detailed maps of their mines, and even if they did, there was no central archive to send them to. And they just weren’t that concerned about the long-term stability of the mines once the resources had been extracted. All that mattered was getting coal out of the ground. Mining companies came and went, dissolved or were acquired, and over time, a lot of information about where mines existed and their condition was just lost.
And even though many mines were in rural areas, far away from major population centers, some weren’t, and some of those rural areas became major population centers without any knowledge about what had happened underneath them decades ago. An issue that confounds the problem of mine subsidence is that in a lot of places, property ownership is split into two pieces: surface rights and mineral rights. And those rights can be owned by different people. So if you’re a homeowner, you may own the surface rights to your land, while a company owns the right to drill or mine under your property. That doesn’t give them the right to damage your property, but it does make things more complicated since you don’t always have a say in what’s happening beneath the surface. There are myriad ways to build and operate underground mines, but especially for soft rock mining, like coal, the predominant method for decades was called “room and pillar”. This is exactly what it sounds like. You excavate the ore, bringing material to the surface. But you leave columns to support the roof. The size, shape, and spacing of columns are dictated by the strength of the material. This is really important because a mine like this has major fixed costs: exploration, planning, access, ventilation, and haulage. It’s important to extract as much as possible, and every column you leave supporting the roof is valuable material you can’t recover. So, there’s often not a lot of margin in these pillars. They’re as small as the company thought they could get away with before they were finished mining. I built a little room and pillar mine in my garage. I’ll be the first to admit that this little model is not a rigorous reproduction of an actual geologic formation. My coal seam is just made of cardboard, and the bright colors are just for fun. But, I’m hoping this can help illustrate the challenges associated with this type of mine. I’ve got a little rainfall simulator set up, because water plays a big role in these processes. This first rainfall isn’t necessarily representative of real life, since it’s really just compacting the loose sand. But it does give a nice image of how subsidence works in general. You can see the surface of the ground sinking as the sand compacts into place. But you can also see that as the water reaches the mine, things start to deform. In a real mine, this is true, too. Stresses in the surrounding soil and rock redistribute over time from long-term movements, relaxation of stresses that were already built up in the materials before extraction, and from water. I ran this model for an entire day, turning the rainfall on and off to simulate a somewhat natural progression of time in the subsurface. By the end of the day, the mine hadn’t collapsed, but it was looking a great deal less stable than when it started. And that’s one big thing you can learn from this model - in a lot of cases, these issues aren’t linearly progressive. They can happen in fits and starts, like this small leak in the roof of the mine. You get a little bit of erosion of soil, but eventually, enough sand built up that it kind of healed itself, and, for a while, you can’t see any evidence of any of it at the surface. The geology essentially absorbed the sinkhole by redistributing materials and stresses so there’s no obvious sign at the surface that anything wayward is happening below. In the US, there were very few regulations on mining until the late 19th century, and even those focused primarily on the safety of the workers. 
There just wasn’t that much concern about long-term stability. So as soon as material was extracted, mines were abandoned. The already iffy columns were just left alone, and no one wasted resources on additional supports or shoring. They just walked away. One thing that happens when mines are abandoned is that they flood. Without the need to work inside, the companies stop pumping out the water. I can simulate this on my model by just plugging up the drain. In a real soft rock mine, there can be minerals like gypsum and limestone that are soluble in water. Repeated cycles of drying and wetting can slowly dissolve them away. Water can also soften certain materials and soils, reducing their mechanical strength to withstand heavy loads, just like my cardboard model. And then, of course, water simply causes erosion. It can literally carry soil particles with it, again, causing voids and redistribution of stresses in the subsurface. This is footage from an old video I did demonstrating how sinkholes can form. The ways that mine subsidence propagates to the surface can vary a lot, based on the geology and depth of the mine. For collapses near the surface, you often see well-defined sinkholes where the soil directly above the mine simply falls into the void. And this is usually a sudden phenomenon. I flooded and drained my little mine a few times to demonstrate this. Accidentally flooded my little town a few times in the process, but that’s okay. You can see in my model, after flooding the mine and draining it down, there was a partial failure in the roof and a pile of sand toward the back caved in. And on the surface, you see just a small sinkhole. In 2024, a huge hole opened right in the center of a sports complex in Alton, Illinois. It was quickly determined that part of an active underground aggregate mine below the park had collapsed, leading to the sinkhole. It’s pretty characteristic of these issues. You don’t know where they’re going to happen, and you don’t know how the surface soils are going to react to what’s happening underneath. Subsidence can also look like a generalized and broader sinking and settling over a large area. You can see in my model that most of the surface still looks pretty flat, despite the fact that it started here and is now down here as the mine supports have softened and deformed. This can also be the case when mines are deeper in the ground. Even if the collapse is sudden, the subsidence is less dramatic because the geology can shift and move to redistribute the stresses. And the subsidence happens more slowly as the overburden settles into a new configuration. In all cases, the subsidence can extend laterally from the mine, so impacted areas aren’t always directly above. The deeper the mine, the wider the subsidence can be. I ran my little mine demo for quite a few cycles of wet and dry just to see how bad things would get. And I admit I used a little percussion at the end to speed things along. Let’s say this is a simulation of an earthquake on an abandoned mine. You can see that by the end of it, this thing has basically collapsed. And take a look at the surface now. You have some defined sinkholes for sure. And you also have just generalized subsidence - sloped and wavy areas that were once level. And you can imagine the problems this can cause. Structures can easily be damaged by differential settlement. Pipes break. Foundations shift and crack.
Even water can drain differently than before, causing ponding and even changing the course of rivers and streams for large areas. And even if there are no structures, subsidence can ruin high-value farm land, mess up roads, disrupt habitat, and more. In many cases, the company that caused all the damage is long gone. Essentially they set a ticking time bomb deep below the ground with no one knowing if or when it would go off. There’s no one to hold accountable for it, and there’s very little recourse for property owners. Typical property insurance specifically excludes damage from mine subsidence. So, in some places where this is a real threat, government-subsidized insurance programs have been put in place. Eight states in the US, those where coal mining was most extensive, have insurance pools set up. In a few of those states, it is a requirement in order to own property. The federal government in the US also collects a fee from coal mines that goes into a fund that helps cover reclamation costs of mines abandoned before 1977 when the law went into effect. That federal mining act also required modern mines to use methods to prevent subsidence, or control its effects, because this isn’t just a problem with historic abandoned mines. Some modern underground soft rock mining doesn’t use the room and pillar method but instead a process called longwall mining. Like everything in mining, there are multiple ways to do it. But here’s the basic method: Hydraulic jacks support the roof of the mine in a long line. A machine called a shearer travels along the face of the seam with cutting drums. The cut coal falls onto a conveyor and is transported to the surface. The roof supports move forward into the newly created cavity, intentionally allowing the roof behind them to collapse. It’s an incredibly efficient form of mining, and you get to take the whole seam, rather than leaving pillars behind to support the roof. But, obviously, in this method, subsidence at the surface is practically inevitable. Minimizing the harm that subsidence creates starts just by predicting its extent and magnitude. And, just looking at my model, I think you can guess that this isn’t a very easy problem to solve. Engineers use a mix of empirical information, like data from similar past mining operations, geotechnical data, simplified relationships, and in some cases detailed numerical modeling that accounts for geologic and water movement over time. But you don’t just have to predict it. You also have to measure it to see if your predictions were right. So mining companies use instruments like inclinometers and extensometers above underground mines to track how they affect the surface. I have a whole video about that kind of instrumentation if you want to learn more after this. The last part of that is reclamation - to repair or mitigate the damage that’s been done. And this can vary so much depending on where the mine is, what’s above it, and how much subsidence occurs. It can be as simple as filling and grading land that has subsided all the way to extensive structural retrofits to buildings above a mine before extraction even starts. Sinkholes are often repaired by backfilling with layers of different-sized materials, from large at the bottom to small at top. That creates a filter to keep soil from continuing to erode downward into the void. Larger voids can be filled with grout or even polyurethane foam to stabilize the ground above, reducing the chance for a future collapse. 
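To give a sense of the back-of-the-envelope math behind those thin pillar margins mentioned earlier, here's a simple sketch of the classic tributary-area check for a room-and-pillar layout, using the old empirical Obert-Duvall strength formula. Every input at the bottom is a made-up example - not data from the New Jersey or Illinois sites - and real designs lean on much more than this one calculation.

# Back-of-the-envelope pillar check using tributary-area theory.
# All inputs are made-up examples, not from any real mine.

def pillar_stress_mpa(depth_m, unit_weight_knm3, pillar_w_m, opening_w_m):
    """Average vertical stress on a square pillar (MPa), tributary-area method."""
    overburden_kpa = unit_weight_knm3 * depth_m
    tributary_ratio = ((pillar_w_m + opening_w_m) / pillar_w_m) ** 2
    return overburden_kpa * tributary_ratio / 1000.0

def pillar_strength_mpa(cube_strength_mpa, pillar_w_m, pillar_h_m):
    """Empirical Obert-Duvall estimate of pillar strength (MPa)."""
    return cube_strength_mpa * (0.778 + 0.222 * pillar_w_m / pillar_h_m)

if __name__ == "__main__":
    depth = 120.0         # m of overburden
    gamma = 25.0          # kN/m^3, assumed unit weight of the overburden
    seam_height = 2.0     # m, also the pillar height
    cube_strength = 20.0  # MPa, assumed strength of the coal

    for pillar_w in (6.0, 9.0, 12.0):  # square pillars, 6 m rooms between them
        opening_w = 6.0
        stress = pillar_stress_mpa(depth, gamma, pillar_w, opening_w)
        strength = pillar_strength_mpa(cube_strength, pillar_w, seam_height)
        extraction = 1 - (pillar_w / (pillar_w + opening_w)) ** 2
        print(f"{pillar_w:>4.0f} m pillars: extraction {extraction:.0%}, "
              f"stress {stress:.1f} MPa, strength {strength:.1f} MPa, "
              f"factor of safety {strength / stress:.2f}")

The tradeoff shows up immediately: shrink the pillars and the extraction ratio climbs, but the factor of safety falls - and that's before water, time, and stress redistribution start eating into whatever margin was left.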
I know coal - and mining in general - can be a sensitive topic. Most of us don’t have a lot of exposure to everything that goes into obtaining the raw resources that make modern life possible. And the things we do see and hear are usually bad things like negative environmental impacts or subsidence. But I really think the story of subsidence isn’t just one of “mining is bad” but really “mining used to be bad, and now it’s a lot better, but there are still challenges to overcome.” I guess that’s the story of so many things in engineering - addressing the difficulties we used to just ignore. And this video isn’t meant to fearmonger. This is a real issue that causes real damages today, but it’s also an issue that a lot of people put a great deal of thought, effort, and ultimately resources into so that we can strike a balance between protection against damage to property and the environment and obtaining the resources that we all depend on.

When Kitty Litter Caused a Nuclear Catastrophe

[Note that this article is a transcript of the video embedded above.] Late in the night of Valentine’s Day 2014, air monitors at an underground nuclear waste repository outside Carlsbad, New Mexico, detected the release of radioactive elements, including americium and plutonium, into the environment. Ventilation fans automatically switched on to exhaust contaminated air up through a shaft, through filters, and out to the environment above ground. When filters were checked the following morning, technicians found that they contained transuranic materials, highly radioactive particles that are not naturally found on Earth. In other words, a container of nuclear waste in the repository had been breached. The site was shut down and employees sent home, but it would be more than a year before the bizarre cause of the incident was released. I’m Grady, and this is Practical Engineering. The dangers of the development of nuclear weapons aren’t limited to mushroom clouds and doomsday scenarios. The process of creating the exotic, transuranic materials necessary to build thermonuclear weapons creates a lot of waste, which itself is uniquely hazardous. Clothes, tools, and materials used in the process may stay dangerously radioactive for thousands of years. So, a huge part of working with nuclear materials is planning how to manage waste. I try not to make predictions about the future, but I think it’s safe to say that the world will probably be a bit different in 10,000 years. More likely, it will be unimaginably different. So, ethical disposal of nuclear waste means not only protecting ourselves but also protecting whoever is here long after we are ancient memories or even forgotten altogether. It’s an engineering challenge pretty much unlike any other, and it demands some creative solutions. The Waste Isolation Pilot Plant, or WIPP, was built in the 1980s in the desert outside Carlsbad, New Mexico, a site selected for a very specific reason: salt. One of the most critical jobs for long-term permanent storage is to keep radioactive waste from entering groundwater and dispersing into the environment. So, WIPP was built inside an enormous and geologically stable formation of salt, roughly 2000 feet or 600 meters below the surface. The presence of ancient salt is an indication that groundwater doesn’t reach this area since the water would dissolve it. And the salt has another beneficial behavior: it’s mobile. Over time, the walls and ceilings of mined-out salt tend to act in a plastic manner, slowly creeping inwards to fill the void. This is ideal in the long term because it will ultimately entomb the waste at WIPP in a permanent manner. It does make things more complicated in the meantime, though, since they have to constantly work to keep the underground open during operation. This process, called “ground control,” involves techniques like drilling and installing roof bolts in epoxy to hold up the ceilings. I have an older video on that process if you want to learn more after this. The challenge in this case is that, eventually, we want the roof bolts to fail, allowing a gentle collapse of salt to fill the void because it does an important job. The salt, and just being deep underground in general, acts to shield the environment from radiation. 
In fact, a deep salt mine is such a well-shielded area that there’s an experimental laboratory located in WIPP on the other side of the underground from the waste panels where various universities do cutting-edge physics experiments precisely because of the low radiation levels. The thousands of feet of material above the lab shield it from cosmic and solar radiation, and the salt has much lower levels of inherent radioactivity than other kinds of rock. Imagine that: a low-radiation lab inside a nuclear waste dump. Four shafts extend from the surface into the underground repository for moving people, waste, and air into and out of the facility. Room-and-pillar mining is used to excavate horizontal drifts or panels where waste is stored. Investigators were eventually able to re-enter the repository and search for the cause of the breach. They found the source in Panel 7, Room 7, the area of active disposal at the time. Pressure and heat had burst a drum, starting a fire, damaging nearby containers, and ultimately releasing radioactive materials into the air. On activation of the radiation alarm, the underground ventilation system automatically switched to filtration mode, sending air through massive HEPA filters. Interestingly, although they’re a pretty common consumer good now, High Efficiency Particulate Air, or HEPA, filters actually got their start during the Manhattan Project specifically to filter radionuclides from the air. The ventilation system at WIPP performed well, although there was some leakage, allowing a small percentage of radioactive material to bypass the filters and escape directly into the atmosphere at the surface. Twenty-one workers tested positive for low-level exposure to radioactive contamination but, thankfully, were unharmed. Both WIPP and independent testing organizations confirmed that detected levels were very low, the particles did not spread far, and were extremely unlikely to result in radiation-related health effects to workers or the public. Thankfully, the safety features at the facility worked, but it would take investigators much longer to understand what went wrong in the first place, and that involved tracing that waste barrel back to its source. It all started at the Los Alamos National Laboratory, one of the labs created as part of the 1940s Manhattan Project that first developed atomic bombs in the desert of New Mexico. The 1970s brought a renewed interest in cleaning up various Department of Energy sites. Los Alamos was tasked with recovering plutonium from residue materials left over from previous wartime and research efforts. That process involved using nitric acid to separate plutonium from uranium. Once plutonium is extracted, you’re left with nitrate solutions that get neutralized or evaporated, creating a solid waste stream that contains residual radioactive isotopes. In 1985, a volume of this waste was placed in a lead-lined 55-gallon drum along with an absorbent to soak up any moisture and put into temporary storage at Los Alamos, where it sat for years. But in the summer of 2011, the Las Conchas wildfire threatened the Los Alamos facility, coming within just a few miles of the storage area. This actual fire lit a metaphorical fire under various officials, and wheels were set into motion to get the transuranic waste safely into a long-term storage facility. In other words, ship it down the road to WIPP.
Transporting transuranic wastes on the road from one facility to another is quite an ordeal, even when they’re only going through the New Mexican desert. There are rules preventing the transportation of ignitable, corrosive, or reactive waste, and special casks are required to minimize the risk of radiological release in the unlikely event of a crash. WIPP also had rules about how waste can be packaged in order to be placed for long-term disposal called the Waste Acceptance Criteria, which included limits on free liquids. Los Alamos concluded that barrel didn’t meet the requirements and needed to be repackaged before shipping to WIPP. But, there were concerns about which absorbent to use. Los Alamos used various absorbent materials within waste barrels over the years to minimize the amount of moisture and free liquid inside. Any time you’re mixing nuclear waste with another material, you have to be sure there won’t be any unexpected reactions. The procedure for repackaging nitrate salts required that a superabsorbent polymer be used, similar to the beads I’ve used in some of my demos, but concerns about reactivity led to meetings and investigations about whether it was the right material for the job. Ultimately, Los Alamos and their contractors concluded that the materials were incompatible and decided to make a switch. In May 2012, Los Alamos published a white paper titled “Amount of Zeolite Required to Meet the Constraints Established by the EMRTC Report RF 10-13: Application of LANL Evaporator Nitrate Salts.” In other words, “How much kitty litter should be added to radioactive waste?” The answer was about 1.2 to 1, inorganic zeolite clay to nitrate salt waste, by volume. That guidance was then translated into the actual procedures that technicians would use to repackage the waste in gloveboxes at Los Alamos. But something got lost in translation. As far as investigators could determine, here’s what happened: In a meeting in May 2012, the manager responsible for glovebox operations took personal notes about this switch in materials. Those notes were sent in an email and eventually incorporated into the written procedures: “Ensure an organic absorbent is added to the waste material at a minimum of 1.5 absorbent to 1 part waste ratio.” Did you hear that? The white paper’s requirement to use an inorganic absorbent became “...an organic absorbent” in the procedures. We’ll never know where the confusion came from, but it could have been as simple as mishearing the word in the meeting. Nonetheless, that’s what the procedure became. Contractors at Los Alamos procured a large quantity of Swheat Scoop, an organic, wheat-based cat litter, and started using it to repackage the nitrate salt wastes. Our barrel first packaged in 1985 was repackaged in December 2013 with the new kitty litter. It was tested and certified in January 2014, shipped to WIPP later that month, and placed underground. And then it blew up. The unthinkable had happened; the wrong kind of kitty litter had caused a nuclear disaster. While the nitrates are relatively unreactive with inorganic, mineral-based zeolite kitty litter that should have been used, the organic, carbon-based wheat material could undergo oxidation reactions with nitrate wastes. I think it’s also interesting to note here that the issue is a reaction that was totally unrelated to the presence of transuranic waste. It was a chemical reaction - not a nuclear reaction - that caused the problem. 
Ultimately, the direct cause of the incident was determined to be “an exothermic reaction of incompatible materials in LANL waste drum 68660 that led to thermal runaway, which resulted in over-pressurization of the drum, breach of the drum, and release of a portion of the drum’s contents (combustible gases, waste, and wheat-based absorbent) into the WIPP underground.” Of course, the root cause is deeper than that and has to do with systemic issues at Los Alamos and how they handled the repackaging of the material. The investigation report identified 12 contributing causes that, while they did not individually cause the accident, increased its likelihood or severity. These are written in a way that is pretty difficult for a non-DOE expert to parse: take a stab at digesting contributing cause number 5: “Failure of Los Alamos Field Office (NA-LA) and the National Transuranic (TRU) Program/Carlsbad Field Office (CBFO) to ensure that the CCP [that is, the Central Characterization Program] and LANS [that is, the contractor, Los Alamos National Security] complied with Resource Conservation and Recovery Act (RCRA) requirements in the WIPP Hazardous Waste Facility Permit (HWFP) and the LANL HWFP, as well as the WIPP Waste Acceptance Criteria (WAC).” Still, as bad as it all seems, it really could have been a lot worse. In a sense, WIPP performed precisely how you’d want it to in such an event, and it’s a really good thing the barrel was in the underground when it burst. Had the same thing happened at Los Alamos or on the way to WIPP, things could have been much worse. Thankfully, none of the other barrels packaged in the same way experienced a thermal runaway, and they were later collected and sealed in larger containers. Regardless, the consequences of the “cat-astrophe” were severe and very expensive. The cleanup involved shutting down the WIPP facility for several years and entirely replacing the ventilation system. WIPP itself didn’t formally reopen until January of 2017, nearly three full years after the incident, with the cleanup costing about half a billion dollars. Today, WIPP remains controversial, not least because of shifting timelines and public communication. Early estimates once projected closure by 2024. Now, that date is sometime between 2050 and 2085. And events like this only add fuel to the fire. Setting aside broader debates on nuclear weapons themselves, the wastes these weapons generate are dangerous now, and they will remain dangerous for generations. WIPP has even explored ideas on how to mark the site post-closure, making sure that future generations clearly understand the enduring danger. Radioactive hazards persist long after languages and societies may have changed beyond recognition, making it essential but challenging to communicate clearly about risks. Sometimes, it’s easy to forget - amidst all the technical complexity and bureaucratic red tape that surrounds anything nuclear - that it’s just people doing the work. It’s almost unbelievable that we trust ourselves - squishy, sometimes hapless bags of water, meat, and bones - to navigate protocols of such profound complexity needed to safely take advantage of radioactive materials. I don’t tell this story because I think we should be paralyzed by the idea of using nuclear materials - there are enormous benefits to be had in many areas of science, engineering, and medicine. 
But there are enormous costs as well, many of which we might not be aware of if we don’t make it a habit to read obscure government investigation reports. This event is a reminder that the extent of our vigilance has to match the permanence of the hazards we create.


More in science

How Sewage Recycling Works

[Note that this article is a transcript of the video embedded above.] Wichita Falls, Texas, went through the worst drought in its history in 2011 and 2012. For two years in a row, the area saw its average annual rainfall roughly cut in half, decimating the levels in the three reservoirs used for the city’s water supply. Looking ahead, the city realized that if the hot, dry weather continued, they would be completely out of water by 2015. Three years sounds like a long runway, but when it comes to major public infrastructure projects, it might as well be overnight. Between permitting, funding, design, and construction, three years barely gets you to the starting line. So the city started looking for other options. And they realized there was one source of water nearby that was just being wasted - millions of gallons per day just being flushed down the Wichita River. I’m sure you can guess where I’m going with this. It was the effluent from their sewage treatment plant. The city asked the state regulators if they could try something that had never been done before at such a scale: take the discharge pipe from the wastewater treatment plant and run it directly into the purification plant that produces most of the city’s drinking water. And the state said no. So they did some more research and testing and asked again. By then, the situation had become an emergency. This time, the state said yes. And what happened next would completely change the way cities think about water. I’m Grady and this is Practical Engineering. You know what they say, wastewater happens. It wasn’t that long ago that raw sewage was simply routed into rivers, streams, or the ocean to be carried away. Thankfully, environmental regulations put a stop to that, or at least significantly curbed the amount of wastewater being set loose without treatment. Wastewater plants across the world do a pretty good job of removing pollutants these days. In fact, I have a series of videos that go through some of the major processes if you want to dive deeper after this. In most places, the permits that allow these plants to discharge set strict limits on contaminants like organics, suspended solids, nutrients, and bacteria. And in most cases, they’re individualized. The permit limits are based on where the effluent will go, how that water body is used, and how well it can tolerate added nutrients or pollutants. And here’s where you start to see the issue with reusing that water: “clean enough” is a sliding scale. Depending on how water is going to be used or what or who it’s going to interact with, our standards for cleanliness vary. If you have a dog, you probably know this. They should drink clean water, but a few sips of a mud puddle in a dirty street, and they’re usually just fine. For you, that might be a trip to the hospital. Natural systems can tolerate a pretty wide range of water quality, but when it comes to drinking water for humans, it should be VERY clean. So the easiest way to recycle treated wastewater is to use it in ways that don’t involve people. That idea’s been around for a while. A lot of wastewater treatment plants apply effluent to land as a disposal method, avoiding the need for discharge to a natural water body. Water soaks into the ground, kind of like a giant septic system. But that comes with some challenges. It only works if you’ve got a lot of land with no public access, and a way to keep the spray from drifting into neighboring properties. 
Easy at a small scale, but for larger plants, it just isn’t practical engineering. Plus, the only benefits a utility gets from the effluent are some groundwater recharge and maybe a few hay harvests per season. So, why not send the effluent to someone else who can actually put it to beneficial use? If only it were that simple. As soon as a utility starts supplying water to someone else, things get complicated because you lose a lot of control over how the effluent is used. Once it's out of your hands, so to speak, it’s a lot harder to make sure it doesn’t end up somewhere it shouldn’t, like someone’s mouth. So, naturally, the permitting requirements become stricter. Treatment processes get more complicated and expensive. You need regular monitoring, sampling, and laboratory testing. In many places in the world, reclaimed water runs in purple pipes so that someone doesn’t inadvertently connect to the lines thinking they’re potable water. In many cases, you need an agreement in place with the end user, making sure they’re putting up signs, fences, and other means of keeping people from drinking the water. And then you need to plan for emergencies - what to do if a pipe breaks, if the effluent quality falls below the standards, or if a cross-connection is made accidentally. It’s a lot of work - time, effort, and cost - to do it safely and follow the rules. And those costs have to be weighed against the savings that reusing water creates. In places that get a lot of rain or snow, it’s usually not worth it. But in many US states, particularly those in the southwest, this is a major strategy to reduce the demand on fresh water supplies. Think about all the things we use water for where its cleanliness isn’t that important. Irrigation is a big one - crops, pastures, parks, highway landscaping, cemeteries - but that’s not all. Power plants use huge amounts of water for cooling. Street sweeping, dust control. In nearly the entire developed world, we use drinking-quality water to flush toilets! You can see where there might be cases where it makes good sense to reclaim wastewater, and despite all the extra challenges, its use is fairly widespread. One of the first plants was built in 1926 at Grand Canyon Village, which supplied reclaimed water to a power plant and for use in steam locomotives. Today, these systems can be massive, with miles and miles of purple pipes running entirely separate from the freshwater piping. I’ve talked about this a bit on the channel before. I used to live near a pair of water towers in San Antonio that were at two different heights above ground. That just didn’t make any sense until I realized they weren’t connected; one of them was for the reclaimed water system that didn’t need as much pressure in the lines. Places like Phoenix, Austin, San Antonio, Orange County, Irvine, and Tampa all have major water reclamation programs. And it’s not just a US thing. Abu Dhabi, Beijing, and Tel Aviv all have infrastructure to make beneficial use of treated municipal wastewater, just to name a few. Because of the extra treatment and requirements, many places put reclaimed water in categories based on how it gets used. The higher the risk of human contact, the tighter the pollutant limits get. For example, if a utility is just selling effluent to farmers, ranchers, or for use in construction, exposure to the public is minimal. Disinfecting the effluent with UV or chlorine may be enough to meet requirements. And often that’s something that can be added pretty simply to an existing plant. 
But many reclaimed water users are things like golf courses, schoolyards, sports fields, and industrial cooling towers, where people are more likely to be exposed. In those cases, you often need a sewage plant specifically designed for the purpose or at least major upgrades to include what the pros call tertiary treatment processes - ways to target pollutants we usually don’t worry about and improve the removal rates of the ones we do. These can include filters to remove suspended solids, chemicals that bind to nutrients, and stronger disinfection to more effectively kill pathogens. This creates a conundrum, though. In many cases, we treat wastewater effluent to higher standards than we normally would in order to reclaim it, but only for nonpotable uses, with strict regulations about human contact. But if it’s not being reclaimed, the quality standards are lower, and we send it downstream. If you know how rivers work, you probably see the inconsistency here. Because in many places, down the river is the next city with its water purification plant, whose intakes, in effect, reclaim that treated sewage from the people upstream. This isn’t theoretical - it’s just the reality of how humans interact with the water cycle. We’ve struggled with the problems it causes for ages. In 1906, Missouri sued Illinois in the Supreme Court when Chicago reversed its river, redirecting its water (and all the city’s sewage) toward the Mississippi River. If you live in Houston, I hate to break it to you, but a big portion of your drinking water comes from the flushes and showers in Dallas. There have been times when wastewater effluent makes up half of the flow in the Trinity River. But the question is: if they can do it, why can’t we? If our wastewater effluent is already being reused by the city downstream to purify into drinking water, why can’t we just keep the effluent for ourselves and do the same thing? And the answer again is complicated. It starts with what’s called an environmental buffer. Natural systems offer time to detect failures, dilute contaminants, and even clean the water a bit—sunlight disinfects, bacteria consume organic matter. That’s the big difference when one city, in effect, reclaims water from another upstream: there’s nature in between. So a lot of water reclamation systems, called indirect potable reuse, do the same thing: you discharge the effluent into a river, lake, or aquifer, then pull it out again later for purification into drinking water. By then, it’s been diluted and treated somewhat by the natural systems. Direct potable reuse projects skip the buffer and pipe straight from one treatment plant to the next. There’s no margin for error provided by the environmental buffer. So, you have to engineer those same protections into the system: real-time monitoring, alarms, automatic shutdowns, and redundant treatment processes. Then there’s the issue of contaminants of emerging concern: pharmaceuticals, PFAS [P-FAS], personal care products - things that pass through people or households and end up in wastewater in tiny amounts. Individually, they’re in parts per billion or trillion. But when you close the loop and reuse water over and over, those trace compounds can accumulate. Many of these aren’t regulated because they’ve never reached concentrations high enough to cause concern, or there just isn’t enough knowledge about their effects yet. That’s slowly changing, and it presents a big challenge for reuse projects. 
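To make that accumulation point a bit more concrete, here is a toy back-of-the-envelope sketch in Python. It assumes a single well-mixed supply that gets reused every cycle, a fixed dose of some trace compound added by households each pass, and a treatment train that removes a fixed fraction; the doses and removal fractions are made-up numbers for illustration, not data from any real utility.

```python
# Back-of-the-envelope sketch of trace-contaminant buildup in a closed reuse loop.
# All numbers are illustrative assumptions, not measured or regulatory values.

def simulate_loop(cycles, added_per_cycle_ng_l, removal_fraction):
    """Concentration (ng/L) after each reuse cycle for a single well-mixed supply."""
    conc = 0.0
    history = []
    for _ in range(cycles):
        conc += added_per_cycle_ng_l      # households add a trace compound each pass
        conc *= (1.0 - removal_fraction)  # the treatment train removes a fixed fraction
        history.append(conc)
    return history

for removal in (0.10, 0.50, 0.99):
    trace = simulate_loop(cycles=50, added_per_cycle_ng_l=100.0, removal_fraction=removal)
    steady_state = 100.0 * (1 - removal) / removal   # where the loop levels off
    single_pass = 100.0 * (1 - removal)              # what one trip through treatment leaves
    print(f"removal {removal:4.0%}: after 50 cycles = {trace[-1]:7.1f} ng/L "
          f"(steady state = {steady_state:7.1f} ng/L, single pass = {single_pass:5.1f} ng/L)")
```

The takeaway from the toy model is that compounds the treatment train barely touches creep toward a much higher steady-state level than a single pass would suggest, which is part of why reuse programs worry about contaminants that conventional plants were never designed to remove.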
These contaminants can be dealt with at the source by regulating consumer products, encouraging proper disposal of pharmaceuticals (instead of flushing them), and imposing pretreatment requirements for industries. It can also happen at the treatment plant with advanced technologies like reverse osmosis, activated carbon, advanced oxidation, and bio-reactors that break down micro-contaminants. Either way, it adds cost and complexity to a reuse program. But really, the biggest problem with wastewater reuse isn’t technical - it’s psychological. The so-called “yuck factor” is real. People don’t want to drink sewage. Indirect reuse projects have a big benefit here. With some nature in between, it’s not just treated wastewater; it’s a natural source of water with treated wastewater in it. It’s kind of a story we tell ourselves, but we lose the benefit of that with direct reuse: Knowing your water came from a toilet—even if it’s been purified beyond drinking water standards—makes people uneasy. You might not think about it, but turning the tap on, putting that water in a glass, and taking a drink is an enormous act of trust. Most of us don’t understand water treatment and how it happens at a city scale. So that trust that it’s safe to drink largely comes from seeing other people do it and past experience of doing it over and over and not getting sick. The issue is that, when you add one bit of knowledge to that relative void of understanding - this water came directly from sewage - it throws that trust off balance. It forces you to rely not on past experience but on the people and processes in place, most of which you don’t understand deeply, and generally none of which you can actually see. It’s not as simple as just revulsion. It shakes up your entire belief system. And there’s no engineering fix for that. Especially for direct potable reuse, public trust is critical. So on top of the infrastructure, these programs also involve major public awareness campaigns. Utilities have to put themselves out there, gather feedback, respond to questions, be empathetic to a community’s values, and try to help people understand how we ensure water quality, no matter what the source is. But also, like I said, a lot of that trust comes from past experience. Not everyone can be an environmental engineer or licensed treatment plant operator. And let’s be honest - utilities can’t reach everyone. How many public meetings about water treatment have you ever attended? So, in many places, that trust is just going to have to be built by doing it right, doing it well, and doing it for a long time. But, someone has to be first. In the U.S., at least on the city scale, that drinking water guinea pig was Wichita Falls. They launched a massive outreach campaign, invited experts for tours, and worked to build public support. But at the end of the day, they didn’t really have a choice. The drought really was that severe. They spent nearly four years under intense water restrictions. Usage dropped to a third of normal demand, but it still wasn’t enough. So, in collaboration with state regulators, they designed an emergency direct potable reuse system. They literally helped write the rules as they went, since no one had ever done it before. After two months of testing and verification, they turned on the system in July 2014. It made national headlines. The project ran for exactly one year. Then, in 2015, a massive flood ended the drought and filled the reservoirs in just three weeks. 
The emergency system was always meant to be temporary. Water essentially went through three treatment plants: the wastewater plant, a reverse osmosis plant, and then the regular water purification plant. That’s a lot of treatment, which is a lot of expense, but they needed to have the failsafe and redundancy to get the state on board with the project. The pipe connecting the two plants was above ground and later repurposed for the city’s indirect potable reuse system, which is still in use today. In the end, they reclaimed nearly two billion gallons of wastewater as drinking water. And they did it with 100% compliance with the standards. But more importantly, they showed that it could be done, essentially unlocking a new branch on the skill tree of engineering that other cities can emulate and build on.
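One footnote on why that three-plant chain buys so much safety margin: engineers often describe each barrier by the fraction of a contaminant or pathogen it removes, usually expressed as a log reduction, and independent barriers in series add their logs together. The stage names in this sketch mirror the Wichita Falls chain described above, but the log-reduction values are generic placeholders, not the plant's actual performance or permit requirements.

```python
# Illustrative log-reduction values (LRVs) for a pathogen passing through a
# chain of treatment barriers. Placeholder numbers for the concept only.
treatment_train = {
    "conventional wastewater plant": 2.0,  # 99% removed
    "reverse osmosis plant":         3.0,  # 99.9% of what remains
    "drinking water purification":   2.0,  # 99% of what remains
}

total_lrv = sum(treatment_train.values())
print(f"Combined log reduction: {total_lrv:.1f} "
      f"({10 ** -total_lrv:.0e} of the original concentration gets through)")

# Redundancy in action: even if one barrier underperforms by a full log,
# the rest of the chain still removes the overwhelming majority.
degraded_lrv = total_lrv - 1.0
print(f"One barrier off by a log: {degraded_lrv:.1f} logs "
      f"({10 ** -degraded_lrv:.0e} gets through)")
```

That stacking of barriers is what substitutes for the forgiveness an environmental buffer would otherwise provide in an indirect reuse scheme.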

Why JPEGs Still Rule the Web

A version of this post originally appeared on Tedium, Ernie Smith’s newsletter, which hunts for the end of the long tail. For roughly three decades, the JPEG has been the World Wide Web’s primary image format. But it wasn’t the one the Web started with. In fact, the first mainstream graphical browser, NCSA Mosaic, didn’t initially support inline JPEG files—just inline GIFs, along with a couple of other formats forgotten to history. However, the JPEG had many advantages over the format it quickly usurped. Despite not appearing together right away—inline JPEG support first appeared in Netscape in 1995, three years after the image standard was officially published—the JPEG and web browser fit together naturally. JPEG files degraded more gracefully than GIFs, retaining more of the picture’s initial form—and that allowed the format to scale to greater levels of success. While it wasn’t capable of animation, it progressively expanded from something a modem could pokily render to a format that was good enough for high-end professional photography. For the internet’s purposes, the degradation was the important part. But it wasn’t the only thing that made the JPEG immensely valuable to the digital world. An essential part was that it was a documented standard built by numerous stakeholders.
The GIF was a de facto standard. The JPEG was an actual one
How important is it that JPEG was a standard? Let me tell you a story. During a 2013 New York Times interview conducted just before he received an award honoring his creation, GIF creator Steve Wilhite stepped into a debate he unwittingly created. Simply put, nobody knew how to pronounce the acronym for the image format he had fostered, the Graphics Interchange Format. He used the moment to attempt to set the record straight—it was pronounced like the peanut butter brand: “It is a soft ‘G,’ pronounced ‘jif.’ End of story,” he said. I posted a quote from Wilhite on my popular Tumblr around that time, a period when the social media site was the center of the GIF universe. And soon afterward, my post got thousands of reblogs—nearly all of them disagreeing with Wilhite. Soon, Wilhite’s quote became a meme. The situation highlights how Wilhite, who died in 2022, did not develop his format by committee. He could say it sounded like “JIF” because he built it himself. He was handed the project as a CompuServe employee in 1987; he produced the object, and that was that. The initial document describing how it works? Dead simple. 38 years later, we’re still using the GIF—but it never rose to the same prevalence as the JPEG. The JPEG, which formally emerged about five years later, was very much not that situation. Far from it, in fact—it’s the difference between a de facto standard and an actual one. And that proved essential to its eventual ubiquity.
We’re going to degrade the quality of this image throughout this article. At its full image size, it’s 13.7 megabytes. Irina Iriser
How the JPEG format came to life
Built with input from dozens of stakeholders, the Joint Photographic Experts Group ultimately aimed to create a format that fit everyone’s needs. (Reflecting its committee-led roots, there would be no confusion about the format’s name—an acronym of the organization that designed it.) And when the format was finally unleashed on the world, it was the subject of a more than 600-page book. JPEG: Still Image Data Compression Standard, written by IBM employees and JPEG organization stakeholders William B. Pennebaker and Joan L. 
Mitchell, describes a landscape of multimedia imagery, held back without a way to balance the need for photorealistic images and immediacy. Standardization, they believed, could fix this. “The problem was not so much the lack of algorithms for image compression (as there is a long history of technical work in this area),” the authors wrote, “but, rather, the lack of a standard algorithm—one which would allow an interchange of images between diverse applications.” And they were absolutely right. For more than 30 years, JPEG has made high-quality, high-resolution photography accessible in operating systems far and wide. Although we no longer need to compress JPEGs to within an inch of their life, having that capability helped enable the modern internet. As the book notes, Mitchell and Pennebaker were given IBM’s support to follow through on this research and work with the JPEG committee, and that support led them to develop many of the JPEG format’s foundational patents. As described in patents filed by Mitchell and Pennebaker in 1988, IBM and other members of the JPEG standards committee, such as AT&T and Canon, were developing ways to use compression to make high-quality images easier to deliver in confined settings. Each member brought their own needs to the process. Canon, obviously, was more focused on printers and photography, while AT&T’s interests were tied to data transmission. Together, the companies left behind a standard that has stood the test of time. All this means, funnily enough, that the first place a program capable of using JPEG compression appeared was not MacOS or Windows, but OS/2—a fascinating-but-failed graphical operating system created by Pennebaker and Mitchell’s employer, IBM. As early as 1990, OS/2 supported the format through the OS/2 Image Support application.
At 50 percent of its initial quality, the image is down to about 2.6 MB. By dropping half of the image’s quality, we brought it down to one-fifth of the original file size. Original image: Irina Iriser
What a JPEG does when you heavily compress it
The thing that differentiates a JPEG file from a PNG or a GIF is how the data degrades as you compress it. The goal for a JPEG image is to still look like a photo when all is said and done, even if some compression is necessary to make it all work at a reasonable size. That way, you can display something that looks close to the original image in fewer bytes. Or, as Pennebaker and Mitchell put it, “the most effective compression is achieved by approximating the original image (rather than reproducing it exactly).” Central to this is a compression process called discrete cosine transform (DCT), a lossy form of compression encoding heavily used in all sorts of compressed formats, most notably in digital audio and signal processing. Essentially, it delivers a lower-quality product by removing details, while still keeping the heart of the original product through approximation. The stronger the cosine transformation, the more compressed the final result. The algorithm, developed by researchers in the 1970s, essentially takes a grid of data and treats it as if you’re controlling its frequency with a knob. The data rate is controlled like water from a faucet: The more data you want, the higher the setting. DCT allows a trickle of data to still come out in highly compressed situations, even if it means a slightly compromised result. In other words, you may not keep all the data when you compress it, but DCT allows you to keep the heart of it. 
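Since this is the crux of the format, here is a small Python sketch of the idea. It is emphatically not the real JPEG codec (there is no color conversion, quantization table, zigzag scan, or entropy coding, and the 8x8 block is synthetic): it just builds the orthonormal DCT-II matrix, transforms a block, throws away most of the coefficients, and inverts what is left. The last loop also mimics the "progressive" behavior discussed a bit further down by revealing low-frequency coefficients first.

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix, the transform at the heart of JPEG."""
    k = np.arange(n)
    basis = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    basis[0, :] /= np.sqrt(2)
    return basis * np.sqrt(2 / n)

M = dct_matrix(8)

# A stand-in for one 8x8 patch of a photo: a smooth gradient plus mild texture.
rng = np.random.default_rng(42)
row, col = np.meshgrid(np.arange(8), np.arange(8), indexing="ij")
block = 120 + 6 * row + 3 * col + rng.normal(0, 2, (8, 8))

coeffs = M @ block @ M.T  # forward 2-D DCT: pixels -> frequency coefficients

# "Turn the knob down": keep only the 8 largest coefficients out of 64.
threshold = np.sort(np.abs(coeffs).ravel())[-8]
kept = np.where(np.abs(coeffs) >= threshold, coeffs, 0.0)
approx = M.T @ kept @ M   # inverse 2-D DCT
print(f" 8/64 coefficients kept: mean abs pixel error {np.abs(approx - block).mean():.2f}")

# Progressive-style refinement: reveal low frequencies first (a rough stand-in
# for the real zigzag scan) and watch the approximation sharpen.
order = np.argsort((row + col).ravel(), kind="stable")
for n_coeffs in (1, 3, 6, 15, 28, 64):
    mask = np.zeros(64, dtype=bool)
    mask[order[:n_coeffs]] = True
    partial = np.where(mask.reshape(8, 8), coeffs, 0.0)
    err = np.abs(M.T @ partial @ M - block).mean()
    print(f"{n_coeffs:2d}/64 coefficients received: mean abs pixel error {err:.2f}")
```

Even with most coefficients zeroed out, the reconstruction error stays small for photo-like content, which is the "approximating the original image" idea from Pennebaker and Mitchell's book; the refinement loop is roughly the trick a progressive JPEG uses to show a blurry-but-complete preview over a slow connection.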
(See this video for a more technical but still somewhat easy-to-follow description of DCT.) DCT is everywhere. If you have ever seen a streaming video or an online radio stream that degraded in quality because your bandwidth suddenly declined, you’ve witnessed DCT being utilized in real time. A JPEG file doesn’t have to leverage the DCT with just one method, as JPEG: Still Image Data Compression Standard explains: The JPEG standard describes a family of large image compression techniques, rather than a single compression technique. It provides a “tool kit” of compression techniques from which applications can select elements that satisfy their particular requirements. The toolkit has four modes:
- Sequential DCT, which displays the compressed image in order, like a window shade slowly being rolled down
- Progressive DCT, which displays the full image in the lowest-resolution format, then adds detail as more information rolls in
- Sequential lossless, which uses the window shade format but doesn’t throw away any image data
- Hierarchical mode, which combines the prior three modes—so maybe it starts with a progressive mode, then loads DCT compression slowly, but then reaches a lossless final result
At the time the JPEG was being created, modems were extremely common. That meant images loaded slowly, making Progressive DCT the most fitting format for the early internet. Over time, the progressive DCT mode has become less common, as many computers can simply load the sequential DCT in one fell swoop.
That same forest, saved at 5 percent quality. Down to about 419 kilobytes. Original image: Irina Iriser
When an image is compressed with DCT, the change tends to be less noticeable in busier, more textured areas of the picture, like hair or foliage. Those areas are harder to compress, which means they keep their integrity longer. It tends to be more noticeable, however, with solid colors or in areas where the image sharply changes from one color to another—like text on a page. Ever screenshot a social media post, only for it to look noisy? Congratulations, you just made a JPEG file. Other formats, like PNG, do better with text, because their compression format is intended to be non-lossy. (Side note: PNG’s compression format, DEFLATE, was designed by Phil Katz, who also created the ZIP format. The PNG format uses it in part because it was a license-free compression format. So it turns out the brilliant coder with the sad life story improved the internet in multiple ways before his untimely passing.) In many ways, the JPEG is one tool in our image-making toolkit. Despite its age and maturity, it remains one of our best options for sharing photos on the internet. But it is not a tool for every setting—despite the fact that, like a wrench sometimes used as a hammer, we often leverage it that way.
Forgent Networks claimed to own the JPEG’s defining algorithm
The JPEG format gained popularity in the ’90s for reasons beyond the quality of the format. Patents also played a role: Starting in 1994, the tech company Unisys attempted to bill individual users who relied on GIF files, which used a patent the company owned. This made the free-to-use JPEG more popular. (This situation also led to the creation of the patent-free PNG format.) While the JPEG was standards-based, it could still have faced the same fate as the GIF, thanks to the quirks of the patent system. 
A few years before the file format came to life, a pair of Compression Labs employees filed a patent application that dealt with the compression of motion graphics. By the time anyone noticed its similarity to JPEG compression, the format was ubiquitous.
Our forest, saved at 1 percent quality. This image is only about 239 KB in size, yet it’s still easily recognizable as the same photo. That’s the power of the JPEG. Original image: Irina Iriser
Then in 1997, a company named Forgent Networks acquired Compression Labs. The company eventually spotted the patent and began filing lawsuits over it, a series of events it saw as a stroke of good luck. “The patent, in some respects, is a lottery ticket,” Forgent Chief Financial Officer Jay Peterson told CNET in 2005. “If you told me five years ago that ‘You have the patent for JPEG,’ I wouldn’t have believed it.” While Forgent’s claim of ownership of the JPEG compression algorithm was tenuous, it ultimately saw more success with its legal battles than Unisys did. The company earned more than $100 million from digital camera makers before the patent finally ran out of steam around 2007. The company also attempted to extract licensing fees from the PC industry. Eventually, Forgent agreed to a modest $8 million settlement. As the company took an increasingly aggressive approach to its acquired patent, it began to lose battles both in the court of public opinion and in actual courtrooms. Critics pounced on examples of prior art, while courts limited the patent’s use to motion-based uses like video. By 2007, Forgent’s compression patent expired—and its litigation-heavy approach to business went away. That year, the company became Asure Software, which now specializes in payroll and HR solutions. Talk about a reboot.
Why the JPEG won’t die
The JPEG file format has served us well. It’s been difficult to remove the format from its perch. The JPEG 2000 format, for example, was intended to supplant it by offering more lossless options and better performance. The format is widely used by the Library of Congress and specialized sites like the Internet Archive; however, it is less popular as an end-user format.
See the forest JPEG degrade from its full resolution to 1 percent quality in this GIF. Original image: Irina Iriser
Other image technologies have had somewhat more luck getting past the JPEG format. The Google-supported WebP is popular with website developers (and controversial with end users). Meanwhile, the formats AVIF and HEIC, each developed by standards bodies, have largely outpaced both JPEG and JPEG 2000. Still, the JPEG will be difficult to kill at this juncture. These days, the format is similar to MP3 or ZIP files—two legacy formats too popular and widely used to kill. Other formats that compress the files better and do the same things more efficiently are out there, but it’s difficult to topple a format with a 30-year head start. Shaking off the JPEG is easier said than done. I think most people will be fine to keep it around. Ernie Smith is the editor of Tedium, a long-running newsletter that hunts for the end of the long tail.

How to redraw a city

The planning trick that created Japan's famous urbanism

Wet Labs Shouldn’t Be Boring (for young scientists) | Out-Of-Pocket

This is the first touchpoint for science; we should make it more enticing

The magic of through running

By weaving together existing railway lines, some cities can get the best transit in the world
