[Note that this article is a transcript of the video embedded above.] The Earth is pretty cool and all, but many of its most magnificent features make it tough for us to get around. When the topography is too wet, steep, treacherous, or prone to disaster, sometimes the only way forward is up: our roadways and walkways and railways break free from the surface using bridges. A lot of the infrastructure we rely on day to day isn’t necessarily picturesque. It’s not that we can’t build exquisite electrical transmission lines or stunning sanitary sewers. It’s just that we rarely want to bear the cost. But bridges are different. To an enthusiast of constructed works, many are downright breathtaking. There are so many ways to cross a gap, all kindred in function but contrary in form. And the typical way that engineers classify and name them is in how each design manages the incredible forces involved. Like everything in engineering, terminology and categories vary. As Alfred Korzybski said,...

More from Blog - Practical Engineering

Why are Smokestacks So Tall?

[Note that this article is a transcript of the video embedded above.] “The big black stacks of the Illium Works of the Federal Apparatus Corporation spewed acid fumes and soot over the hundreds of men and women who were lined up before the red-brick employment office.” That’s the first line of one of my favorite short stories, written by Kurt Vonnegut in 1955. It paints a picture of a dystopian future that, thankfully, didn’t really come to be, in part because of those stacks. In some ways, air pollution is kind of a part of life. I’d love to live in a world where the systems, materials and processes that make my life possible didn’t come with any emissions, but it’s just not the case... From the time that humans discovered fire, we’ve been methodically calculating the benefits of warmth, comfort, and cooking against the disadvantages of carbon monoxide exposure and particulate matter less than 2.5 microns in diameter… Maybe not in that exact framework, but basically, since the dawn of humanity, we’ve had to deal with smoke one way or another. Since we can’t accomplish much without putting unwanted stuff into the air, the next best thing is to manage how and where it happens to try and minimize its impact on public health. Of course, any time you have a balancing act between technical issues, the engineers get involved, not so much to help decide where to draw the line, but to develop systems that can stay below it. And that’s where the smokestack comes in. Its function probably seems obvious; you might have a chimney in your house that does a similar job. But I want to give you a peek behind the curtain into the Illium Works of the Federal Apparatus Corporation of today and show you what goes into engineering one of these stacks at a large industrial facility. I’m Grady, and this is Practical Engineering. We put a lot of bad stuff in the air, and in a lot of different ways.
There are roughly 200 regulated hazardous air pollutants in the United States, many with names I can barely pronounce. In many cases, the industries that would release these contaminants are required to deal with them at the source. A wide range of control technologies are put into place to clean dangerous pollutants from the air before it’s released into the environment. One example is coal-fired power plants. Coal, in particular, releases a plethora of pollutants when combusted, so, in many countries, modern plants are required to install control systems. Catalytic reactors remove nitrogen oxides. Electrostatic precipitators collect particulates. Scrubbers use lime (the mineral, not the fruit) to strip away sulfur dioxide. And I could go on. In some cases, emission control systems can represent a significant proportion of the costs involved in building and operating a plant. But these primary emission controls aren’t always feasible for every pollutant, at least not for 100 percent removal. There’s a very old saying that “the solution to pollution is dilution.” It’s not really true on a global scale. Case in point: There’s no way to dilute the concentration of carbon dioxide in the atmosphere, or rather, it’s already as dilute as it’s going to get. But, it can be true on a local scale. Many pollutants that affect human health and the environment are short-lived; they chemically react or decompose in the atmosphere over time instead of accumulating indefinitely. And, for a lot of chemicals, there are concentration thresholds below which the consequences on human health are negligible. In those cases, dilution, or really dispersion, is a sound strategy to reduce their negative impacts, and so, in some cases, that’s what we do, particularly at major point sources like factories and power plants. One of the tricks to dispersion is that many plumes are naturally buoyant. Naturally, I’m going to use my pizza oven to demonstrate this.
Not all, but most pollutants we care about are a result of combustion: burning stuff up. So the plume is usually hot. We know hot air is less dense, so it naturally rises. And the hotter it is, the faster that happens. You can see when I first start the fire, there’s not much air movement. But as the fire gets hotter in the oven, the plume speeds up, ultimately rising higher into the air. That’s the whole goal: get the plume high above populated areas where the pollutants can be dispersed to a minimally-harmful concentration. It sounds like a simple solution - just run our boilers and furnaces super hot to get enough buoyancy for the combustion products to disperse. The problem with that solution is that the whole reason we combust things is usually to recover the heat. So if you’re sending a lot of that heat out of the system, just because it makes the plume disperse better, you’re losing thermodynamic efficiency. It’s wasteful. That’s where the stack comes in. Let me put mine on and show you what I mean. I took some readings with the anemometers with the stack on and off. The airspeed with the stack on was around double what it was with the stack off: about one meter per second without it compared with two with it. But it’s a little tougher to understand why. It’s intuitive that as you move higher in a column of fluid, the pressure goes down (since there’s less weight of the fluid above). The deeper you dive in a pool, the more pressure you feel. The higher you fly in a plane or climb a mountain, the lower the pressure. The slope of that line is proportional to a fluid’s density. You don’t feel much of a pressure difference climbing a set of stairs because air isn’t very dense. If you travel the same distance in water, you’ll definitely notice the difference. So let’s look at two columns of fluid. One is the ambient air and the other is the air inside a stack. Since it’s hotter, the air inside the stack is less dense.
Both columns start at the same pressure at the bottom, but the higher you go, the more the pressure diverges. It’s kind of like deep sea diving in reverse. In water, the deeper you go into the dense water, the greater the pressure you feel. In a stack, the higher you are in a column of hot air, the more buoyant you feel compared to the outside air. This is the genius of a smokestack. It creates this difference in pressure between the inside and outside that drives greater airflow for a given temperature. Here’s the basic equation for a stack effect. I like to look at equations like this divided into what we can control and what we can’t. We don’t get to adjust the atmospheric pressure, the outside temperature, and this is just a constant. But you can see, with a stack, an engineer now has two knobs to turn: the temperature of the gas inside and the height of the stack. I did my best to keep the temperature constant in my pizza oven and took some airspeed readings. First with no stack. Then with the stock stack. Then with a megastack. By the way, this melted my anemometer; should have seen that coming. Thankfully, I got the measurements before it melted. My megastack nearly doubled the airspeed again at around three-and-a-half meters per second versus the two with just the stack that came with the oven. There’s something really satisfying about this stack effect to me. No moving parts or fancy machinery. Just add a longer pipe and you’ve fundamentally changed the physics of the whole situation. And it’s a really important tool in the environmental engineer’s toolbox to increase airflow upward, allowing contaminants to flow higher into the atmosphere where they can disperse. But this is not particularly revolutionary… unless you’re talking about the Industrial Revolution. When you look at all the pictures of the factories in the 19th century, those stacks weren’t there to improve air quality, if you can believe it.
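To make that two-knob idea concrete: under an ideal-gas assumption, the draft pressure works out to ΔP = g·h·P·(M/R)·(1/T_outside − 1/T_inside). The sketch below is my own illustration of that relationship, not the exact equation or the measurements from the video, and the temperatures are made-up round numbers.

```python
# Draft pressure from the density difference between the hot column inside
# a stack and the cooler ambient air outside. Illustrative sketch only.
G = 9.81          # gravitational acceleration, m/s^2
M_AIR = 0.02896   # molar mass of air, kg/mol
R = 8.314         # universal gas constant, J/(mol*K)

def stack_draft_pa(height_m, t_inside_c, t_outside_c, p_atm_pa=101_325):
    """Pressure difference (Pa) driving flow up a stack of a given height."""
    t_in = t_inside_c + 273.15    # flue gas temperature, K
    t_out = t_outside_c + 273.15  # ambient temperature, K
    # Hotter inside -> lower density inside -> positive draft
    return G * height_m * p_atm_pa * (M_AIR / R) * (1 / t_out - 1 / t_in)

# Both knobs are visible: double the height or raise the gas temperature,
# and the draft goes up.
print(stack_draft_pa(1.0, 250, 20))  # roughly 5 Pa for a 1 m stack
print(stack_draft_pa(2.0, 250, 20))  # exactly twice that for a 2 m stack
```

Note the limit the article gets to next: this assumes the gas stays at one temperature all the way up, which is exactly why ever-taller stacks eventually stop helping.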
The increased airflow generated by a stack just created more efficient combustion for the boilers and furnaces. Any benefits to air quality in the cities were secondary. With the advent of diesel and electric motors, we could use forced drafts, reducing the need for a tall stack to increase airflow. That was kind of the decline of the forests of industrial chimneys that marked the landscape in the 19th century. But they’re obviously not all gone, because that secondary benefit of air quality turned into the primary benefit as environmental rules about air pollution became stricter. Of course, there are some practical limits that aren’t taken into account by that equation I showed. The plume cools down as it moves up the stack to the outside, so its density isn’t constant all the way up. I let my fire die down a bit so it wouldn’t melt the thermometer (learned my lesson), and then took readings inside the oven and at the top of the stack. You can see my pizza oven flue gas is around 210 degrees at the top of the megastack, but it’s roughly 250 inside the oven. After the success of the megastack on my pizza oven, I tried the super-megastack with not much improvement in airflow: about 4 meters per second. The warm air just got too cool by the time it reached the top. And I suspect that frictional drag in the longer pipe contributed as well. So, really, depending on how well-insulated your stack is, our graph of height versus pressure actually ends up looking like this. And this can be its own engineering challenge. Maybe you’ve gotten back drafts in your fireplace at home because the fire wasn’t big or hot enough to create that large difference in pressure. You can see there are a lot of factors at play in designing these structures, but so far, all we’ve done is get the air moving faster. But that’s not the end goal. The purpose is to reduce the concentration of pollutants that we’re exposed to.
So engineers also have to consider what happens to the plume once it leaves the stack, and that’s where things really get complicated. In the US, we have National Ambient Air Quality Standards that regulate six so-called “criteria” pollutants that are relatively widespread: carbon monoxide, lead, nitrogen dioxide, ozone, particulates, and sulfur dioxide. We have hard limits on all these compounds with the intention that they are met at all times, in all locations, under all conditions. Unfortunately, that’s not always the case. You can go on EPA’s website and look at the so-called “non-attainment” areas for the various pollutants. But we do strive to meet the standards through a list of measures that is too long to go into here. And that is not an easy thing to do. Not every source of pollution comes out of a big stationary smokestack where it’s easy to measure and control. Cars, buses, planes, trucks, trains, and even rockets create lots of contaminants that vary by location, season, and time of day. And there are natural processes that contribute as well. Forests and soil microbes release volatile organic compounds that can lead to ozone formation. Volcanic eruptions and wildfires release carbon monoxide and sulfur dioxide. Even dust storms put particulates in the air that can travel across continents. And hopefully you’re seeing the challenge of designing a smoke stack. The primary controls like scrubbers and precipitators get most of the pollutants out, and hopefully all of the ones that can’t be dispersed. But what’s left over and released has to avoid pushing concentrations above the standards. That design has to work within the very complicated and varying context of air chemistry and atmospheric conditions that a designer has no control over. Let me show you a demo. I have a little fog generator set up in my garage with a small fan simulating the wind. This isn’t a great example because the airflow from the fan is pretty turbulent compared to natural winds. 
You occasionally get some fog at the surface, but you can see my plume mainly stays above the surface, dispersing as it moves with the wind. But watch what happens when I put a building downstream. The structure changes the airflow, creating a downwash effect and pulling my plume with it. Much more frequently you see the fog at the ground level downstream. And this is just a tiny example of how complex the behavior of these plumes can be. Luckily, there’s a whole field of engineering to characterize it. There are really just two major transport processes for air pollution. Advection describes how contaminants are carried along by the wind. Diffusion describes how those contaminants spread out through turbulence. Gravity also affects air pollution, but it doesn’t have a significant effect except on heavier-than-air particulates. With some math and simplifications of those two processes, you can do a reasonable job predicting the concentration of any pollutant at any point in space as it moves and disperses through the air. Here’s the basic equation for that, and if you’ll join me for the next 2 hours, we’ll derive this and learn the meaning of each term… Actually, it might take longer than that, so let’s just look at a graphic. You can see that as the plume gets carried along by the wind, it spreads out in what’s basically a bell curve, or Gaussian distribution, in the planes perpendicular to the wind direction. But even that is a bit too simplified to make any good decisions with, especially when the consequences of getting it wrong fall on public health. A big reason for that is atmospheric stability. And this can make things even more complicated, but I want to explain the basics, because the effect on plumes of gas can be really dramatic. You probably know that air expands as it moves upward; there’s less pressure as you go up because there is less air above you. And as any gas expands, it cools down.
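Before moving on to stability, that bell-curve picture can be written down. The standard Gaussian plume formula gives the concentration downwind of a stack as C = Q/(2π·u·σy·σz)·exp(−y²/2σy²)·[exp(−(z−H)²/2σz²) + exp(−(z+H)²/2σz²)]. Here’s a toy version; the power-law dispersion coefficients are placeholders I invented to stand in for the published Pasquill–Gifford curves, so treat the outputs as shapes, not predictions.

```python
import math

def plume_concentration(q_g_s, wind_m_s, x, y, z, stack_h):
    """Gaussian plume concentration (g/m^3) at a point downwind.

    x: downwind distance (m), y: crosswind offset (m), z: height (m),
    stack_h: effective release height (m). Ground reflection is modeled
    by a mirror-image source below the surface."""
    sigma_y = 0.08 * x ** 0.9   # crosswind spread, m (invented placeholder)
    sigma_z = 0.06 * x ** 0.85  # vertical spread, m (invented placeholder)
    crosswind = math.exp(-y ** 2 / (2 * sigma_y ** 2))
    vertical = (math.exp(-(z - stack_h) ** 2 / (2 * sigma_z ** 2))
                + math.exp(-(z + stack_h) ** 2 / (2 * sigma_z ** 2)))
    return q_g_s / (2 * math.pi * wind_m_s * sigma_y * sigma_z) * crosswind * vertical

# Ground-level, centerline concentrations downwind of a 50 m stack:
for x in (200, 500, 1000, 2000):
    print(x, plume_concentration(q_g_s=100, wind_m_s=5, x=x, y=0, z=0, stack_h=50))
```

Raising `stack_h` drops the ground-level numbers sharply, which is the whole argument for tall stacks compressed into one function.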
So there’s this relationship between height and temperature we call the adiabatic lapse rate. It’s about 10 degrees Celsius for every kilometer up or about 28 Fahrenheit for every mile up. But the actual atmosphere doesn’t always follow this relationship. For example, rising air parcels can cool more slowly than the surrounding air. This makes them warmer and less dense, so they keep rising, promoting vertical motion in a positive feedback loop called atmospheric instability. You can even get a temperature inversion where you have cooler air below warmer air, something that can happen in the early morning when the ground is cold. And as the environmental lapse rate varies from the adiabatic lapse rate, the plumes from stacks change. In stable conditions, you usually get a coning plume, similar to what our gaussian distribution from before predicts. In unstable conditions, you get a lot of mixing, which leads to a looping plume. And things really get weird for temperature inversions because they basically act like lids for vertical movement. You can get a fanning plume that rises to a point, but then only spreads horizontally. You can also get a trapping plume, where the air gets stuck between two inversions. You can have a lofting plume, where the air is above the inversion with stable conditions below and unstable conditions above. And worst of all, you can have a fumigating plume when there are unstable conditions below an inversion, trapping and mixing the plume toward the ground surface. And if you pay attention to smokestacks, fires, and other types of emissions, you can identify these different types of plumes pretty easily. Hopefully you’re seeing now how much goes into this. Engineers have to keep track of the advection and diffusion, wind speed and direction, atmospheric stability, the effects of terrain and buildings on all those factors, plus the pre-existing concentrations of all the criteria pollutants from other sources, which vary in time and place. 
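The stability piece of that bookkeeping comes down to comparing the environmental lapse rate against the adiabatic one. Here is a simplified classifier using the textbook rule of thumb, not a full stability scheme like the Pasquill classes:

```python
DRY_ADIABATIC = 9.8  # how fast a rising parcel cools, deg C per km

def stability(env_lapse_c_per_km):
    """Classify the atmosphere by comparing how fast the surrounding air
    cools with height against how fast a rising parcel cools."""
    if env_lapse_c_per_km < 0:
        return "inversion"  # warmer air above: a lid on vertical motion
    if env_lapse_c_per_km > DRY_ADIABATIC:
        return "unstable"   # parcels stay warmer than surroundings: looping plumes
    if env_lapse_c_per_km < DRY_ADIABATIC:
        return "stable"     # parcels end up cooler: coning or fanning plumes
    return "neutral"

print(stability(12))  # unstable
print(stability(6))   # stable
print(stability(-3))  # inversion
```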
All that to demonstrate that your new source of air pollution is not going to push the concentrations at any place, at any time, under any conditions, beyond what the standards allow. That’s a tall order, even for someone who loves Gaussian distributions. And often the answer to that tall order is an even taller smokestack. But to make sure, we use software. The EPA has developed models that can take all these factors into account to simulate, essentially, what would happen if you put a new source of pollution into the world and at what height. So why are smokestacks so tall? I hope you’ll agree with me that it turns out to be a pretty complicated question. And it’s important, right? These stacks are expensive to build and maintain. Those costs trickle down to us through the prices of the products and services we buy. They have a generally negative visual impact on the landscape. And they have a lot of other engineering challenges too, like resonance in the wind. And on the other hand, we have public health, arguably one of the most critical design criteria that can exist for an engineer. It’s really important to get this right. I think our air quality regulations do a lot to make sure we strike a good balance here. There are even rules limiting how much credit you can get for building a stack higher for greater dispersion to make sure that we’re not using excessively tall stacks in lieu of more effective, but often more expensive, emission controls and strategies. In a perfect world, none of the materials or industrial processes that we rely on would generate concentrated plumes of hazardous gases. We don’t live in that perfect world, but we are pretty fortunate that, at least in many places on Earth, air quality is something we don’t have to think too much about. And to thank for it, we have a relatively small industry of environmental professionals who do think about it, a whole lot.
You know, for a lot of people, this is their whole career; what they ponder from 9-5 every day. Something most of us would rather keep out of mind, they face it head-on, developing engineering theories, professional consensus, sensible regulations, modeling software, and more - just so we can breathe easy.

The Most Implausible Tunneling Method

[Note that this article is a transcript of the video embedded above.] The original plan to get I-95 over the Baltimore Harbor was a double-deck bridge from Fort McHenry to Lazaretto Point. The problem with the plan was this: the bridge would have to be extremely high so that large ships could pass underneath, dwarfing and overshadowing one of the US’s most important historical landmarks. Fort McHenry famously repelled a massive barrage and attack from the British Navy in the War of 1812, and inspired what would later become the national anthem. An ugly bridge would detract from its character, and a beautiful one would compete with it. So they took the high road by building a low road and decided to go underneath the harbor instead. Rather than bore a tunnel through the soil and rock below like the Channel Tunnel, the entire thing was prefabricated in sections and installed from the water surface above - a construction technique called immersed tube tunneling. This seems kind of simple at first, but the more you think about it, the more you realize how complicated it actually is to fabricate tunnel sections the length of a city block, move them into place, and attach them together so watertight and safe that, eventually, you can drive or take a train from one side to the other. Immersed tube construction makes tunneling less like drilling a hole and more like docking a spacecraft. Materials and practices vary across the world, but I want to try and show you, at least in a general sense, how this works. I’m Grady, and this is Practical Engineering. One of the big problems with bridges over navigable waterways is that they have to be so tall. Building high up isn’t necessarily the challenge; it’s getting up and back down. There are limits to how steep a road can be for comfort, safety, and efficiency, and railroads usually have even stricter constraints on grade.
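Those grade limits translate directly into approach length: the height you need to climb divided by the grade you’re allowed. The clearance and grade figures below are hypothetical round numbers, not from any particular crossing.

```python
def approach_length_m(deck_height_m, max_grade_percent):
    """Horizontal run needed to climb to deck height at a given grade."""
    return deck_height_m / (max_grade_percent / 100)

# A 60 m clearance at a typical 4% highway grade vs a 1.5% rail grade:
print(approach_length_m(60, 4))    # ~1500 m of approach on each side
print(approach_length_m(60, 1.5))  # ~4000 m: why railroads feel it even more
```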
That means the approaches to high bridges have to be really long, increasing costs and, in dense cities, taking up more valuable space. This is one of the ways that building a tunnel can be a better option: tunnels greatly reduce the amount of land at the surface needed for approaches. But traditional tunnels built using boring have to be installed somewhat deep into the ground, maintaining significant earth between the roof of the tunnel and the water for stability and safety. Since they’re installed from above, immersed tube tunnels don’t have the same problem. It’s basically a way to get the shortest tunnel possible for a given location, which often means the cheapest tunnel too. That’s a big deal, because tunnels are just about the most expensive way to get from point A to point B. Anything you can do to reduce their size goes a long way. And there are other advantages too. Tunnel boring machines make one shape: a circle. It’s not the best shape for a tunnel, in a lot of ways. Often there’s underutilized space at the top and bottom - excavation you had to perform because of the machinery that mostly just goes to waste. Immersed tubes can be just about any shape you need, making them ideal for wider tunnels like combined road and rail routes where a circular cross-section isn’t a good fit. One of the other benefits of immersed tubes is that most of the construction happens on dry land. I probably don’t have to say this, but building stuff while underground or underwater is complex and difficult work. It requires specialty equipment, added safety measures, and a lot of extra expense. Immersed tube sections are built in dry docks or at a shipyard where it's much easier to deliver materials and accomplish the bulk of the actual construction work. Once tunnel sections are fabricated, they have to be moved into place, and I think this is pretty clever. These sections can be enormous - upwards of 650 feet or 200 meters long. But they’re still mostly air.
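Whether a section can float is just an average-density check: with bulkheads sealing in the air, it floats if its total mass divided by its enclosed volume comes in under the density of the water. The dimensions and masses here are hypothetical round numbers, not from a real project.

```python
SEAWATER_DENSITY = 1025.0  # kg/m^3

def floats(total_mass_kg, enclosed_volume_m3):
    """True if a sealed tube section's average density is below the water's."""
    return total_mass_kg / enclosed_volume_m3 < SEAWATER_DENSITY

# Hypothetical 100 m long, 40 m wide, 10 m tall section:
volume = 100 * 40 * 10               # 40,000 m^3 of enclosed volume
print(floats(38_000_000, volume))    # 950 kg/m^3 average: tows like a barge
print(floats(42_000_000, volume))    # flood the ballast tanks and it sinks
```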
So if you put a bulkhead on either side to trap that air inside, they float. You can just flood the dry dock, hook up some tugboats, and tow them out like a massive barge. Interestingly, the transportation method means that the tunnel segments have to be designed to work as a watercraft first. The weight, buoyancy, and balance of each section are engineered to keep them stable in the water and avoid tipping or rolling before they have to be stable as a structure. Once in place, a tunnel segment is handed over to the apparatus that will set it into place. In most cases, this is a catamaran-style behemoth called a lay barge. Two working platforms are connected by girders, creating a huge floating gantry crane. Internal tanks are filled with water to act as ballast, allowing the segment to sink. But when it gets to the bottom, it doesn’t just sit on the sea or channel floor below. And this is another benefit of immersed tube construction. Especially in navigable waterways, you need to protect a tunnel from damage from strong currents, curious sea life, and ship anchors. So most immersed tube tunnels sit in a shallow trench, excavated using a clamshell or suction dredger. Most waterways have a thick layer of soft sediment at the surface - not exactly ideal as a foundation. This is another reason most boring machines have to be in deeper material. Drilling through soft sediment is prone to problems. Imagine using a power drill to make a nice, clean hole through pudding. But, at least in part due to being full of buoyant air, immersed tubes aren’t that heavy; in fact, in most cases, they’re lighter than the soil that was there in the first place, so the soft sediment really isn’t a problem. You don’t need a complicated foundation. In many cases, it’s just a layer of rock or gravel placed at the bottom of the trench, usually using a fall pipe (like a big garden hose for gravel) to control the location. 
This layer is then carefully leveled using a steel screed that is dragged over the top like an underwater bulldozer. Even in deep water, the process can achieve a remarkably accurate surface level for the tunnel segments to rest on. The lowering process is the most delicate and important part of construction. The margins are tight because any type of misalignment may make it impossible for the segment to seal against its neighbor. Normally, you’d really want to take your time with this kind of thing, but here, the work usually has to happen in a narrow window to avoid weather, tides, and disruption to ship traffic. The tunnel section is fitted with rubber seals around its face, creating a gasket. Sometimes, the segment will also have a surveying tower that pokes above the water surface, allowing for measurements and fine adjustments to be made as it’s set into place. In some cases, the lowering equipment can also nudge the segment against its neighbor. In other cases, hydraulic jacks are used to pull the segments together. Divers or remotely operated submersibles can hook up the jacks. Or couplers, just like those used on freight trains, can do it without any manual underwater intervention. The jacks extend to couple the free segment to the one already installed, then retract to pull them together, compressing the gasket and sealing the area between the two bulkheads. This joint is the most important part of an immersed tunnel design. It has to be installed blindly and accommodate small movements from temperature changes, settlement, and changes in pressure as water levels go up and down. The gasket provides the initial seal, but there’s more to it. Once in place, valves are opened in the bulkheads to drain the water between them. That actually creates a massive pressure difference between one side of the segment and the other. 
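How big is that push? It’s just pressure times area. A rough sketch, taking the hydrostatic pressure at the centroid of the end face and multiplying by the face area; the depth and dimensions are hypothetical:

```python
SEAWATER_DENSITY = 1025.0  # kg/m^3
G = 9.81                   # gravitational acceleration, m/s^2

def joint_squeeze_mn(centroid_depth_m, face_width_m, face_height_m):
    """Net force (MN) pressing a segment onto its neighbor once the joint is
    drained: full hydrostatic pressure on one end face, air on the other."""
    pressure_pa = SEAWATER_DENSITY * G * centroid_depth_m
    area_m2 = face_width_m * face_height_m
    return pressure_pa * area_m2 / 1e6

# A 30 m x 10 m end face with its centroid 20 m below the surface:
print(joint_squeeze_mn(20, 30, 10))  # ~60 MN, thousands of tonnes of squeeze
```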
Hydrostatic force from the water pushes against the end of the tunnel, putting it in even firmer contact with its neighbor and creating a stronger seal. Once in its final place, the segment can be backfilled. The tunnel segment connection is not like a pipe flange, where the joints are securely bolted together, completely restraining any movement. The joints on immersed tunnels have some freedom to move. Of course, there is a restraint for axial compression since the segments butt up against each other. In addition, keys or dowels are usually installed along the joint so that shear forces can transfer between segments, keeping the ends from shifting during settlement or small sideways movements. However, the joints aren’t designed to transfer bending forces, called moments. And there’s rarely much mechanical restraint to axial tension that might pull one joint away from the other. So you can see why the backfill is so important. It locks each segment into place. In fact, the first layer of backfill is called locking fill for that exact reason. I don’t think they make underwater roller compactors, and you wouldn’t want strong vibrations disturbing the placement of the tunnel segments anyway. So this material is made from angular rock that self-compacts and is placed using fall pipes in careful layers to secure each segment without shifting or disturbing it. After that, general backfill - maybe even the original material if it wasn’t contaminated - can be used in the rest of the trench, and then a layer is placed over the top of everything to protect the backfill and tunnel against currents caused by ships and tides. Sometimes this top layer includes bands of large rock meant to release a ship’s anchor from the bottom, keeping it from digging in and damaging the tunnel. Once a tunnel segment is secured in place, the bulkhead in the previous segment can be removed from the inside, allowing access inside the joint.
The usual requirement is that access is only allowed when there are two or more bulkheads between workers and the water outside. A second seal, called an omega seal (because of its shape), then gets installed around the perimeter of the joint. And the process keeps going, adding segments to the tunnel until it’s a continuous, open path from one end to the other. When it reaches that point, all the other normal tunnel stuff can be installed, like roadways, railways, lights, ventilation, drainage, and pumps. By the time it’s ready to travel through, there’s really no obvious sign from inside that immersed tube tunnels are any different from those built using other methods. This is a simplification, of course. Every one of these steps is immensely complicated, unique to each jobsite, and can take weeks, months, or even years to complete. And as impressive as the process is, it’s not without its downsides. The biggest one is damage to the sea or river floor during construction. Where boring causes little disturbance at the surface, immersed tube construction requires a lot of dredging. That can disrupt and damage important habitat for wildlife. It also kicks up a lot of sediment into suspension, clouding the water and potentially releasing buried contaminants that were laid down back when environmental laws were less strict. Some of these impacts can be mitigated: Sealed clamshell buckets reduce turbidity and mobilization of contaminated sediment. And construction activities can be scheduled to avoid sensitive periods like migration of important species. But some level of disturbance is inevitable and has to be weighed against the benefits of the project. Despite the challenges, around 150 of these tunnels have been built around the globe.
Some of the most famous include the Øresund Link between Denmark and Sweden, the Busan-Geoje tunnel in South Korea, the Marmaray tunnel crossing the Bosphorus in Turkey, of course, the Fort McHenry tunnel in Baltimore I mentioned earlier, and the BART Transbay Tube between Oakland and San Francisco. And some of the most impressive projects are under construction now, including the Fehmarn Belt between Denmark and Germany, which will be the world’s longest immersed tunnel. My friend Fred produced a really nice documentary about that project on The B1M channel if you want to learn more about it, and the project team graciously shared a lot of very cool clips used in this video too. There’s something about immersed tube tunnels that I can’t quite get over. At a glance, it’s dead simple - basically like assembling lego blocks. But the reality is that the process is so complicated and intricate, more akin to building a moon base. Giant concrete and steel segments floated like ships, carefully sunk into enormous trenches, precisely maneuvered for a perfect fit while completely submerged in sometimes high-traffic areas of the sea, with tides, currents, wildlife, and any number of unexpected marine issues that could pop up. And then you just drive through it like it’s any old section of highway. I love that stuff.

When Abandoned Mines Collapse

[Note that this article is a transcript of the video embedded above.] In December of 2024, a huge sinkhole opened up on I-80 near Wharton, New Jersey, creating massive traffic delays as crews worked to figure out what happened and get it fixed. Since then, it happened again in February 2025 and then again in March. Each time, the highway had to be shut down, creating a nightmare for commuters who had to find alternate routes. And it’s a nightmare for the DOT, too, trying to make sure this highway is safe to drive on despite it literally collapsing into the earth. From what we know so far, this is not a natural phenomenon, but one that’s human-made. It looks like all these issues were set in motion more than a century ago when the area had numerous underground iron mines. This is a really complex issue that causes problems around the world, and I built a little model mine in my garage to show you why it’s such a big deal. I’m Grady and this is Practical Engineering. We’ve been extracting material and minerals from the earth since way before anyone was writing things down. It’s probably safe to say that things started at the surface. You notice something shiny or differently colored on the side of a hill or cliff and you take it out. Over time, we built up knowledge about what materials were valuable, where they existed, and how to efficiently extract them from the earth. But, of course, there’s only so much earth at the surface. Eventually, you have to start digging. Maybe you follow a vein of gold, silver, copper, coal or sulfur down below the surface. And things start to get more complicated because now you’re in a hole. And holes are kind of dangerous. They’re dark, they fill with water, they can collapse, and they collect dangerous gases. So, in many cases, even today, it makes sense to remove the overburden - the soil and rock above the mineral or material you’re after. Mining on the surface has a lot of advantages when it comes to cost and safety. 
But there are situations where surface mining isn’t practical. Removing overburden is expensive, and it gets more expensive the deeper you go. It also has environmental impacts like habitat destruction and pollution of air and water. So, as technology, safety, and our understanding of soil and rock mechanics grew, so did our ability to go straight to the source and extract minerals underground. One of the major materials that drove the move to underground mining was coal. It’s usually found in horizontal formations called seams that formed when vast volumes of Paleozoic plants were buried and then crushed and heated over geologic time. At the start of the Industrial Revolution, coal quickly became a primary source of energy for steam engines, steel refining, and electricity generation. Those coal seams vary in thickness, and they vary in depth below the surface too, so many early coal mines were underground. In the early days of underground mining, there was not a lot of foresight. Some might argue that’s still true, but it was a lot more so a couple hundred years ago. Coal mining companies weren’t creating detailed maps of their mines, and even if they did, there was no central archive to send them to. And they just weren’t that concerned about the long-term stability of the mines once the resources had been extracted. All that mattered was getting coal out of the ground. Mining companies came and went, dissolved or were acquired, and over time, a lot of information about where mines existed and their condition was just lost. And even though many mines were in rural areas, far away from major population centers, some weren’t, and some of those rural areas became major population centers without any knowledge about what had happened underneath them decades ago. An issue that compounds the problem of mine subsidence is that in a lot of places, property ownership is split into two pieces: surface rights and mineral rights.
And those rights can be owned by different people. So if you’re a homeowner, you may own the surface rights to your land, while a company owns the right to drill or mine under your property. That doesn’t give them the right to damage your property, but it does make things more complicated since you don’t always have a say in what’s happening beneath the surface. There are myriad ways to build and operate underground mines, but especially for soft rock mining, like coal, the predominant method for decades was called “room and pillar”. This is exactly what it sounds like. You excavate the ore, bringing material to the surface. But you leave columns to support the roof. The size, shape, and spacing of columns are dictated by the strength of the material. This is really important because a mine like this has major fixed costs: exploration, planning, access, ventilation, and haulage. It’s important to extract as much as possible, and every column you leave supporting the roof is valuable material you can’t recover. So, there’s often not a lot of margin in these pillars. They’re as small as the company thought they could get away with before they were finished mining. I built a little room and pillar mine in my garage. I’ll be the first to admit that this little model is not a rigorous reproduction of an actual geologic formation. My coal seam is just made of cardboard, and the bright colors are just for fun. But, I’m hoping this can help illustrate the challenges associated with this type of mine. I’ve got a little rainfall simulator set up, because water plays a big role in these processes. This first rainfall isn’t necessarily representative of real life, since it’s really just compacting the loose sand. But it does give a nice image of how subsidence works in general. You can see the surface of the ground sinking as the sand compacts into place. But you can also see that as the water reaches the mine, things start to deform. In a real mine, this is true, too. 
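The pillar-sizing tradeoff described above can be put in rough numbers with tributary-area theory, a standard first-pass check in which each pillar is assumed to carry the full weight of the rock above itself plus half of each surrounding opening. This is only a sketch; the depth, unit weight, and dimensions below are made-up illustration values, not design numbers.

```python
# Tributary-area estimate of pillar stress in a room-and-pillar mine.
# Illustrative values only -- real designs use site-specific data.

def pillar_stress(depth_m, unit_weight_kn_m3, pillar_w, opening_w):
    """Average vertical stress on a square pillar, tributary-area method (kPa)."""
    overburden = unit_weight_kn_m3 * depth_m      # vertical stress before mining, kPa
    tributary = (pillar_w + opening_w) ** 2       # area each pillar must carry, m^2
    pillar_area = pillar_w ** 2
    return overburden * tributary / pillar_area

def extraction_ratio(pillar_w, opening_w):
    """Fraction of the seam removed (higher = more coal, smaller pillars)."""
    return 1 - pillar_w ** 2 / (pillar_w + opening_w) ** 2

stress = pillar_stress(depth_m=100, unit_weight_kn_m3=25, pillar_w=10, opening_w=6)
ratio = extraction_ratio(10, 6)
print(f"pillar stress ≈ {stress:.0f} kPa, extraction ratio ≈ {ratio:.0%}")
# → pillar stress ≈ 6400 kPa, extraction ratio ≈ 61%
```

Note the squeeze: shrinking the pillars raises the extraction ratio (more coal recovered) but concentrates more stress on each remaining pillar, which is exactly the margin that old mining companies shaved too thin.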
Stresses in the surrounding soil and rock redistribute over time from long-term movements, relaxation of stresses that were already built up in the materials before extraction, and from water. I ran this model for an entire day, turning the rainfall on and off to simulate a somewhat natural progression of time in the subsurface. By the end of the day, the mine hadn’t collapsed, but it was looking a great deal less stable than when it started. And that’s one big thing you can learn from this model - in a lot of cases, these issues aren’t linearly progressive. They can happen in fits and starts, like this small leak in the roof of the mine. You get a little bit of erosion of soil, but eventually, enough sand builds up that it kind of heals itself, and, for a while, you can’t see any evidence of any of it at the surface. The geology essentially absorbed the sinkhole by redistributing materials and stresses, so there’s no obvious sign at the surface that anything wayward is happening below. In the US, there were very few regulations on mining until the late 19th century, and even those focused primarily on the safety of the workers. There just wasn’t that much concern about long-term stability. So as soon as material was extracted, mines were abandoned. The already iffy columns were just left alone, and no one wasted resources on additional supports or shoring. They just walked away. One thing that happens when mines are abandoned is that they flood. Without the need to work inside, the companies stop pumping out the water. I can simulate this on my model by just plugging up the drain. In a real soft rock mine, there can be minerals like gypsum and limestone that are soluble in water. Repeated cycles of drying and wetting can slowly dissolve them away. Water can also soften certain materials and soils, reducing their mechanical strength and their ability to withstand heavy loads, just like my cardboard model. And then, of course, water simply causes erosion.
It can literally carry soil particles with it, again, causing voids and redistribution of stresses in the subsurface. This is footage from an old video I did demonstrating how sinkholes can form. The ways that mine subsidence propagates to the surface can vary a lot, based on the geology and depth of the mine. For collapses near the surface, you often see well-defined sinkholes where the soil directly above the mine simply falls into the void. And this is usually a sudden phenomenon. I flooded and drained my little mine a few times to demonstrate this. Accidentally flooded my little town a few times in the process, but that’s okay. You can see in my model, after flooding the mine and draining it down, there was a partial failure in the roof and a pile of sand toward the back caved in. And on the surface, you see just a small sinkhole. In 2024, a huge hole opened right in the center of a sports complex in Alton, Illinois. It was quickly determined that part of an active underground aggregate mine below the park had collapsed, leading to the sinkhole. It’s pretty characteristic of these issues. You don’t know where they’re going to happen, and you don’t know how the surface soils are going to react to what’s happening underneath. Subsidence can also look like a generalized and broader sinking and settling over a large area. You can see in my model that most of the surface still looks pretty flat, despite the fact that it started here and is now down here as the mine supports have softened and deformed. This can also be the case when mines are deeper in the ground. Even if the collapse is sudden, the subsidence is less dramatic because the geology can shift and move to redistribute the stresses. And the subsidence happens more slowly as the overburden settles into a new configuration. In all cases, the subsidence can extend laterally from the mine, so impacted areas aren’t always directly above. The deeper the mine, the wider the subsidence can be. 
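That last point, that a deeper mine spreads subsidence over a wider area, is often estimated with an “angle of draw”: the angle, measured from vertical at the edge of the workings, within which surface movement can appear. A rough sketch of the geometry; the 30-degree value is a commonly cited ballpark for coal measures, not a site-specific number:

```python
import math

def subsidence_half_width(depth_m, draw_angle_deg):
    """Horizontal distance the subsidence trough can extend beyond the mine edge,
    using the angle-of-draw rule of thumb (angle measured from vertical)."""
    return depth_m * math.tan(math.radians(draw_angle_deg))

# Illustrative depths; deeper workings -> wider (but gentler) trough.
for depth in (50, 100, 200):
    reach = subsidence_half_width(depth, 30)
    print(f"mine at {depth} m depth -> trough extends ~{reach:.0f} m past the workings")
```

This is why damage can show up well outside the footprint of the mine itself, on properties whose owners had no idea anything was ever excavated nearby.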
I ran my little mine demo for quite a few cycles of wet and dry just to see how bad things would get. And I admit I used a little percussion at the end to speed things along. Let’s say this is a simulation of an earthquake on an abandoned mine. You can see that by the end of it, this thing has basically collapsed. And take a look at the surface now. You have some defined sinkholes for sure. And you also have just generalized subsidence - sloped and wavy areas that were once level. And you can imagine the problems this can cause. Structures can easily be damaged by differential settlement. Pipes break. Foundations shift and crack. Even water can drain differently than before, causing ponding and even changing the course of rivers and streams for large areas. And even if there are no structures, subsidence can ruin high-value farmland, mess up roads, disrupt habitat, and more. In many cases, the company that caused all the damage is long gone. Essentially, they set a ticking time bomb deep below the ground with no one knowing if or when it would go off. There’s no one to hold accountable for it, and there’s very little recourse for property owners. Typical property insurance specifically excludes damage from mine subsidence. So, in some places where this is a real threat, government-subsidized insurance programs have been put in place. Eight states in the US, those where coal mining was most extensive, have insurance pools set up. In a few of those states, carrying the insurance is a requirement of owning property. The federal government in the US also collects a fee from coal mines that goes into a fund that helps cover reclamation costs of mines abandoned before 1977, when the law went into effect. That federal mining act also required modern mines to use methods to prevent subsidence, or control its effects, because this isn’t just a problem with historic abandoned mines.
Some modern underground soft rock mining doesn’t use the room and pillar method but instead a process called longwall mining. Like everything in mining, there are multiple ways to do it. But here’s the basic method: Hydraulic jacks support the roof of the mine in a long line. A machine called a shearer travels along the face of the seam with cutting drums. The cut coal falls onto a conveyor and is transported to the surface. The roof supports move forward into the newly created cavity, intentionally allowing the roof behind them to collapse. It’s an incredibly efficient form of mining, and you get to take the whole seam, rather than leaving pillars behind to support the roof. But, obviously, in this method, subsidence at the surface is practically inevitable. Minimizing the harm that subsidence creates starts with predicting its extent and magnitude. And, just looking at my model, I think you can guess that this isn’t a very easy problem to solve. Engineers use a mix of empirical information, like data from similar past mining operations, geotechnical data, simplified relationships, and in some cases detailed numerical modeling that accounts for geologic and water movement over time. But you don’t just have to predict it. You also have to measure it to see if your predictions were right. So mining companies use instruments like inclinometers and extensometers above underground mines to track how they affect the surface. I have a whole video about that kind of instrumentation if you want to learn more after this. The last part of that is reclamation - to repair or mitigate the damage that’s been done. And this can vary so much depending on where the mine is, what’s above it, and how much subsidence occurs. It can be as simple as filling and grading land that has subsided all the way to extensive structural retrofits to buildings above a mine before extraction even starts.
Sinkholes are often repaired by backfilling with layers of different-sized materials, from large at the bottom to small at the top. That creates a filter to keep soil from continuing to erode downward into the void. Larger voids can be filled with grout or even polyurethane foam to stabilize the ground above, reducing the chance for a future collapse. I know coal - and mining in general - can be a sensitive topic. Most of us don’t have a lot of exposure to everything that goes into obtaining the raw resources that make modern life possible. And the things we do see and hear are usually bad things like negative environmental impacts or subsidence. But I really think the story of subsidence isn’t just one of “mining is bad” but really “mining used to be bad, and now it’s a lot better, but there are still challenges to overcome.” I guess that’s the story of so many things in engineering - addressing the difficulties we used to just ignore. And this video isn’t meant to fearmonger. This is a real issue that causes real damage today, but it’s also an issue that a lot of people put a great deal of thought, effort, and ultimately resources into so that we can strike a balance between protecting property and the environment and obtaining the resources that we all depend on.

When Kitty Litter Caused a Nuclear Catastrophe

[Note that this article is a transcript of the video embedded above.] Late in the night of Valentine’s Day 2014, air monitors at an underground nuclear waste repository outside Carlsbad, New Mexico, detected the release of radioactive elements, including americium and plutonium, into the environment. Ventilation fans automatically switched on to exhaust contaminated air up through a shaft, through filters, and out to the environment above ground. When filters were checked the following morning, technicians found that they contained transuranic materials, highly radioactive particles that are not naturally found on Earth. In other words, a container of nuclear waste in the repository had been breached. The site was shut down and employees sent home, but it would be more than a year before the bizarre cause of the incident was released. I’m Grady, and this is Practical Engineering. The dangers of the development of nuclear weapons aren’t limited to mushroom clouds and doomsday scenarios. The process of creating the exotic, transuranic materials necessary to build thermonuclear weapons creates a lot of waste, which itself is uniquely hazardous. Clothes, tools, and materials used in the process may stay dangerously radioactive for thousands of years. So, a huge part of working with nuclear materials is planning how to manage waste. I try not to make predictions about the future, but I think it’s safe to say that the world will probably be a bit different in 10,000 years. More likely, it will be unimaginably different. So, ethical disposal of nuclear waste means not only protecting ourselves but also protecting whoever is here long after we are ancient memories or even forgotten altogether. It’s an engineering challenge pretty much unlike any other, and it demands some creative solutions. The Waste Isolation Pilot Plant, or WIPP, was built in the 1980s in the desert outside Carlsbad, New Mexico, a site selected for a very specific reason: salt. 
One of the most critical jobs for long-term permanent storage is to keep radioactive waste from entering groundwater and dispersing into the environment. So, WIPP was built inside an enormous and geologically stable formation of salt, roughly 2000 feet or 600 meters below the surface. The presence of ancient salt is an indication that groundwater doesn’t reach this area since the water would dissolve it. And the salt has another beneficial behavior: it’s mobile. Over time, the walls and ceilings of mined-out salt tend to act in a plastic manner, slowly creeping inwards to fill the void. This is ideal in the long term because it will ultimately entomb the waste at WIPP in a permanent manner. It does make things more complicated in the meantime, though, since they have to constantly work to keep the underground open during operation. This process, called “ground control,” involves techniques like drilling and installing roof bolts in epoxy to hold up the ceilings. I have an older video on that process if you want to learn more after this. The challenge in this case is that, eventually, we want the roof bolts to fail, allowing a gentle collapse of salt to fill the void, because that slow closure does an important job. The salt, and just being deep underground in general, acts to shield the environment from radiation. In fact, a deep salt mine is such a well-shielded area that there’s an experimental laboratory located in WIPP on the other side of the underground from the waste panels, where various universities do cutting-edge physics experiments precisely because of the low radiation levels. The thousands of feet of material above the lab shield it from cosmic and solar radiation, and the salt has much lower levels of inherent radioactivity than other kinds of rock. Imagine that: a low-radiation lab inside a nuclear waste dump. Four shafts extend from the surface into the underground repository for moving people, waste, and air into and out of the facility.
Room-and-pillar mining is used to excavate horizontal drifts or panels where waste is stored. Investigators were eventually able to re-enter the repository and search for the cause of the breach. They found the source in Panel 7, Room 7, the area of active disposal at the time. Pressure and heat had burst a drum, starting a fire, damaging nearby containers, and ultimately releasing radioactive materials into the air. On activation of the radiation alarm, the underground ventilation system automatically switched to filtration mode, sending air through massive HEPA filters. Interestingly, although they’re a pretty common consumer good now, High Efficiency Particulate Air, or HEPA, filters actually got their start during the Manhattan Project specifically to filter radionuclides from the air. The ventilation system at WIPP performed well, although there was some leakage past the filters, allowing a small percentage of radioactive material to bypass them and escape directly into the atmosphere at the surface. Twenty-one workers tested positive for low-level exposure to radioactive contamination but, thankfully, were unharmed. Both WIPP and independent testing organizations confirmed that the detected levels were very low, that the particles did not spread far, and that they were extremely unlikely to result in radiation-related health effects to workers or the public. Thankfully, the safety features at the facility worked, but it would take investigators much longer to understand what went wrong in the first place, and that involved tracing that waste barrel back to its source. It all started at the Los Alamos National Laboratory, one of the labs created as part of the 1940s Manhattan Project that first developed atomic bombs in the desert of New Mexico. The 1970s brought a renewed interest in cleaning up various Department of Energy sites. Los Alamos was tasked with recovering plutonium from residue materials left over from previous wartime and research efforts.
That process involved using nitric acid to separate plutonium from uranium. Once plutonium is extracted, you’re left with nitrate solutions that get neutralized or evaporated, creating a solid waste stream that contains residual radioactive isotopes. In 1985, a volume of this waste was placed in a lead-lined 55-gallon drum along with an absorbent to soak up any moisture and put into temporary storage at Los Alamos, where it sat for years. But in the summer of 2011, the Las Conchas wildfire threatened the Los Alamos facility, coming within just a few miles of the storage area. This actual fire lit a metaphorical fire under various officials, and wheels were set into motion to get the transuranic waste safely into a long-term storage facility. In other words, ship it down the road to WIPP. Transporting transuranic wastes on the road from one facility to another is quite an ordeal, even when they’re only going through the New Mexican desert. There are rules preventing the transportation of ignitable, corrosive, or reactive waste, and special casks are required to minimize the risk of radiological release in the unlikely event of a crash. WIPP also had rules about how waste can be packaged in order to be placed for long-term disposal called the Waste Acceptance Criteria, which included limits on free liquids. Los Alamos concluded that barrel didn’t meet the requirements and needed to be repackaged before shipping to WIPP. But, there were concerns about which absorbent to use. Los Alamos used various absorbent materials within waste barrels over the years to minimize the amount of moisture and free liquid inside. Any time you’re mixing nuclear waste with another material, you have to be sure there won’t be any unexpected reactions. 
The procedure for repackaging nitrate salts required that a superabsorbent polymer be used, similar to the beads I’ve used in some of my demos, but concerns about reactivity led to meetings and investigations about whether it was the right material for the job. Ultimately, Los Alamos and their contractors concluded that the materials were incompatible and decided to make a switch. In May 2012, Los Alamos published a white paper titled “Amount of Zeolite Required to Meet the Constraints Established by the EMRTC Report RF 10-13: Application of LANL Evaporator Nitrate Salts.” In other words, “How much kitty litter should be added to radioactive waste?” The answer was about 1.2 to 1, inorganic zeolite clay to nitrate salt waste, by volume. That guidance was then translated into the actual procedures that technicians would use to repackage the waste in gloveboxes at Los Alamos. But something got lost in translation. As far as investigators could determine, here’s what happened: In a meeting in May 2012, the manager responsible for glovebox operations took personal notes about this switch in materials. Those notes were sent in an email and eventually incorporated into the written procedures: “Ensure an organic absorbent is added to the waste material at a minimum of 1.5 absorbent to 1 part waste ratio.” Did you hear that? The white paper’s requirement to use an inorganic absorbent became “...an organic absorbent” in the procedures. We’ll never know where the confusion came from, but it could have been as simple as mishearing the word in the meeting. Nonetheless, that’s what the procedure became. Contractors at Los Alamos procured a large quantity of Swheat Scoop, an organic, wheat-based cat litter, and started using it to repackage the nitrate salt wastes. Our barrel, first packaged in 1985, was repackaged in December 2013 with the new kitty litter. It was tested and certified in January 2014, shipped to WIPP later that month, and placed underground. And then it blew up.
The unthinkable had happened; the wrong kind of kitty litter had caused a nuclear disaster. While the nitrates are relatively unreactive with the inorganic, mineral-based zeolite kitty litter that should have been used, the organic, carbon-based wheat material could undergo oxidation reactions with nitrate wastes. I think it’s also interesting to note here that the issue was a reaction totally unrelated to the presence of transuranic waste. It was a chemical reaction - not a nuclear reaction - that caused the problem. Ultimately, the direct cause of the incident was determined to be “an exothermic reaction of incompatible materials in LANL waste drum 68660 that led to thermal runaway, which resulted in over-pressurization of the drum, breach of the drum, and release of a portion of the drum’s contents (combustible gases, waste, and wheat-based absorbent) into the WIPP underground.” Of course, the root cause is deeper than that and has to do with systemic issues at Los Alamos and how they handled the repackaging of the material. The investigation report identified 12 contributing causes that, while they individually did not cause the accident, increased its likelihood or severity. These are written in a way that is pretty difficult for a non-DOE expert to parse: take a stab at digesting contributing cause number 5: “Failure of Los Alamos Field Office (NA-LA) and the National Transuranic (TRU) Program/Carlsbad Field Office (CBFO) to ensure that the CCP [that is, the Central Characterization Program] and LANS [that is, the contractor, Los Alamos National Security] complied with Resource Conservation and Recovery Act (RCRA) requirements in the WIPP Hazardous Waste Facility Permit (HWFP) and the LANL HWFP, as well as the WIPP Waste Acceptance Criteria (WAC).” Still, as bad as it all seems, it really could have been a lot worse.
In a sense, WIPP performed precisely how you’d want it to in such an event, and it’s a really good thing the barrel was in the underground when it burst. Had the same happened at Los Alamos or on the way to WIPP, things could have been much worse. Thankfully, none of the other barrels packaged in the same way experienced a thermal runaway, and they were later collected and sealed in larger containers. Regardless, the consequences of the “cat-astrophe” were severe and very expensive. The cleanup involved shutting down the WIPP facility for several years and entirely replacing the ventilation system. WIPP itself didn’t formally reopen until January of 2017, nearly three full years after the incident, with the cleanup costing about half a billion dollars. Today, WIPP remains controversial, not least because of shifting timelines and public communication. Early estimates once projected closure by 2024. Now, that date is sometime between 2050 and 2085. And events like this only add fuel to the fire. Setting aside broader debates on nuclear weapons themselves, the wastes these weapons generate are dangerous now, and they will remain dangerous for generations. WIPP has even explored ideas on how to mark the site post-closure, making sure that future generations clearly understand the enduring danger. Radioactive hazards persist long after languages and societies may have changed beyond recognition, making it essential but challenging to communicate clearly about risks. Sometimes, it’s easy to forget - amidst all the technical complexity and bureaucratic red tape that surrounds anything nuclear - that it’s just people doing the work. It’s almost unbelievable that we entrust ourselves - squishy, sometimes hapless bags of water, meat, and bones - to navigate protocols of such profound complexity needed to safely take advantage of radioactive materials. 
I don’t tell this story because I think we should be paralyzed by the idea of using nuclear materials - there are enormous benefits to be had in many areas of science, engineering, and medicine. But there are enormous costs as well, many of which we might not be aware of if we don’t make it a habit to read obscure government investigation reports. This event is a reminder that the extent of our vigilance has to match the permanence of the hazards we create.

Why Are Beach Holes So Deadly?

[Note that this article is a transcript of the video embedded above.] Even though it’s a favorite vacation destination, the beach is surprisingly dangerous. Consider the lifeguard: There aren’t that many recreational activities in our lives that have explicit staff whose only job is to keep an eye on us, make sure we stay safe, and rescue us if we get into trouble. There are just a lot of hazards on the beach. Heavy waves, rip currents, heat stress, sunburn, jellyfish stings, sharks, and even algae can threaten the safety of beachgoers. But there’s a whole other hazard, this one usually self-inflicted, that doesn’t often make the list of warnings, even though it takes, on average, 2-3 lives per year just in the United States. If you know me, you know I would never discourage the act of playing with soil and sand. It’s basically what I was put on this earth to do. But I do have one exception. Because just about every year, the news reports that someone was buried when a hole they dug collapsed on top of them. There’s no central database of sandhole collapse incidents, but from the numbers we do have, about twice as many people die this way as from shark attacks in the US. It might seem like common sense not to dig a big, unsupported hole at the beach and then go inside it, but sand has some really interesting geotechnical properties that can provide a false sense of security. So, let’s use some engineering and garage demonstrations to explain why. I’m Grady and this is Practical Engineering. In some ways, geotechnical engineering might as well be called slope engineering, because slopes are a huge part of what geotechnical engineers do. So many aspects of our built environment rely on the stability of sloped earth. Many dams are built from soil or rock fill using embankments. Roads, highways, and bridges rely on embankments to ascend or descend smoothly. Excavations for foundations, tunnels, and other structures have to be stable for the people working inside.
Mines carefully monitor slopes to make sure their workers are safe. Even protecting against natural hazards like landslides requires a strong understanding of geotechnical engineering. Because of all that, the science of slope stability is really deeply understood. There’s a well-developed professional consensus around the science of soil, how it behaves, and how to design around its limitations as a construction material. And I think a peek into that world will really help us understand this hazard of digging holes on the beach. Like many parts of engineering, analyzing the stability of a slope has two basic parts: the strengths and the loads. The job of a geotechnical engineer is to compare the two. The load, in this case, is kind of obvious: it’s just the weight of the soil itself. We can complicate that a bit by adding loads at the top of a slope, called surcharges, and no doubt surcharge loads have contributed to at least a few of these dangerous collapses from people standing at the edge of a hole. But for now, let’s keep it simple with just the soil’s own weight. On a flat surface, soils are generally stable. But when you introduce a slope, the weight of the soil above can create a shear failure. These failures often happen along a circular arc, because an arc minimizes the resisting forces in the soil while maximizing the driving forces. We can manually solve for the shear forces at any point in a soil mass, but that would be a fairly tedious engineering exercise, so most slope stability analyses use software. One of the simplest methods is just to let the software draw hundreds of circular arcs that represent failure planes, compute the stresses along each plane based on the weight of the soil, and then figure out if the strength of the soil is enough to withstand the stress. But what does it really mean for a soil to have strength? 
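The trial-surface search just described can be sketched in a few dozen lines. This is a toy version of the ordinary method of slices (one of the simplest limit-equilibrium techniques), with made-up soil properties and a crude grid search over trial circles; real programs use far more refined methods and site-specific data.

```python
import math

# Toy slope-stability search: try many circular failure surfaces and keep the
# lowest factor of safety, using the ordinary method of slices (Swedish circle).
# All soil properties and geometry are made-up illustration values.

GAMMA = 18.0              # unit weight of soil, kN/m^3
C = 5.0                   # cohesion, kPa
PHI = math.radians(30)    # friction angle
H, SLOPE = 5.0, 1.0       # 5 m tall slope rising at 45 degrees from the toe

def ground(x):
    """Ground surface: level at y=0 left of the toe, level at y=H past the crest."""
    return min(max(SLOPE * x, 0.0), H)

def factor_of_safety(xc, yc, r, n_slices=50):
    """Factor of safety along one trial circle centered at (xc, yc) with radius r."""
    # Find the span where the circle dips below the ground surface.
    xs = [xc - r + 2 * r * i / 1000 for i in range(1001)]
    cut = [x for x in xs if yc - math.sqrt(max(r*r - (x - xc)**2, 0.0)) < ground(x)]
    if len(cut) < 2:
        return None  # this circle doesn't cut into the slope at all
    x0, x1 = cut[0], cut[-1]
    dx = (x1 - x0) / n_slices
    driving = resisting = 0.0
    for i in range(n_slices):
        x = x0 + (i + 0.5) * dx
        y_base = yc - math.sqrt(max(r*r - (x - xc)**2, 0.0))
        height = ground(x) - y_base
        if height <= 0:
            continue  # arc pokes above ground here; no soil in this slice
        weight = GAMMA * height * dx                 # slice weight per meter of slope
        alpha = math.asin((x - xc) / r)              # inclination of the slice base
        base_len = dx / math.cos(alpha)
        driving += weight * math.sin(alpha)          # shear demand along the arc
        resisting += C * base_len + weight * math.cos(alpha) * math.tan(PHI)
    return resisting / driving if driving > 0 else None

# Crude grid search over circle centers and radii, like the "hundreds of arcs"
# real software would try (just far more finely).
trials = [factor_of_safety(xc, yc, r)
          for xc in range(-2, 6) for yc in range(6, 12) for r in range(6, 14)]
best = min(t for t in trials if t is not None)
print(f"critical factor of safety ≈ {best:.2f}")
```

The answer that matters is the worst case: if the lowest factor of safety over all trial surfaces drops below about 1, the strength can no longer keep up with the stress and the slope is expected to fail.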
If you can imagine a sample of soil floating in space, and you apply a shear stress, those particles are going to slide apart from each other in the direction of the stress. The amount of force required to do that is usually expressed as an angle, and I can show you why. You may have done the simple experiment in high school physics where you drag a block along a flat surface and measure the force required to overcome the friction. If you add weight, you increase the force between the surfaces, called the normal force, which creates additional friction. The same is true with soils. The harder you press the particles of soil together, the better they are at resisting a shear force. In a simplified force diagram, we can draw the normal force and the resulting friction, or shear strength. And the angle that the hypotenuse makes with the normal force is what we call the friction angle. Under certain conditions, it’s equal to the angle of repose, the steepest angle at which a soil will naturally stand. If I let sand pour out of this funnel onto the table, you can see that even as the pile gets higher, the angle of the slope of the sides never really changes. And this illustrates the complexity of slope stability really nicely. Gravity is what holds the particles together, creating friction, but it’s also what pulls them apart. The angle of repose is the dividing line between gravity’s stabilizing and destabilizing effects on the soil. But things get more complicated when you add water to the mix. Soil particles, like all things that take up space, have buoyancy. Just like lifting a weight under water is easier, soil particles effectively weigh less when they’re saturated, so there’s less friction between them. I can demonstrate this pretty easily by moving my angle of repose setup to a water tank. It’s a subtle difference, but the angle of repose has gone down underwater. 
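Before we get to the water, the friction-angle idea already lets us sketch the simplest stability check there is. This is a minimal illustration, with assumed numbers: it backs a friction angle out of the block-dragging experiment, then applies the classic infinite-slope result for dry, cohesionless sand, where the factor of safety is just tan(friction angle) over tan(slope angle). Real stability software searches hundreds of trial failure surfaces instead, but the comparison of strength to load is the same.

```python
import math

# Back out a friction angle from the block-dragging experiment:
# the ratio of drag force to normal force is the friction
# coefficient, and the friction angle is its arctangent.
normal_force = 10.0  # N, weight of the block (assumed)
drag_force = 6.5     # N, force needed to keep it sliding (assumed)
friction_angle = math.degrees(math.atan(drag_force / normal_force))

def factor_of_safety(slope_deg, friction_deg):
    """Infinite-slope stability for dry, cohesionless sand:
    available strength over driving stress. FS < 1 means failure."""
    return math.tan(math.radians(friction_deg)) / math.tan(math.radians(slope_deg))

# Sweep trial slope angles the way software sweeps trial failure surfaces.
for slope in (20, 30, 40, 50):
    fs = factor_of_safety(slope, friction_angle)
    print(f"{slope} deg slope: FS = {fs:.2f} ({'stable' if fs >= 1 else 'fails'})")
```

Notice that the factor of safety crosses 1.0 exactly when the slope angle equals the friction angle, which is why, for dry sand, the angle of repose in the funnel demonstration and the friction angle are the same number.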
That’s because the particles’ effective weight goes down, so the shear strength of the soil mass goes down too. And this doesn’t just happen under lakes and oceans. Soil holds water - I’ve covered a lot of topics on groundwater if you want to learn more. There’s this concept of the “water table,” below which the soil is saturated, and saturated soil behaves just like in my little demonstration. The water between the particles, called “pore water,” exerts pressure, pushing them away from one another and reducing the friction between them. Shear strength usually goes down for saturated soils. But if you’ve played with sand, you might be thinking: “This doesn’t really track with my intuition.” When you build a sand castle, the dry sand falls apart, and the wet sand holds together. So let’s dive a little deeper. Friction actually isn’t the only factor that contributes to shear strength in a soil. For example, I can try to shear this clay, and there’s some resistance there, even though there is no confining force pushing the particles together. In finer-grained soils like clay, the particles themselves have molecular-level attractions that make them, basically, sticky. Geotechnical engineers call this cohesion. And it’s where sand gets a little sneaky. Water pressure in the pores between particles can push them away from each other, but it can also do the opposite. In this demo, I have some dry sand in a container with a riser pipe connected to the side to show the water table. And I’ve dyed my water black to make it easier to see. When I pour the water into the riser, what do you think is going to happen? Will the water table in the soil be higher, lower, or exactly the same as the level in the riser? Let’s try it out. Pretty much right away, you can see what happens. The sand essentially sucks the water out of the riser, lifting it higher than the level outside the sand. 
If I let this settle out for a while, you can see that there’s a pretty big difference in levels, and this is largely due to capillary action. Just like in a paper towel, water wicks up into the sand against the force of gravity. This capillary action creates negative pressure within the soil (compared to the ambient air pressure). In other words, it pulls the particles against each other, increasing the strength of the soil. It essentially gives the sand cohesion: additional shear strength that doesn’t require any confining pressure. And again, if you’ve played with sand, you know there’s a sweet spot when it comes to water. Too dry, and it won’t hold together. Too wet, same thing. But if there’s just enough water, you get this strengthening effect. However, unlike the true cohesion in clay, that suction pressure can be temporary. And it’s not the only factor that makes sand tricky. The shear strength of sand also depends on how well-packed the particles are. Beach sand is usually densely packed because of the constant crashing waves. Let’s zoom in on that a bit. If the particles are packed together, they essentially lock together. You can see that shearing them apart isn’t just a sliding motion; it also involves a slight expansion in volume. Engineers call this dilatancy, and you don’t need a microscope to see it. In fact, you’ve probably noticed it walking around on the beach, especially where the water table is close to the surface. Even a small amount of movement causes the sand to expand, and it’s easy to see because the sand expands above the surface of the water. The practical result of this dilatant behavior is that sand gets stronger as it moves, but only up to a point. Once the sand expands enough that the particles are no longer interlocked, there’s a lot less friction between them. If you plot movement, called strain, against shear strength, you get a peak and then a sudden loss of strength. 
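One way to put rough numbers on that sweet spot is to treat the capillary suction as a bit of apparent cohesion and plug it into the classical estimate for the tallest vertical cut a soil can stand unsupported: Hc = (4c/γ)·tan(45° + φ/2). The cohesion and unit-weight values below are assumed, ballpark figures for damp beach sand, but the punchline holds either way: with a little suction, the wall of a hole stands on its own, and when the suction goes, the stable height of a vertical cut drops to zero.

```python
import math

def critical_cut_height(cohesion_pa, unit_weight, friction_deg):
    """Classical estimate of the tallest unsupported vertical cut:
    Hc = (4 * c / gamma) * tan(45 + phi/2). Zero cohesion -> zero height."""
    return (4.0 * cohesion_pa / unit_weight) * math.tan(
        math.radians(45.0 + friction_deg / 2.0))

GAMMA = 18000.0  # N/m^3, unit weight of damp beach sand (assumed)
PHI = 33.0       # degrees, friction angle (assumed)

# Damp sand: capillary suction supplies ~2 kPa of apparent cohesion (assumed).
damp = critical_cut_height(2000.0, GAMMA, PHI)
# Dry or fully saturated sand: the suction, and the cohesion, are gone.
dry = critical_cut_height(0.0, GAMMA, PHI)

print(f"with suction: walls can stand ~{damp:.1f} m; without it: {dry:.1f} m")
```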
Hopefully you’re starting to see how all this material science adds up to a real problem. The shear strength of a soil, basically its ability to avoid collapse, is not an inherent property: it depends on a lot of factors, it can change pretty quickly, and the behavior is not really intuitive. Most of us don’t have a ton of experience with excavations. That’s part of the reason it’s so fun to dig a hole at the beach in the first place. We just don’t get to excavate that much in our everyday lives. So, at least for a lot of us, it’s a natural instinct to do some recreational digging. You excavate a small hole. It’s fun. It’s interesting. The wet sand is holding up around the edges, so you dig deeper. Some people give up after the novelty wears off. Some get their friends or their kids involved to keep going. Eventually, the hole gets big enough that you have to get inside it to keep digging. With the suction pressure from the water and the shear strengthening through dilatancy, the walls have been holding the entire time, so there’s no reason to assume they won’t just keep holding. But inside the surrounding sand, things are changing. Sand is permeable, meaning water moves through it pretty freely. It doesn’t take a big change to upset the delicate balance of moisture that gives sand its stability. The tide could be going out, lowering the water table and drying out the soil at the surface. Alternatively, a wave or the rising tide could add water to the surface sand, reducing the suction pressure. At the same time, tiny movements within the slopes are strengthening the sand as it tries to dilate. But each little movement pushes toward that peak strength, after which the strength suddenly goes away. We call this a brittle failure because there’s little deformation to warn you that a collapse is coming. 
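That peak-then-drop behavior can be sketched numerically. The curve below is a toy model, an assumed piecewise shape rather than a fitted soil law, but the friction angles are typical published values for dense versus loosened sand, and the qualitative story matches real stress-strain curves: strength climbs as the grains try to dilate, peaks at small strain, then falls to a lower residual value once the grains disengage.

```python
import math

PHI_PEAK = 45.0       # degrees, peak friction angle of dense sand (assumed)
PHI_RESIDUAL = 33.0   # degrees, critical-state angle after dilation (assumed)
NORMAL_STRESS = 10e3  # Pa of confining stress for the example (assumed)

def shear_strength(strain_pct):
    """Mobilized shear strength (Pa) at a given shear strain (%).
    Toy piecewise curve: rise to a peak at 2% strain, brittle drop
    to a residual value by 6%, flat after that."""
    tau_peak = NORMAL_STRESS * math.tan(math.radians(PHI_PEAK))
    tau_res = NORMAL_STRESS * math.tan(math.radians(PHI_RESIDUAL))
    if strain_pct <= 2.0:                 # climbing toward the peak
        return tau_peak * strain_pct / 2.0
    if strain_pct <= 6.0:                 # sudden loss past the peak
        frac = (strain_pct - 2.0) / 4.0
        return tau_peak - frac * (tau_peak - tau_res)
    return tau_res                        # grains fully disengaged

for strain in (1, 2, 4, 8):
    print(f"{strain}% strain -> {shear_strength(strain) / 1e3:.1f} kPa")
```

The danger hides in the shape of that curve: every bit of strength gained on the way up to the peak is strength that vanishes all at once on the way down.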
It happens suddenly, and if you happen to be inside a deep hole when it does, you might be just fine, like our little friend here, but if a bigger section of the wall collapses, your chance of surviving is slim. Soil is heavy. The mineral grains in sand are about two-and-a-half times as dense as water. It just doesn’t take that much of it to trap a person. This is not just something that happens to people on vacation, by the way. Collapsing trenches and excavations are one of the most common causes of fatal construction incidents. In fact, if you live in a country with workplace health and safety laws, it’s pretty much guaranteed that those laws include rules about working in trenches and excavations. In the US, OSHA has a detailed set of guidelines on how to stay safe when working at the bottom of a hole, including how steep slopes can be depending on the type of soil, and the devices used to shore up an excavation to keep it from collapsing while people are inside. And in certain circumstances, where the risks get high enough or the excavation doesn’t fit neatly into the simplified categories, they require that a professional engineer be involved. So does all this mean that anyone who’s not an engineer just shouldn’t dig holes at the beach? If you know me, you know I would never agree with that. I don’t want to come off too earnest here, but we learn through interaction. Soil and rock mechanics are incredibly important to every part of the built environment, and I think everyone should have a chance to play with sand, to get muddy and dirty, to engage and connect and commune with the stuff on which everything gets built. So, by all means, dig holes at the beach. Just don’t dig them so deep. The typical recommendation I see is to avoid going in a hole deeper than your knees. That’s pretty conservative. If you have kids with you, it’s really not much at all. 
If you want to follow OSHA guidelines, you can go a little bigger: up to 20 feet (or 6 meters) deep, as long as you slope the sides of your hole at one-and-a-half horizontal to one vertical, or about 34 degrees above horizontal. You know, ultimately you have to decide what’s safe for you and your family. My point is that this doesn’t have to be a hazard if you use a little engineering prudence. And I hope understanding some of the sneaky behaviors of beach sand can help you delight in the primitive joy of digging a big hole without putting your life at risk in the process.
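For a sense of what that layback means in practice, here’s the arithmetic (the hole dimensions are assumed, just for illustration): a 1.5-horizontal-to-1-vertical slope works out to about 34 degrees, and sloping both sides of even a modest hole makes the top opening surprisingly wide.

```python
import math

# OSHA's "simple slope" layback for Type C soil (which loose sand is):
# 1.5 units horizontal for every 1 unit vertical.
h_to_v = 1.5
slope_deg = math.degrees(math.atan(1.0 / h_to_v))
print(f"slope angle: {slope_deg:.1f} degrees above horizontal")

depth = 1.0         # m deep hole (assumed for the example)
bottom_width = 0.8  # m of working room at the bottom (assumed)
top_width = bottom_width + 2 * h_to_v * depth  # layback on both sides
print(f"a {depth} m deep hole needs a top opening about {top_width:.1f} m across")
```

That footprint is the real cost of digging safely: a properly sloped hole is mostly slope.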


Why are Smokestacks So Tall?

[Note that this article is a transcript of the video embedded above.] “The big black stacks of the Illium Works of the Federal Apparatus Corporation spewed acid fumes and soot over the hundreds of men and women who were lined up before the red-brick employment office.” That’s the first line of one of my favorite short stories, written by Kurt Vonnegut in 1955. It paints a picture of a dystopian future that, thankfully, didn’t really come to be, in part because of those stacks. In some ways, air pollution is just a part of life. I’d love to live in a world where the systems, materials, and processes that make my life possible didn’t come with any emissions, but it’s just not the case... From the time that humans discovered fire, we’ve been methodically weighing the benefits of warmth, comfort, and cooking against the disadvantages of carbon monoxide exposure and particulate matter less than 2.5 microns in diameter… Maybe not in that exact framework, but basically, since the dawn of humanity, we’ve had to deal with smoke one way or another. Since we can’t accomplish much without putting unwanted stuff into the air, the next best thing is to manage how and where it happens to try and minimize its impact on public health. Of course, any time there’s a balancing act between technical issues, the engineers get involved, not so much to help decide where to draw the line, but to develop systems that can stay below it. And that’s where the smokestack comes in. Its function probably seems obvious; you might have a chimney in your house that does a similar job. But I want to give you a peek behind the curtain into the Illium Works of the Federal Apparatus Corporation of today and show you what goes into engineering one of these stacks at a large industrial facility. I’m Grady, and this is Practical Engineering. We put a lot of bad stuff in the air, and in a lot of different ways. 
There are roughly 200 regulated hazardous air pollutants in the United States, many with names I can barely pronounce. In many cases, the industries that would release these contaminants are required to deal with them at the source. A wide range of control technologies are put into place to clean dangerous pollutants from the air before it’s released into the environment. One example is coal-fired power plants. Coal, in particular, releases a plethora of pollutants when combusted, so, in many countries, modern plants are required to install control systems. Catalytic reactors remove nitrogen oxides. Electrostatic precipitators collect particulates. Scrubbers use lime (the mineral, not the fruit) to strip away sulfur dioxide. And I could go on. In some cases, emission control systems can represent a significant proportion of the costs involved in building and operating a plant. But these primary emission controls aren’t always feasible for every pollutant, at least not for 100 percent removal. There’s a very old saying that “the solution to pollution is dilution.” It’s not really true on a global scale. Case in point: there’s no way to dilute the concentration of carbon dioxide in the atmosphere, or rather, it’s already as dilute as it’s going to get. But it can be true on a local scale. Many pollutants that affect human health and the environment are short-lived; they chemically react or decompose in the atmosphere over time instead of accumulating indefinitely. And, for a lot of chemicals, there are concentration thresholds below which the consequences for human health are negligible. In those cases, dilution, or really dispersion, is a sound strategy to reduce their negative impacts, and so, in some cases, that’s what we do, particularly at major point sources like factories and power plants. One of the tricks to dispersion is that many plumes are naturally buoyant. Naturally, I’m going to use my pizza oven to demonstrate this. 
Not all, but most of the pollutants we care about are products of combustion: burning stuff up. So the plume is usually hot. We know hot air is less dense, so it naturally rises. And the hotter it is, the faster that happens. You can see that when I first start the fire, there’s not much air movement. But as the fire gets hotter in the oven, the plume speeds up, ultimately rising higher into the air. That’s the whole goal: get the plume high above populated areas where the pollutants can be dispersed to a minimally harmful concentration. It sounds like a simple solution - just run our boilers and furnaces super hot to get enough buoyancy for the combustion products to disperse. The problem is that the whole reason we combust things is usually to recover the heat. So if you’re sending a lot of that heat out of the system just to make the plume disperse better, you’re losing thermodynamic efficiency. It’s wasteful. That’s where the stack comes in. Let me put mine on and show you what I mean. I took some readings with the anemometer with the stack on and off. The airspeed with the stack on was around double what it was with the stack off: about two meters per second compared with one. But it’s a little tougher to understand why. It’s intuitive that as you move higher in a column of fluid, the pressure goes down (since there’s less weight of fluid above). The deeper you dive in a pool, the more pressure you feel. The higher you fly in a plane or climb a mountain, the lower the pressure. The slope of that line is proportional to the fluid’s density. You don’t feel much of a pressure difference climbing a set of stairs because air isn’t very dense. If you travel the same distance in water, you’ll definitely notice the difference. So let’s look at two columns of fluid. One is the ambient air and the other is the air inside a stack. Since it’s hotter, the air inside the stack is less dense. 
Both columns start at the same pressure at the bottom, but the higher you go, the more the pressures diverge. It’s kind of like deep-sea diving in reverse. In water, the deeper you go into the dense water, the greater the pressure you feel. In a stack, the higher you are in the column of hot air, the more buoyant you are compared to the outside air. This is the genius of a smokestack. It creates a difference in pressure between the inside and outside that drives greater airflow for a given temperature. Here’s the basic equation for the stack effect. I like to look at equations like this divided into what we can control and what we can’t. We don’t get to adjust the atmospheric pressure or the outside temperature, and this is just a constant. But you can see that, with a stack, an engineer now has two knobs to turn: the temperature of the gas inside and the height of the stack. I did my best to keep the temperature constant in my pizza oven and took some airspeed readings. First with no stack. Then with the stock stack. Then with a megastack. By the way, this melted my anemometer; should have seen that coming. Thankfully, I got the measurements before it melted. My megastack nearly doubled the airspeed again, at around three-and-a-half meters per second versus the two with just the stack that came with the oven. There’s something really satisfying about this stack effect to me. No moving parts or fancy machinery. Just add a longer pipe and you’ve fundamentally changed the physics of the whole situation. And it’s a really important tool in the environmental engineer’s toolbox to increase airflow upward, allowing contaminants to flow higher into the atmosphere where they can disperse. But this is not particularly revolutionary… unless you’re talking about the Industrial Revolution. When you look at all the pictures of the factories of the 19th century, those stacks weren’t there to improve air quality, if you can believe it. 
The increased airflow generated by a stack just created more efficient combustion for the boilers and furnaces. Any benefit to air quality in the cities was secondary. With the advent of diesel and electric motors, we could use forced drafts, reducing the need for a tall stack to increase airflow. That drove the decline of the forests of industrial chimneys that marked the landscape in the 19th century. But they’re obviously not all gone, because that secondary benefit of air quality turned into the primary benefit as environmental rules about air pollution became stricter. Of course, there are some practical limits that aren’t captured by the equation I showed. The plume cools down as it moves up the stack, so its density isn’t constant all the way up. I let my fire die down a bit so it wouldn’t melt the thermometer (learned my lesson), and then took readings inside the oven and at the top of the stack. You can see my pizza oven flue gas is around 210 degrees at the top of the megastack, but it’s roughly 250 inside the oven. After the success of the megastack on my pizza oven, I tried the super-megastack, with not much improvement in airflow: about 4 meters per second. The warm air just got too cool by the time it reached the top. And I suspect that frictional drag in the longer pipe contributed as well. So, really, depending on how insulating your stack is, our graph of height versus pressure actually ends up looking like this. And this can be its own engineering challenge. Maybe you’ve gotten backdrafts in your fireplace at home because the fire wasn’t big or hot enough to create that large difference in pressure. You can see there are a lot of factors at play in designing these structures, but so far, all we’ve done is get the air moving faster. And that’s not the end goal. The purpose is to reduce the concentration of pollutants that we’re exposed to. 
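Before leaving the stack itself, it’s worth checking the pizza oven measurements against the idealized stack-effect math: ideal-gas densities for the hot and ambient air, the pressure difference from the buoyant column, and the resulting flow velocity, neglecting friction and the cooling along the pipe. The temperatures and stack heights below are assumed, round numbers for a wood-fired oven, but the results land surprisingly close to the measured airspeeds before those losses kick in.

```python
import math

P_ATM = 101325.0  # Pa, atmospheric pressure
R_AIR = 287.0     # J/(kg K), specific gas constant for air
G = 9.81          # m/s^2

def density(temp_k):
    """Ideal-gas density of air at atmospheric pressure."""
    return P_ATM / (R_AIR * temp_k)

def draft_velocity(stack_height_m, t_inside_k, t_outside_k):
    """Idealized stack effect: pressure difference from the buoyant
    column, converted to a flow velocity. Ignores friction and the
    cooling of the flue gas as it rises."""
    d_p = G * stack_height_m * (density(t_outside_k) - density(t_inside_k))
    return math.sqrt(2.0 * d_p / density(t_inside_k))

# Assumed: ~500 K flue gas, ~293 K ambient; stock stack ~0.3 m, megastack ~1 m.
for h in (0.3, 1.0):
    print(f"{h} m stack -> ~{draft_velocity(h, 500.0, 293.0):.1f} m/s")
```

Both knobs from the equation are visible here: raise the gas temperature or lengthen the stack and the draft velocity climbs, with diminishing returns as the square root.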
So engineers also have to consider what happens to the plume once it leaves the stack, and that’s where things really get complicated. In the US, we have National Ambient Air Quality Standards that regulate six so-called “criteria” pollutants that are relatively widespread: carbon monoxide, lead, nitrogen dioxide, ozone, particulates, and sulfur dioxide. We have hard limits on all these compounds, with the intention that they are met at all times, in all locations, under all conditions. Unfortunately, that’s not always the case. You can go on EPA’s website and look at the so-called “non-attainment” areas for the various pollutants. But we do strive to meet the standards through a list of measures that is too long to go into here. And that is not an easy thing to do. Not every source of pollution comes out of a big stationary smokestack where it’s easy to measure and control. Cars, buses, planes, trucks, trains, and even rockets create lots of contaminants that vary by location, season, and time of day. And there are natural processes that contribute as well. Forests and soil microbes release volatile organic compounds that can lead to ozone formation. Volcanic eruptions and wildfires release carbon monoxide and sulfur dioxide. Even dust storms put particulates in the air that can travel across continents. And hopefully you’re seeing the challenge of designing a smokestack. The primary controls like scrubbers and precipitators get most of the pollutants out, and hopefully all of the ones that can’t be dispersed. But what’s left over and released has to avoid pushing concentrations above the standards. And that design has to work within the very complicated and varying context of air chemistry and atmospheric conditions that a designer has no control over. Let me show you a demo. I have a little fog generator set up in my garage with a small fan simulating the wind. It isn’t a perfect demonstration because the airflow from the fan is pretty turbulent compared to natural winds. 
You occasionally get some fog at the surface, but you can see my plume mainly stays aloft, dispersing as it moves with the wind. But watch what happens when I put a building downstream. The structure changes the airflow, creating a downwash effect and pulling my plume with it. Much more frequently, you see the fog at ground level downstream. And this is just a tiny example of how complex the behavior of these plumes can be. Luckily, there’s a whole field of engineering to characterize it. There are really just two major transport processes for air pollution. Advection describes how contaminants are carried along by the wind. Diffusion describes how those contaminants spread out through turbulence. Gravity also affects air pollution, but it doesn’t have a significant effect except on heavier-than-air particulates. With some math and simplifications of those two processes, you can do a reasonable job predicting the concentration of any pollutant at any point in space as it moves and disperses through the air. Here’s the basic equation for that, and if you’ll join me for the next two hours, we’ll derive it and learn the meaning of each term… Actually, it might take longer than that, so let’s just look at a graphic. You can see that as the plume gets carried along by the wind, it spreads out in what’s basically a bell curve, or Gaussian distribution, in the planes perpendicular to the wind direction. But even that is a bit too simplified to make any good decisions with, especially when the consequence of getting it wrong is harm to public health. A big reason for that is atmospheric stability. This can make things even more complicated, but I want to explain the basics, because the effect on plumes of gas can be really dramatic. You probably know that air expands as it moves upward; there’s less pressure as you go up because there is less air above you. And as any gas expands, it cools down. 
So there’s this relationship between height and temperature we call the adiabatic lapse rate. It’s about 10 degrees Celsius for every kilometer up, or about 28 degrees Fahrenheit for every mile. But the actual atmosphere doesn’t always follow this relationship. For example, rising air parcels can cool more slowly than the surrounding air. This keeps them warmer and less dense, so they keep rising, promoting vertical motion in a positive feedback loop called atmospheric instability. You can even get a temperature inversion, where cooler air sits below warmer air, something that can happen in the early morning when the ground is cold. And as the environmental lapse rate varies from the adiabatic lapse rate, the plumes from stacks change. In stable conditions, you usually get a coning plume, similar to what our Gaussian distribution from before predicts. In unstable conditions, you get a lot of mixing, which leads to a looping plume. And things really get weird with temperature inversions, because they basically act like lids on vertical movement. You can get a fanning plume that rises to a point but then only spreads horizontally. You can get a trapping plume, where the air gets stuck between two inversions. You can have a lofting plume, where the air is above the inversion, with stable conditions below and unstable conditions above. And worst of all, you can have a fumigating plume, when there are unstable conditions below an inversion, trapping and mixing the plume toward the ground surface. If you pay attention to smokestacks, fires, and other types of emissions, you can identify these different types of plumes pretty easily. Hopefully you’re seeing now how much goes into this. Engineers have to keep track of advection and diffusion, wind speed and direction, atmospheric stability, the effects of terrain and buildings on all those factors, plus the pre-existing concentrations of all the criteria pollutants from other sources, which vary in time and place. 
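All of that bookkeeping ultimately runs through the Gaussian plume math from earlier. Here’s a minimal version of it: the crosswind and vertical spreads grow with downwind distance (the power-law fits below are the Briggs rural approximations for neutral, class-D stability), and a reflection term accounts for the ground bouncing the plume back up. The emission rate, wind speed, and stack heights are assumed, illustrative numbers, not from any real facility.

```python
import math

def concentration(q, u, x, y, z, stack_height):
    """Gaussian plume concentration (g/m^3) downwind of a point source.
    q: emission rate (g/s), u: wind speed (m/s), x: downwind distance,
    y: crosswind offset, z: height above ground (all meters)."""
    # Briggs rural dispersion coefficients for neutral (class D) stability.
    sigma_y = 0.08 * x * (1 + 0.0001 * x) ** -0.5
    sigma_z = 0.06 * x * (1 + 0.0015 * x) ** -0.5
    spread = q / (2 * math.pi * u * sigma_y * sigma_z)
    crosswind = math.exp(-y**2 / (2 * sigma_y**2))
    # Reflection term: the ground "mirrors" the plume back upward.
    vertical = (math.exp(-(z - stack_height)**2 / (2 * sigma_z**2))
                + math.exp(-(z + stack_height)**2 / (2 * sigma_z**2)))
    return spread * crosswind * vertical

# Ground-level, centerline concentration 2 km downwind of a 50 m vs 100 m stack.
for h in (50.0, 100.0):
    c = concentration(q=100.0, u=5.0, x=2000.0, y=0.0, z=0.0, stack_height=h)
    print(f"{h:.0f} m stack: {c * 1e6:.1f} micrograms/m^3 at 2 km downwind")
```

In this example, doubling the stack height cuts the ground-level concentration at 2 km by roughly a factor of three, which is exactly the lever that dispersion modeling, and the regulations built on it, turn.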
All that to demonstrate that your new source of air pollution is not going to push the concentrations at any place, at any time, under any conditions, beyond what the standards allow. That’s a tall order, even for someone who loves gaussian distributions. And often the answer to that tall order is an even taller smokestack. But to make sure, we use software. The EPA has developed models that can take all these factors into account to simulate, essentially, what would happen if you put a new source of pollution into the world and at what height. So why are smokestacks so tall? I hope you’ll agree with me that it turns out to be a pretty complicated question. And it’s important, right? These stacks are expensive to build and maintain. Those costs trickle down to us through the costs of the products and services we buy. They have a generally negative visual impact on the landscape. And they have a lot of other engineering challenges too, like resonance in the wind. And on the other hand, we have public health, arguably one of the most critical design criteria that can exist for an engineer. It’s really important to get this right. I think our air quality regulations do a lot to make sure we strike a good balance here. There are even rules limiting how much credit you can get for building a stack higher for greater dispersion to make sure that we’re not using excessively tall stacks in lieu of more effective, but often more expensive, emission controls and strategies. In a perfect world, none of the materials or industrial processes that we rely on would generate concentrated plumes of hazardous gases. We don’t live in that perfect world, but we are pretty fortunate that, at least in many places on Earth, air quality is something we don’t have to think too much about. And to thank for it, we have a relatively small industry of environmental professionals who do think about it, a whole lot. 
You know, for a lot of people, this is their whole career; what they ponder from 9-5 every day. Something most of us would rather keep out of mind, they face it head-on, developing engineering theories, professional consensus, sensible regulations, modeling software, and more - just so we can breathe easy.
