[Note that this article is a transcript of the video embedded above.] Flaming Gorge Dam rises from the Green River in northern Utah like a concrete wedge driven into the canyon, anchored against the sheer rock walls that flank it. It’s quintessential, in a way. It’s what we picture when we think about dams: a hulking, but also somehow graceful, wall of concrete stretching across a narrow rocky valley. But to dam engineers, there’s nothing quintessential about it. So-called arch dams are actually pretty rare. For reference, the US has about 92,000 dams listed in the national inventory. I couldn’t find an exact number, but based on a little bit of research, I estimate that we have maybe around 50 arch dams - it’s less than a tenth of a percent. The only reason we think of arch dams as archetypal is because they’re so huge. I counted 11 in the US that have their own visitor center. There just aren’t that many works of infrastructure that double as tourist destinations, and the reason for it is, I think, kind of interesting. Because an arch dam isn’t just an engineering solution to holding back water, and it’s not just a solution to holding back a lot of water. It’s all about height, and I built a little demo to show you what I mean. I’m Grady, and this is Practical Engineering. Engineers love categories, and dams are no exception. You can group them in a lot of ways, but mostly, we care about how they handle the incredible force of water they hold back. Embankment dams do it with earth or rock, relying on friction between the individual particles that make up the structure. Gravity dams do it with weight. Let me show you an example. I have my tried and trusted acrylic flume with a small plastic dam. Once this is all set up, I can start filling up the reservoir. This little dam is a little narrower than the flume. It doesn’t touch the sides, so it leaks a bit. The reason for that will be clear in a moment. And hopefully you can see what’s about to happen. This gravity dam doesn’t have much gravity in it, so it doesn’t take much water at all before you get a failure. I’m counting failure as the first sign of movement, by the way. That’s when the stabilizing forces are overcome by the destabilizing ones. And the little dam by itself could hold until my reservoir was about a quarter of the way to the top. Gravity dams get their stability against sliding from… you guessed it… friction. Bet you thought I was going to say gravity. And actually, it kind of is gravity, since frictional resistance is a function of just two variables: the normal force (in other words, the weight of the structure) and a coefficient that depends on the two materials touching. Engineers analyze the stability of gravity dams in cross-section, essentially taking a small slice of the structure. You want every slice to be able to support itself. That’s why I didn’t want the demo touching the sides of the flume; it would add resistance that doesn’t actually exist in a cross-section. The destabilizing force is hydrostatic pressure from the reservoir, which increases with depth. And the stabilizing force is friction. There are some complexities to this that we’ll get into, but very generally, as long as you have more friction than pressure, you’re good; you have a stable structure. So let’s add some normal force to the demo and see what happens. [Beat] You can see my little reservoir gets a little higher before the dam fails, about halfway to the top. And we can try it again with more weight. 
But the result gets a little more interesting… the dam didn’t actually slide this time, but it still failed. Turns out gravity dams have two major failure modes: sliding and overturning. Resistance to sliding comes from friction, which really doesn’t depend on how the weight of the dam is distributed. That’s not true for overturning failures. Let’s look back at our cross-section. For a unit width of dam, the hydrostatic pressure from the reservoir looks like this. Pressure increases with depth. And the area under this line is the total force pushing the dam downstream. We can simplify that distribution and treat it like it’s a single force, and it turns out when you do that, the force acts a third of the way up the total depth of water. Most dams want to rotate about the downstream toe, so you have a destabilizing force offset from the point of rotation. In other words, you have a torque, also called a moment. The dam has to create an opposite moment around that point to remain stable. Moment or torque is calculated as the force multiplied by its perpendicular distance from the point of rotation. So, the further the center of mass is from the downstream toe, the more stable the structure is, and the demo shows it too. Here’s where we left the weights the last time, and let’s see it happen again. The reservoir makes it about two-thirds of the way up the walls before the dam overturns. Let’s make a simple shift. Just move the weights further upstream and try again. It’s not a big difference. The reservoir reaches about three-quarters the way up before we see a sliding failure, but shifting the weights did increase the stability. And this is why a lot of gravity dams have a fairly consistent shape, with most of the weight concentrated on the upstream side, and usually a sloped or stepped downstream face. Interestingly, you can use the force of water against itself in a way. Watch what happens when I turn my little model around. Now the hydrostatic pressure applies both a destabilizing and stabilizing force, so you get more resistance for a given depth. A lot of deployable temporary storm barriers and cofferdam systems take advantage of this kind of configuration. You can imagine if I extended the base even further, I could create a structure that was self-stable just from its geometry alone. The weight of the water on the footing would overcome the lateral pressure. But there’s a catch to this. This is fully stable now, but watch what happens when I give the dam just a bit of a tilt. All of a sudden, it’s no longer stable. This might seem kind of intuitive, but I think it’s important to explain what’s actually going on. Hydrostatic pressure from the reservoir doesn’t only act on the face of a dam. With smooth plastic on smooth plastic, you get a pretty nice seal, but as soon as even a tiny gap opens, water gets underneath. Now there’s upward pressure on the bottom of the dam as well. If you’re depending on the downward force of a dam from its weight for stability, it’s easy to see why an upward force is a bad thing. And it’s so dramatic in the example with the upstream footing specifically. In that case, the downward pressure of the reservoir is acting as a stabilizing force, but if water can get underneath that footing, it basically cancels out. The pressure on the bottom is the same as the pressure on the top. But this isn’t only an issue in that case. The ground isn’t waterproof. In fact, I’ve done a video all about the topic. 
Soil and rock work more like a sponge than a solid material, and water can flow through them. That's how we get aquifers and wells and springs and such. But it's a problem for gravity dams, because water can seep below the structure and apply pressure to the bottom, essentially counteracting its weight. We call it uplift. Looking back at the cross-section, we can estimate this. Of course, you have the triangular pressure distribution along the upstream face. But at the heel, you have the full hydrostatic pressure also pushing upward. And at the downstream toe, you have no pressure (it's exposed to the atmosphere). So, now you have a pressure distribution below the dam that tapers from full pressure at the heel to zero at the toe. Of course, this part can get a lot more complicated since most dams don't sit flush with the ground, and many are equipped with drains and cutoff walls, so definitely go check that other video out if you want to learn more. But let me show you the issue this causes with some recreational math on our cross-sectional slice of the dam. The taller the dam, the greater the uplift force. That happens linearly. In other words, the force is proportional to the depth of the reservoir. But look at the lateral force. Again, remember it's the area under this triangle. Maybe you remember that formula: one-half times base times height. Well, the height is the depth of the water. And the base is also a function of the depth. More specifically, it's the unit weight of water times depth. Multiply it together, and you see the challenge: the force increases as a function of the depth squared. So for every unit of additional height you want out of a gravity dam, you need significantly more weight to resist the forces, which means more material and thus a lot more cost. Hopefully all this exposition is starting to reveal a solution to this rapid divergence of stability and loads as a reservoir increases in height. Dams don't actually float in space like my demonstration and graphics show. You know, by necessity, they extend across the entire valley and usually key into the abutments on either side. Naturally, that connection at the sides is going to offer some resistance to the forces dams need to withstand. And if you can count on that resistance, you can significantly lower the mass, and thus the cost, of the structure. But, again, this gets complicated. Let's go back to the demo. Now I'm going to replace my gravity dam with something much simpler. Just a sheet of aluminum flashing, and, to simulate that resistance provided by socketing the structure into the earth, I've taped it to the bottom and sides… with some difficulty, actually. When I fill up the reservoir with water, it holds just fine. There's a little leaking past my subpar tape job, but this is a fully stable structure. And I think the comparison here is pretty stark. When you can develop resistance from the sides, you can get away with a lot less dam. But it's harder than you might think to do that. For one, the natural soil or rock at a dam site might not be all that strong. The banks of rivers aren't generally known for their stability, so the prospect of transferring enormous amounts of force into them rarely makes a lot of engineering sense. But the other challenge is in the dam itself. Take a look back at this demo. See how my dam is bending under the force of the water. It's holding there, but, you know, we don't actually build dams out of aluminum flashing.
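To put rough numbers on that cross-sectional math, here's a minimal sketch in Python. The section dimensions, unit weights, and friction coefficient are made-up, illustrative values (a real analysis would use the actual dam shape and account for drains and cutoff walls), but it shows the point: the lateral thrust grows with the square of the reservoir depth, while uplift eats into the weight that provides the sliding and overturning resistance.

```python
# Minimal sketch (not a design tool): stability of a unit-width slice of a
# simple rectangular gravity dam, using the forces described above.
# All section dimensions and the friction coefficient are made up for illustration.

GAMMA_W = 9.81   # unit weight of water, kN/m^3
GAMMA_C = 23.5   # unit weight of concrete, kN/m^3 (typical assumption)

def unit_slice_stability(h, H, B, mu=0.7):
    """h: reservoir depth (m), H: dam height (m), B: base width (m), mu: friction coeff."""
    W = GAMMA_C * B * H               # weight of the slice, kN per meter of dam
    P = 0.5 * GAMMA_W * h**2          # horizontal thrust: area under the pressure triangle
    U = 0.5 * GAMMA_W * h * B         # uplift: full head at the heel, zero at the toe
    # Sliding: friction on (weight minus uplift) versus the horizontal thrust
    fs_sliding = mu * (W - U) / P
    # Overturning about the downstream toe: thrust acts h/3 up, uplift 2B/3 from the toe,
    # and the weight acts at mid-base for a rectangular section
    fs_overturn = (W * B / 2) / (P * h / 3 + U * 2 * B / 3)
    return P, fs_sliding, fs_overturn

for depth in (14, 28):  # double the reservoir depth...
    thrust, fs_s, fs_o = unit_slice_stability(h=depth, H=30, B=20)
    print(f"depth {depth} m: thrust {thrust:8.0f} kN/m, "
          f"FS sliding {fs_s:.2f}, FS overturning {fs_o:.2f}")
# ...and the thrust roughly quadruples while both margins shrink, which is why
# borrowing support from the valley walls, like the flashing demo does, is so attractive.
```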
Resisting loads in this way basically treats the dam like a beam, like a sideways bridge girder. Except, unlike girder bridges that usually only span up to a few hundred feet, dams are often much longer. Even the stiffest modern materials, like prestressed concrete boxes, would just deflect too much under load to transfer all the hydrostatic pressure across a valley into the abutments. Plus we usually don’t like to rely on steel too much in dams because of issues with corrosion and longevity. So where a typical beam experiences both tensile and compressive stress on opposite sides, we really need to transfer all that load, creating only compressive stress in the material. I’m sure you see where I’m going with this. How have we been building bridges for ages from materials like masonry where tensile stress isn’t an option? It’s arches! The arch is a special shape in engineering because you can transfer loads by putting the material in compression only, allowing for simpler, cheaper, and longer-lasting materials like masonry and concrete. You basically co-opt the geology for support, reducing the need for a massive structure. For completeness’s sake, let me show you how it works in the demo. I’ve formed a little arch from my thin sheet of aluminum. Now when I fill up the reservoir, there’s no deflection like the previous example. And again, side by side, it’s easy to see the benefits here. You get a lot more efficiency out of your materials than you do with an earthen embankment dam or a gravity structure. Of course, there are some drawbacks here. For one, arches create horizontal forces at the supports called thrusts that have to be resisted. Sites that use this design really require strong, competent rock in the abutments to withstand the enormous loads. And just like with bridges, the span matters. The wider the valley, the bigger the arch needs to be, so these dams generally only make sense in deep gorges and steep, narrow canyons. The engineering is a lot more complicated, too. You can’t use a simple 2D cross-section to demonstrate stability. The structural behavior is inherently three-dimensional, which is tougher to characterize, especially when you consider unusual conditions like earthquakes and temperature effects. And since they’re lighter, arch dams don’t resist uplift forces very well, making foundation drainage systems more critical. All this means that it’s really only a solution that makes economic sense in a narrow range of circumstances, one of the most important being height. For smaller dams, the additional complexity and expense of designing and building an arch aren’t justified by the structural efficiency. Gravity and embankment dams are much more adaptable to a wider range of site conditions. And there are other types of dams, too, that blend these ideas. Multiple-arch dams use a series of smaller arches supported by buttresses, dividing the span into more manageable components. Even what is perhaps the most famous arch dam in the world - Hoover Dam - isn’t a pure arch structure. Technically, it’s a gravity-arch dam, meaning it resists part of the water load through mass while also distributing the forces into the canyon through arch action. The proportions are carefully balanced to take advantage of the unique site conditions and relatively wider canyon than most arch dams are built in. And so, when you look at the tallest dams on Earth, one structural form dominates. 
By my estimation, around 40 percent of the tallest 200 dams in the world incorporate an arch into their design. There aren’t that many places where it makes sense, but when you compare what it takes to hold a reservoir back in a narrow canyon valley, I think the case for arches is pretty clear.
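As a rough illustration of the arch action described above, preliminary arch-dam sizing is sometimes sketched with the classic thin-cylinder approximation: each horizontal ring of the arch carries the water pressure at its depth purely in compression. The dimensions below are hypothetical, and real arch dams are analyzed as full three-dimensional structures, but it gives a feel for the stresses involved.

```python
# Minimal sketch of the thin-cylinder approximation for a horizontal ring of an
# arch dam: the water pressure at that depth is carried as pure ring compression.
# Radius, thickness, and depth below are hypothetical values for illustration.

GAMMA_W = 9.81  # unit weight of water, kN/m^3

def ring_compression(depth, radius, thickness):
    """Approximate ring (hoop) stress in MPa for an arch of given radius (m)
    and thickness (m) holding back water of the given depth (m)."""
    pressure = GAMMA_W * depth                     # hydrostatic pressure, kPa
    return pressure * radius / thickness / 1000.0  # kPa -> MPa

# A ring 100 m below the surface of a hypothetical arch, 150 m radius, 15 m thick:
print(f"~{ring_compression(depth=100, radius=150, thickness=15):.0f} MPa of compression")
# That's well within what mass concrete can carry in compression, which is the appeal:
# the same ring also pushes on the abutments with a thrust of roughly pressure times
# radius per meter of height, which is why the canyon walls need to be strong rock.
```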
[Note that this article is a transcript of the video embedded above.] In the early 1900s, Seattle was a growing city hemmed in by geography. To the west was Puget Sound, a vital link to the Pacific Ocean. To the east, Lake Washington stood between the city and the farmland and logging towns of the Cascades. As the population grew, pressure mounted for a reliable east–west transportation route. But Lake Washington wasn't easy to cross. Carved by glaciers, the lake is deceptively deep, over 200 feet or 60 meters in some places. And under that deep water sits an even deeper problem: a hundred-foot layer of soft clay and mud. Building bridge piers all the way to solid ground would have required staggeringly large supports. The cost and complexity made it infeasible to even consider. But in 1921, an engineer named Homer Hadley proposed something radical: a bridge that didn't rest on the bottom at all. Instead, it would float on massive hollow concrete pontoons, riding on the surface like a ship. It took nearly two decades for his idea to gain traction, but with the New Deal and Public Works Administration, new possibilities for transportation routes across the country began to open up. Federal funds flowed, and construction finally began on what would become the Lacey V. Murrow Bridge. When it opened in 1940, it was the first floating concrete highway of its kind, a marvel of engineering and a symbol of ingenuity under constraint. But floating bridges, by their nature, carry some unique vulnerabilities. And fifty years later, this span would be swallowed by the very lake it crossed. Between then and now, the Seattle area has kind of become the floating concrete highway capital of the world. That's not an official designation, at least not yet, but there aren't that many of these structures around the globe. And four of the five longest ones on Earth are clustered in one small area of Washington state. You have Hood Canal, Evergreen Point, Lacey V. Murrow, and its neighbor, the Homer M. Hadley Memorial Bridge, named for the engineer who floated the idea in the first place. Washington has had some high-profile failures, but also some remarkable successes, including a test for light rail transit over a floating bridge just last month in June 2025. It's a niche branch of engineering, full of creative solutions and unexpected stories. So I want to take you on a little tour of the hidden engineering behind them. I'm Grady, and this is Practical Engineering. Floating bridges are basically as old as recorded history. It's not a complicated idea: place pontoons across a body of water, then span them with a deck. For thousands of years, this straightforward solution has provided a fast and efficient way to cross rivers and lakes, particularly in cases where permanent bridges were impractical or when the need for a crossing was urgent. In fact, floating bridges have been most widely used in military applications, going all the way back to Xerxes crossing the Dardanelles in 480 BCE. They can be made portable, quick to erect, flexible to a wide variety of situations, and they generally don't require a lot of heavy equipment. There are countless designs that have been used worldwide in various military engagements. But most floating bridges, both ancient and modern, weren't meant to last. They're quick to put up, but also quick to take out, either on purpose or by Mother Nature. They provide the means to get in, get across, and get out. So they aren't usually designed for extreme conditions.
Transitioning from temporary military crossings to permanent infrastructure was a massive leap, and it brought with it a host of engineering challenges. An obvious one is navigation. A bridge that floats on the surface of the water is, by default, a barrier to boats. So, permanent floating bridges need to make room for maritime traffic. Designers have solved this in several ways, and Washington State offers a few good case studies. The Evergreen Point Floating Bridge includes elevated approach spans on either end, allowing ships to pass beneath before the road descends to water level. The original Lacey V. Murrow Bridge took a different approach. Near its center, a retractable span could be pulled into a pocket formed by adjacent pontoons, opening a navigable channel. But, not only did the movable span create interruptions to vehicle traffic on this busy highway, it also created awkward roadway curves that caused frequent accidents. The mechanism was eventually removed after the East Channel Bridge was replaced to increase its vertical clearance, providing boats with an alternative route between the two sides of Lake Washington. Further west, the Hood Canal Bridge incorporates truss spans for smaller craft. And it has hydraulic lift sections for larger ships. The US Naval Base Kitsap is not far away, so sometimes the bridge even has to open for Navy submarines. These movable spans can raise vertically above the pontoons, while adjacent bridge segments slide back underneath. The system is flexible: one side can be opened for tall but narrow vessels, or both for wider ships. But floating bridges don’t just have to make room for boats. In a sense, they are boats. Many historical spans literally floated on boats lashed together. And that comes with its own complications. Unlike fixed structures, floating bridges are constantly interacting with water: waves, currents, and sometimes even tides and ice. They’re easiest to implement on calm lakes or rivers with minimal flooding, but water is water, and it’s a totally different type of engineering when you’re not counting on firm ground to keep things in place. We don’t just stretch floating bridges across the banks and hope for the best. They’re actually moored in place, usually by long cables and anchors, to keep materials from overstressing and to prevent movements that would make the roadway uncomfortable or dangerous. Some anchors use massive concrete slabs placed on the lakebed. Others are tied to piles driven deep into the ground. In particularly deep water or soft soil, anchors are lowered to the bottom with water hoses that jet soil away, allowing the anchor to sink deep into the mud. These anchoring systems do double duty, providing both structural integrity and day-to-day safety for drivers, but even with them, floating bridges have some unique challenges. They naturally sit low to the water, which means that in high winds, waves can crash directly onto the roadway, obscuring the visibility and creating serious risks to road users. Motion from waves and wind can also cause the bridge to flex and shift beneath vehicles, especially unnerving for drivers unused to the sensation. In Washington State, all the major floating bridges have been closed at various times due to weather. The DOT enforces wind thresholds for each bridge; if the wind exceeds the threshold, the bridge is closed to traffic. Even if the bridge is structurally sound, these closures reflect the reality that in extreme weather, the bridge itself becomes part of the storm. 
But we still haven’t addressed the floating elephant in the pool here: the concrete pontoons themselves. Floating bridges have traditionally been made of wood or inflatable rubber, which makes sense if you’re trying to stay light and portable. But permanent infrastructure demands something more durable. It might seem counterintuitive to build a buoyant structure out of concrete, but it’s not as crazy as it sounds. In fact, civil engineering students compete every year in concrete canoe races hosted by the American Society of Civil Engineers. Actually, I was doing a little recreational math to find a way to make this intuitive, and I stumbled upon a fun little fact. If you want to build a neutrally buoyant, hollow concrete cube, there’s a neat rule of thumb you can use. Just take the wall thickness in inches, and that’s your outer dimension in feet. Want 12-inch-thick concrete walls? You’ll need a roughly 12-foot cube. This is only fun because of the imperial system, obviously. It’s less exciting to say that the two dimensions have a roughly linear relationship with a factor of 12. And I guess it’s not really that useful except that it helps to visualize just how feasible it is to make concrete float. Of course, real pontoons have to do more than just barely float themselves. They have to carry the weight of a deck and whatever crosses it with an acceptable margin of safety. That means they’re built much larger than a neutrally buoyant box. But mass isn’t the only issue. Concrete is a reliable material and if you’ve watched the channel for a while, you know that there are a few things you can count on concrete to do, and one of them is to crack. Usually not a big deal for a lot of structures, but that’s a pretty big problem if you’re trying to keep water out of a pontoon. Designers put enormous effort into preventing leaks. Modern pontoons are subdivided into sealed chambers. Watertight doors are installed between the chambers so they can still be accessed and inspected. Leak detection systems provide early warnings if anything goes wrong. And piping is pre-installed with pumps on standby, so if a leak develops, the chambers can be pumped dry before disaster strikes. The concrete recipe itself gets extra attention. Specialized mixes reduce shrinkage, improve water resistance, and resist abrasion. Even temperature control during curing matters. For the replacement of the Evergreen Point Bridge, contractors embedded heating pipes in the base slabs of the pontoons, allowing them to match the temperature of the walls as they were cast. This enabled the entire structure to cool down at a uniform rate, reducing thermal stresses that could lead to cracking. There were also errors during construction, though. A flaw in the post-tensioning system led to millions of dollars in change orders halfway through construction and delayed the project significantly while they worked out a repair. But there’s a good reason why they were so careful to get the designs right on that project. Of the four floating bridges in Washington state, two of them have sunk. In February 1979, a severe storm caused the western half of the Hood Canal Bridge to lose its buoyancy. Investigations revealed that open hatches allowed rain and waves to blow in, slowly filling the pontoons and ultimately leading to the western half of the bridge sinking. The DOT had to establish a temporary ferry service across the canal for nearly four years while the western span was rebuilt. Then, in 1990, it happened again. 
This time, the failure occurred during rehabilitation work on the Lacey V. Murrow Bridge while it was closed. Contractors were using hydrodemolition, high-pressure water jets, to remove old concrete from the road deck. Because the water was considered contaminated, it had to be stored rather than released into Lake Washington. Engineers calculated that the pontoon chambers could hold the runoff safely. To accommodate that, they removed the watertight doors that normally separated the internal compartments. But, when a storm hit over Thanksgiving weekend, water flooded into the open chambers. The bridge partially sank, severing cables on the adjacent Hadley Bridge and delaying the project by more than a year - a potent reminder that even small design or operational oversights can have major consequences on this type of structure. And we still have a lot to learn. Recently, Sound Transit began testing light rail trains on the Homer Hadley Bridge, introducing a whole new set of engineering puzzles. One is electricity. With power running through the rails, there was concern about stray currents damaging the bridge. To prevent this, the track is mounted on insulated blocks, with drip caps to prevent water from creating a conductive path. And then there’s the bridge movement. Unlike typical bridges, a floating bridge can roll, pitch, and yaw with weather, lake level, and traffic loads. The joints between the fixed shoreline and the bridge have to be able to accommodate movement. It’s usually not an issue for cars, trucks, bikes, or pedestrians, but trains require very precise track alignment. Engineers had to develop an innovative “track bridge” system. It uses specialized bearings to distribute every kind of movement over a longer distance, keeping tracks aligned even as the floating structure shifts beneath it. Testing in June went well, but there’s more to be done before you can ride the Link light rail across a floating highway. If floating bridges are the present, floating tunnels might be the future. I talked about immersed tube tunnels in a previous video. They’re used around the world, made by lowering precast sections to the seafloor and connecting them underwater. But what if, instead of resting on the bottom, those tunnels floated in the water column? It should be possible to suspend a tunnel with negative buoyancy using surface pontoons or even tether one with positive buoyancy to the bottom using anchors. In deep water, this could dramatically shorten tunnel lengths, reduce excavation costs, and minimize environmental impacts. Norway has actually proposed such a tunnel across a fjord on its western coast, a project that, if realized, would be the first of its kind. Like floating bridges before it, this tunnel will face a long list of unknowns. But that’s the essence of engineering: meeting each challenge with solutions tailored to a specific place and need. There aren’t many locations where floating infrastructure makes sense. The conditions have to be just right - calm waters, minimal ice, manageable tides. But where the conditions do allow, floating bridges and their hopefully future descendants open up new possibilities for connection, mobility, and engineering.
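As a quick check on that hollow-cube rule of thumb from earlier, here's a small sketch that solves for the neutrally buoyant size exactly, assuming concrete is about 2.4 times as dense as water (a common ballpark; the real ratio depends on the mix and reinforcement).

```python
# A quick check of the hollow-cube rule of thumb mentioned earlier, assuming
# concrete is about 2.4 times as dense as water.

RHO_RATIO = 2.4  # density of concrete / density of water (assumed)

def outer_size_for_neutral_buoyancy(wall_thickness_ft):
    """Outer dimension (ft) of a hollow, closed concrete cube with the given wall
    thickness (ft) that exactly floats: shell weight = weight of displaced water."""
    L = wall_thickness_ft * 12.0  # start from the rule of thumb as a first guess
    for _ in range(50):           # simple fixed-point iteration
        # neutral buoyancy: rho_c * (L^3 - (L - 2t)^3) = rho_w * L^3
        inner = (L**3 * (1 - 1 / RHO_RATIO)) ** (1 / 3)
        L = inner + 2 * wall_thickness_ft
    return L

t_inches = 12
L = outer_size_for_neutral_buoyancy(t_inches / 12)
print(f"{t_inches}-inch walls -> roughly {L:.1f} ft cube")  # close to the 12 feet the rule suggests
```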
[Note that this article is a transcript of the video embedded above.] There’s a new trend in high-rise building design. Maybe you’ve seen this in your city. The best lots are all taken, so developers are stretching the limits to make use of space that isn’t always ideal for skyscrapers. They’re not necessarily taller than buildings of the past, but they are a lot more slender. “Pencil tower” is the term generally used to describe buildings that have a slenderness ratio of more than around 10 to 1, height to width. A lot of popular discussion around skyscrapers is about how tall we can build them. Eventually, you can get so tall that there are no materials strong enough to support the weight. But, pencil towers are the perfect case study in why strength isn’t the only design criterion used in structural engineering. Of course, we don’t want our buildings to fall down, but there’s other stuff we don’t want them to do, too, including flex and sway in the wind. In engineering, this concept is called the serviceability limit state, and it’s an entirely separate consideration from strength. Even if moderate loads don’t cause a structure to fail, the movement they cause can lead to windows breaking, tiles cracking, accelerated fatigue of the structure, and, of course, people on the top floors losing their lunch from disorientation and discomfort. So, limiting wind-induced motions is a major part of high-rise design and, in fact, can be such a driving factor in the engineering of the building that strength is a secondary consideration. Making a building stiffer is the obvious solution. But adding stiffness requires larger columns and beams, and those subtract valuable space within the building itself. Another option is to augment a building’s aerodynamic performance, reducing the loads that winds impose. But that too can compromise the expensive floorspace within. So many engineers are relying on another creative way to limit the vibrations of tall buildings. And of course, I built a model in the garage to show you how this works. I’m Grady, and this is Practical Engineering. One of the very first topics I ever covered on this channel was tuned mass dampers. These are mechanisms that use a large, solid mass to counteract motion in all kinds of structures, dissipating the energy through friction or hydraulics, like the shock absorbers in vehicles. Probably the most famous of these is in the Taipei 101 building. At the top of the tower is a massive steel pendulum, and instead of hiding it away in a mechanical floor, they opened it to visitors, even giving the damper its own mascot. But, mass dampers have a major limitation because of those mechanical parts. The complex springs, dampers, and bearings need regular maintenance, and they are custom-built. That gets pretty expensive. So, what if we could simplify the device? This is my garage-built high-rise. It’s not going to hold many conference room meetings, but it does do a good job swaying from side to side, just like an actual skyscraper. And I built a little tank to go on top here. The technical name for this tank is a tuned liquid column damper, and I can show you how it works. Let’s try it with no water first. Using my digitally calibrated finger, I push the tower over by a prescribed distance, and you can see this would not be a very fun ride. There is some natural damping, but the oscillation goes on for quite a while before the motion stops. Now, let’s put some water in the tank. 
With the power of movie magic, I can put these side by side so you can really get a sense of the difference. By the way, nearly all of the parts for this demonstration were provided by my friends at Send-Cut-Send. I don’t have a milling machine or laser cutter, so this is a really nice option for getting customized parts made from basically any material - aluminum, steel, acrylic - that are ready to assemble. Instead of complex mechanical devices, liquid column dampers dissipate energy through the movement of water. The liquid in the tank is both the mass and the damper. This works like a pendulum where the fluid oscillates between two columns. Normally, there’s an orifice between the two columns that creates the damping through friction loss as water flows from one side to the other. To make this demo a little simpler, I just put lids on the columns with small holes. I actually bought a fancy air valve to make this adjustable, but it didn’t allow quite enough airflow. So instead, I simplified with a piece of tape. Very technical. Energy transferred to the water through the building is dissipated by the friction of the air as it moves in and out of the columns. And you can even hear this as it happens. Any supplemental damping system starts with a design criterion. This varies around the world, but in the US, this is probability-based. We generally require that peak accelerations with a 1-in-10 chance of being exceeded in a given year be limited to 15-18 milli-gs in residential buildings and 20-25 milli-gs in offices. For reference, the lateral acceleration for highway curve design is usually capped at 100 milli-gs, so the design criteria for buildings is between a fourth and a sixth of that. I think that makes intuitive sense. You don’t want to feel like you’re navigating a highway curve while you sit at your desk at work. It’s helpful to think of these systems in a simplified way. This is the most basic representation: a spring, a damper, and mass on a cart. We know the mass of the building. We can estimate its stiffness. And the building itself has some intrinsic damping, but usually not much. If we add the damping system onto the cart, it’s basically just the same thing at a smaller scale, and the design process is really just choosing the mass and damping systems for the remaining pieces of this puzzle to achieve the design goal. The mass of liquid dampers is usually somewhere between half a percent to two percent of the building’s total weight. The damping is related to the water’s ability to dissipate energy. And the spring needs to be tuned to the building. All buildings vibrate at a natural frequency related to their height and stiffness. Think of it like a big tuning fork full of offices or condos. I can estimate my model’s natural frequency by timing the number of oscillations in a given time interval. It’s about 1.3 hertz or cycles per second. In an ideal tuned damper, the oscillation of the damping system matches that of the building. So tuning the frequency of the damper is an important piece of the puzzle. For a tuned liquid column damper, the tuning mostly comes from the length of the liquid flow path. A longer path results in a lower frequency. The compression of the air above the column in my demo affects this too, and some types of dampers actually take advantage of that phenomenon. I got the best tuning when the liquid level was about halfway up the columns. 
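To show what that tuning means in numbers, here's a minimal sketch using the textbook result for an idealized open U-tube, where the liquid's natural frequency depends only on the total length of the liquid path. It ignores the orifice and the air-spring effect of the capped columns in the demo, and the building frequencies are just examples, but it shows why the flow path length is the main tuning knob.

```python
# Minimal sketch: tuning a tuned liquid column damper (TLCD) to a structure.
# For an idealized open U-tube, the liquid oscillates at f = sqrt(2g/L) / (2*pi),
# where L is the total length of the liquid path. Orifice and air-spring effects
# (like the taped lids in the demo) are ignored here.
import math

G = 9.81  # m/s^2

def tlcd_frequency(liquid_path_length_m):
    """Natural frequency (Hz) of water oscillating in a U-shaped column."""
    return math.sqrt(2 * G / liquid_path_length_m) / (2 * math.pi)

def liquid_path_for(target_hz):
    """Liquid path length (m) whose oscillation matches a target frequency."""
    return 2 * G / (2 * math.pi * target_hz) ** 2

# The garage model sways at about 1.3 Hz, so its damper wants a short column:
L_model = liquid_path_for(1.3)
print(f"1.3 Hz model  -> ~{L_model * 100:.0f} cm of liquid path "
      f"(check: {tlcd_frequency(L_model):.2f} Hz)")
# A hypothetical slender tower swaying at 0.15 Hz needs a much longer one:
print(f"0.15 Hz tower -> ~{liquid_path_for(0.15):.0f} m of liquid path")
```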
The orifice has less of an effect on frequency and is used mostly to balance the amount of damping versus the volume of liquid that flows through each cycle. In my model, with one of the holes completely closed off, you can see the water doesn’t move, and you get minimal damping. With the tape mostly covering the hole, you get the most frictional loss, but not all the fluid flows from one side to the other each cycle. When I covered about half of one hole, I got the full fluid flow and the best damping performance. The benefit of a tuned column damper is that it doesn’t take up a lot of space. And because the fluid movement is confined, they’re fairly predictable in behavior. So, these are used in quite a few skyscrapers, including the Random House Tower in Manhattan, One Wall Center in Vancouver (which actually has many walls), and Comcast Center in Philadelphia. But, tuned column liquid dampers have a few downsides. One is that they really only work for flexible structures, like my demo. Just like in a pendulum, the longer the flow path in a column damper, the lower the frequency of the oscillation. For stiffer buildings with higher natural frequencies, tuning requires a very short liquid column, which limits the mass and damping capability to a point where you don’t get much benefit. The other thing is that this is still kind of a complex device with intricate shapes and a custom orifice between the two columns. So, we can get even simpler. This is my model tuned sloshing damper, and it’s about as simple as a damper can get. I put a weight inside the empty tank to make a fair comparison, and we can put it side by side with water in the tank to see how it works. As you can see, sloshing dampers dissipate energy by… sloshing. Again, the water is both the mass and the damper. If you tune it just right, the sloshing happens perfectly out of phase of the motion of the building, reducing the magnitude of the movement and acceleration. And you can see why this might be a little cheaper to build - it’s basically just a swimming pool - four concrete walls, a floor, and some water. There’s just not that much to it. But the simplicity of construction hides the complexity of design. Like a column damper, the frequency of a sloshing damper can be tuned, first by the length of the tank. Just like fretting a guitar string further down the neck makes the note lower, a tank works the same way. As the tank gets longer, its sloshing frequency goes down. That makes sense - it takes longer for the wave to get from one side to the other. But you can also adjust the depth. Waves move slower in shallower water and faster in deeper water. Watch what happens when I overfill the tank. The initial wave starts on the left as the building goes right. It reaches the right side just as the building starts moving left. That’s what we want; it’s counteracting the motion. But then it makes it back to the left before the building starts moving right. It’s actually kind of amplifying the motion, like pushing a kid on a swing. Pretty soon after that, the wave and the building start moving in phase, so there’s pretty much no damping at all. Compare it to the more properly tuned example where most of the wave motion is counteracting the building motion as it sways back and forth. You can see in my demo that a lot of the energy dissipation comes from the breaking waves as they crash against the sides of the tank. That is a pretty complicated phenomenon to predict, and it’s highly dependent on how big the waves are. 
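Setting the breaking waves aside, the tank-length and water-depth effects described above can be put into rough numbers with linear wave theory: the first sloshing mode is a standing wave whose wavelength is twice the tank length, and depth enters through the usual water-wave dispersion relation. The tank sizes here are made up, and real designs also have to handle baffles and the nonlinear wave behavior, but the trends come through.

```python
# Minimal sketch: first-mode sloshing frequency of a rectangular tank from
# linear wave theory, f = sqrt(g * k * tanh(k * h)) / (2*pi) with k = pi / L.
# Tank dimensions are arbitrary examples.
import math

G = 9.81  # m/s^2

def sloshing_frequency(tank_length_m, water_depth_m):
    """First sloshing-mode frequency (Hz) of water in a rectangular tank."""
    k = math.pi / tank_length_m  # wavenumber: half a wavelength fits in the tank
    return math.sqrt(G * k * math.tanh(k * water_depth_m)) / (2 * math.pi)

# Longer tank -> lower frequency; deeper water -> higher frequency (up to a limit):
for length, depth in [(5.0, 1.0), (10.0, 1.0), (10.0, 2.0), (10.0, 5.0)]:
    print(f"tank {length:4.1f} m, depth {depth:3.1f} m -> "
          f"{sloshing_frequency(length, depth):.2f} Hz")
```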
And even with the level pretty well tuned to the frequency of the building, you can see there’s a lot of complexity in the motion with multiple modes of waves, and not all of them acting against the motion of the building. So, instead of relying on breaking waves, most sloshing dampers use flow obstructions like screens, columns, or baffles. I got a few different options cut out of acrylic so we can try this out. These baffles add drag, increasing the energy dissipation with the water, usually without changing the sloshing frequency. Here’s a side-by-side comparison of the performance without a baffle and with one. You can see that the improvement is pretty dramatic. The motion is more controlled and the behavior is more linear, making this much simpler to predict during the design phase. It’s kind of the best of both worlds since you get damping from the sloshing and the drag of the water passing through the screen. Almost all the motion is stopped in this demo after only three oscillations. I was pretty impressed with this. Here’s all three of the baffle runs side by side. Actually, the one with the smallest holes worked the best in my demo, but deciding the configuration of these baffles is a big challenge in the engineering of these systems because you can’t really just test out a bunch of options at full scale. Devices like this are in service in quite a few high-rise buildings, including Princess Tower in Dubai, and the Museum Tower in Dallas. With no moving parts and very little maintenance except occasionally topping it off to keep the water at the correct level, you can see how it would be easy to choose a sloshing damper for a new high-rise project. But there are some disadvantages. One is volumetric efficiency. You can see that not all the water in the tank is mobilized, especially for smaller movements, which means not all the water is contributing to the damping. The other is non-linearity. The amount of damping changes depending on the magnitude of the movement since drag is related to velocity squared. And even the frequency of the damper isn’t constant; it can change with the wave amplitude as well because of the breaking waves. So you might get good performance at the design level, but not so much for slower winds. Dampers aren’t just used in buildings. Bridges also take advantage of these clever devices, especially on the decks of pedestrian bridges and the towers of long-span bridges. This also happens at a grand scale between the Earth and moon. Tidal bulges in the oceans created by the moon’s tug on Earth dissipate energy through friction and turbulence, which is a big part of why our planet’s rotation is slowing over time. Days used to be a lot shorter when the Earth was young, but we have a planet-scale liquid damper constantly dissipating our rotational energy. But whether it’s bridges or buildings, these dampers usually don’t work perfectly right at the start. Vibrations are complicated. They’re very hard to predict, even with modern tools like simulation software and scale physical models. So, all dampers have to go through a commissioning process. Usually this involves installing accelerometers once construction is nearing completion to measure the structure’s actual natural frequency. The tuning of tuned dampers doesn’t just happen during the design phase; you want some adjustability after construction to make sure they match the structure’s natural frequency exactly so you get the most damping possible. 
For liquid dampers, that means adjusting the levels in the tanks. And in many cases, buildings might use multiple dampers tuned to slightly different frequencies to improve the performance over a range of conditions. Even in these two basic categories, there is a huge amount of variability and a lot of ongoing research to minimize the tradeoffs these systems come with. The truth is that, relatively speaking, there aren’t that many of these systems in use around the world. Each one is highly customized, and even putting them into categories can get a little tricky. There are even actively controlled liquid dampers. My tuning for the column damper works best for a single magnitude of motion, but you can see that once the swaying gets smaller, the damper isn’t doing a lot to curb it. You can imagine if I constantly adjusted the size of the orifice, I could get better performance over a broader range of unwanted motion. You can do this electronically by having sensors feed into a control system that adjusts a valve position in real-time. Active systems and just the flexibility to tune a damper in general also help deal with changes over time. If a building’s use changes, if new skyscrapers nearby change the wind conditions, or if it gets retrofits that change its natural frequency, the damping system can easily accommodate those changes. In the end, a lot of engineering decisions come down to economics. In most cases, damping is less about safety and more about comfort, which is often harder to pin down. Engineers and building owners face a balancing act between the cost of supplemental damping and the value of the space those systems take up. Tuned mass dampers are kind of household names when it comes to damping. A few buildings like Shanghai Center and Taipei 101 have made them famous. They’re usually the most space-efficient (since steel and concrete are more dense than water). But they’re often more costly to install and maintain. Liquid dampers are the unsung heroes. They take up more space, but they’re simple and cost-effective, especially if the fire codes already require you to have a big tank of water at the top of your building anyway. Maybe someday, an architect will build one out of glass or acrylic, add some blue dye and mica powder, and put it on display as a public showcase. Until then, we’ll just have to know it’s there by feel.
[Note that this article is a transcript of the video embedded above.] "The big black stacks of the Ilium Works of the Federal Apparatus Corporation spewed acid fumes and soot over the hundreds of men and women who were lined up before the red-brick employment office." That's the first line of one of my favorite short stories, written by Kurt Vonnegut in 1955. It paints a picture of a dystopian future that, thankfully, didn't really come to be, in part because of those stacks. In some ways, air pollution is kind of a part of life. I'd love to live in a world where the systems, materials and processes that make my life possible didn't come with any emissions, but it's just not the case... From the time that humans discovered fire, we've been methodically calculating the benefits of warmth, comfort, and cooking against the disadvantages of carbon monoxide exposure and particulate matter less than 2.5 microns in diameter… Maybe not in that exact framework, but basically, since the dawn of humanity, we've had to deal with smoke one way or another. Since we can't accomplish much without putting unwanted stuff into the air, the next best thing is to manage how and where it happens to try and minimize its impact on public health. Of course, any time you have a balancing act between technical issues, the engineers get involved, not so much to help decide where to draw the line, but to develop systems that can stay below it. And that's where the smokestack comes in. Its function probably seems obvious; you might have a chimney in your house that does a similar job. But I want to give you a peek behind the curtain into the Ilium Works of the Federal Apparatus Corporation of today and show you what goes into engineering one of these stacks at a large industrial facility. I'm Grady, and this is Practical Engineering. We put a lot of bad stuff in the air, and in a lot of different ways. There are roughly 200 regulated hazardous air pollutants in the United States, many with names I can barely pronounce. In many cases, the industries that would release these contaminants are required to deal with them at the source. A wide range of control technologies are put into place to clean dangerous pollutants from the air before it's released into the environment. One example is coal-fired power plants. Coal, in particular, releases a plethora of pollutants when combusted, so, in many countries, modern plants are required to install control systems. Catalytic reactors remove nitrogen oxides. Electrostatic precipitators collect particulates. Scrubbers use lime (the mineral, not the fruit) to strip away sulfur dioxide. And I could go on. In some cases, emission control systems can represent a significant proportion of the costs involved in building and operating a plant. But these primary emission controls aren't always feasible for every pollutant, at least not for 100 percent removal. There's a very old saying that "the solution to pollution is dilution." It's not really true on a global scale. Case in point: There's no way to dilute the concentration of carbon dioxide in the atmosphere, or rather, it's already as dilute as it's going to get. But, it can be true on a local scale. Many pollutants that affect human health and the environment are short-lived; they chemically react or decompose in the atmosphere over time instead of accumulating indefinitely. And, for a lot of chemicals, there are concentration thresholds below which the consequences on human health are negligible.
In those cases, dilution, or really dispersion, is a sound strategy to reduce their negative impacts, and so, in some cases, that's what we do, particularly at major point sources like factories and power plants. One of the tricks to dispersion is that many plumes are naturally buoyant. Naturally, I'm going to use my pizza oven to demonstrate this. Not all, but most pollutants we care about are a result of combustion: burning stuff up. So the plume is usually hot. We know hot air is less dense, so it naturally rises. And the hotter it is, the faster that happens. You can see when I first start the fire, there's not much air movement. But as the fire gets hotter in the oven, the plume speeds up, ultimately rising higher into the air. That's the whole goal: get the plume high above populated areas where the pollutants can be dispersed to a minimally harmful concentration. It sounds like a simple solution - just run our boilers and furnaces super hot to get enough buoyancy for the combustion products to disperse. The problem with that solution is that the whole reason we combust things is usually to recover the heat. So if you're sending a lot of that heat out of the system, just because it makes the plume disperse better, you're losing thermodynamic efficiency. It's wasteful. That's where the stack comes in. Let me put mine on and show you what I mean. I took some readings with the anemometers with the stack on and off. The airspeed with the stack on was around double what it was with it off: about two meters per second compared with one. But it's a little tougher to understand why. It's intuitive that as you move higher in a column of fluid, the pressure goes down (since there's less weight of the fluid above). The deeper you dive in a pool, the more pressure you feel. The higher you fly in a plane or climb a mountain, the lower the pressure. The slope of that pressure-versus-height line is proportional to the fluid's density. You don't feel much of a pressure difference climbing a set of stairs because air isn't very dense. If you travel the same distance in water, you'll definitely notice the difference. So let's look at two columns of fluid. One is the ambient air and the other is the air inside a stack. Since it's hotter, the air inside the stack is less dense. Both columns start at the same pressure at the bottom, but the higher you go, the more the pressure diverges. It's kind of like deep sea diving in reverse. In water, the deeper you go into the dense water, the greater the pressure you feel. In a stack, the higher you are in a column of hot air, the more buoyant you feel compared to the outside air. This is the genius of a smokestack. It creates this difference in pressure between the inside and outside that drives greater airflow for a given temperature. Here's the basic equation for a stack effect. I like to look at equations like this divided into what we can control and what we can't. We don't get to adjust the atmospheric pressure, the outside temperature, and this is just a constant. But you can see, with a stack, an engineer now has two knobs to turn: the temperature of the gas inside and the height of the stack. I did my best to keep the temperature constant in my pizza oven and took some airspeed readings. First with no stack. Then with the stock stack. Then with a megastack. By the way, this melted my anemometer; should have seen that coming. Thankfully, I got the measurements before it melted.
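The stack-effect equation mentioned above boils down to a simple idea: the draft is the difference in weight between a column of ambient air and a column of hotter flue gas. Here's a minimal sketch of that calculation with assumed temperatures and stack heights. It neglects friction and the cooling of the gas on its way up, so the velocities it prints are best-case numbers, but it shows the two knobs at work.

```python
# Minimal sketch of the stack effect: draft pressure is the weight difference
# between a column of outside air and a column of hotter flue gas,
#   delta_P = g * h * (rho_outside - rho_inside),
# with densities from the ideal gas law. Temperatures and heights are assumed.
import math

G = 9.81         # m/s^2
P_ATM = 101_325  # Pa
R_AIR = 287.05   # J/(kg*K), specific gas constant for dry air

def air_density(temp_c):
    """Ideal-gas density of air (kg/m^3) at atmospheric pressure."""
    return P_ATM / (R_AIR * (temp_c + 273.15))

def stack_draft(height_m, t_outside_c, t_flue_c):
    """Draft pressure (Pa) produced by a stack of the given height."""
    return G * height_m * (air_density(t_outside_c) - air_density(t_flue_c))

for height in (1.0, 60.0):  # roughly pizza-oven scale versus industrial scale
    dp = stack_draft(height, t_outside_c=20, t_flue_c=200)
    # Ignoring friction, this pressure difference could accelerate the flue gas to:
    v = math.sqrt(2 * dp / air_density(200))
    print(f"h = {height:5.1f} m: draft ~{dp:6.1f} Pa, ideal velocity ~{v:4.1f} m/s")
```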
My megastack nearly doubled the airspeed again at around three-and-a-half meters per second versus the two with just the stack that came with the oven. There’s something really satisfying about this stack effect to me. No moving parts or fancy machinery. Just put a longer pipe and you’ve fundamentally changed the physics of the whole situation. And it’s a really important tool in the environmental engineer’s toolbox to increase airflow upward, allowing contaminants to flow higher into the atmosphere where they can disperse. But this is not particularly revolutionary… unless you’re talking about the Industrial Revolution. When you look at all the pictures of the factories in the 19th century, those stacks weren’t there to improve air quality, if you can believe it. The increased airflow generated by a stack just created more efficient combustion for the boilers and furnaces. Any benefits to air quality in the cities were secondary. With the advent of diesel and electric motors, we could use forced drafts, reducing the need for a tall stack to increase airflow. That was kind of the decline of the forests of industrial chimneys that marked the landscape in the 19th century. But they’re obviously not all gone, because that secondary benefit of air quality turned into the primary benefit as environmental rules about air pollution became stricter. Of course, there are some practical limits that aren’t taken into account by that equation I showed. The plume cools down as it moves up the stack to the outside, so its density isn’t constant all the way up. I let my fire die down a bit so it wouldn’t melt the thermometer (learned my lesson), and then took readings inside the oven and at the top of the stack. You can see my pizza oven flue gas is around 210 degrees at the top of the mega-stack, but it’s roughly 250 inside the oven. After the success of the mega stack on my pizza oven, I tried the super-mega stack with not much improvement in airflow: about 4 meters per second. The warm air just got too cool by the time it reached the top. And I suspect that frictional drag in the longer pipe also contributed to that as well. So, really, depending on how insulating your stack is, our graph of height versus pressure actually ends up looking like this. And this can be its own engineering challenge. Maybe you’ve gotten back drafts in your fireplace at home because the fire wasn’t big or hot enough to create that large difference in pressure. You can see there are a lot of factors at play in designing these structures, but so far, all we’ve done is get the air moving faster. But that’s not the end goal. The purpose is to reduce the concentration of pollutants that we’re exposed to. So engineers also have to consider what happens to the plume once it leaves the stack, and that’s where things really get complicated. In the US, we have National Ambient Air Quality Standards that regulate six so-called “criteria” pollutants that are relatively widespread: carbon monoxide, lead, nitrogen dioxide, ozone, particulates, and sulfur dioxide. We have hard limits on all these compounds with the intention that they are met at all times, in all locations, under all conditions. Unfortunately, that’s not always the case. You can go on EPA’s website and look at the so-called “non-attainment” areas for the various pollutants. But we do strive to meet the standards through a list of measures that is too long to go into here. And that is not an easy thing to do. 
Not every source of pollution comes out of a big stationary smokestack where it’s easy to measure and control. Cars, buses, planes, trucks, trains, and even rockets create lots of contaminants that vary by location, season, and time of day. And there are natural processes that contribute as well. Forests and soil microbes release volatile organic compounds that can lead to ozone formation. Volcanic eruptions and wildfires release carbon monoxide and sulfur dioxide. Even dust storms put particulates in the air that can travel across continents. And hopefully you’re seeing the challenge of designing a smoke stack. The primary controls like scrubbers and precipitators get most of the pollutants out, and hopefully all of the ones that can’t be dispersed. But what’s left over and released has to avoid pushing concentrations above the standards. That design has to work within the very complicated and varying context of air chemistry and atmospheric conditions that a designer has no control over. Let me show you a demo. I have a little fog generator set up in my garage with a small fan simulating the wind. This isn’t a great example because the airflow from the fan is pretty turbulent compared to natural winds. You occasionally get some fog at the surface, but you can see my plume mainly stays above the surface, dispersing as it moves with the wind. But watch what happens when I put a building downstream. The structure changes the airflow, creating a downwash effect and pulling my plume with it. Much more frequently you see the fog at the ground level downstream. And this is just a tiny example of how complex the behavior of these plumes can be. Luckily, there’s a whole field of engineering to characterize it. There are really just two major transport processes for air pollution. Advection describes how contaminants are carried along by the wind. Diffusion describes how those contaminants spread out through turbulence. Gravity also affects air pollution, but it doesn’t have a significant effect except on heavier-than-air particulates. With some math and simplifications of those two processes, you can do a reasonable job predicting the concentration of any pollutant at any point in space as it moves and disperses through the air. Here’s the basic equation for that, and if you’ll join me for the next 2 hours, we’ll derive this and learn the meaning of each term… Actually, it might take longer than that, so let’s just look at a graphic. You can see that as the plume gets carried along by the wind, it spreads out in what’s basically a bell curve, or gaussian distribution, in the planes perpendicular to the wind direction. But even that is a bit too simplified to make any good decisions with, especially when the consequences of getting it wrong are to public health. A big reason for that is atmospheric stability. And this can make things even more complicated, but I want to explain the basics, because the effect on plumes of gas can be really dramatic. You probably know that air expands as it moves upward; there’s less pressure as you go up because there is less air above you. And as any gas expands, it cools down. So there’s this relationship between height and temperature we call the adiabatic lapse rate. It’s about 10 degrees Celsius for every kilometer up or about 28 Fahrenheit for every mile up. But the actual atmosphere doesn’t always follow this relationship. For example, rising air parcels can cool more slowly than the surrounding air. 
But even that is a bit too simplified to make any good decisions with, especially when the consequences of getting it wrong fall on public health. A big reason for that is atmospheric stability. And this can make things even more complicated, but I want to explain the basics, because the effect on plumes of gas can be really dramatic. You probably know that air expands as it moves upward; there’s less pressure as you go up because there is less air above you. And as any gas expands, it cools down. So there’s this relationship between height and temperature we call the adiabatic lapse rate. It’s about 10 degrees Celsius for every kilometer up, or about 28 degrees Fahrenheit for every mile up. But the actual atmosphere doesn’t always follow this relationship. For example, the surrounding air can get colder with height faster than a rising parcel of air cools as it expands. That leaves the parcel warmer and less dense than its surroundings, so it keeps rising, promoting vertical motion in a positive feedback loop called atmospheric instability. You can even get a temperature inversion where you have cooler air below warmer air, something that can happen in the early morning when the ground is cold. And as the environmental lapse rate varies from the adiabatic lapse rate, the plumes from stacks change. In stable conditions, you usually get a coning plume, similar to what our Gaussian distribution from before predicts. In unstable conditions, you get a lot of mixing, which leads to a looping plume. And things really get weird for temperature inversions because they basically act like lids for vertical movement. You can get a fanning plume that rises to a point, but then only spreads horizontally. You can also get a trapping plume, where the air gets stuck between two inversions. You can have a lofting plume, where the plume is released above the inversion, with stable conditions below and unstable conditions above. And worst of all, you can have a fumigating plume when there are unstable conditions below an inversion, trapping and mixing the plume toward the ground surface. And if you pay attention to smokestacks, fires, and other types of emissions, you can identify these different types of plumes pretty easily.
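As a toy illustration of that idea, the sketch below compares an observed lapse rate to the dry adiabatic rate and names the plume behavior you’d roughly expect. The thresholds and labels are simplified for this example; real stability classification schemes also fold in things like wind speed, sunshine, and cloud cover, and they’re just one input to the dispersion modeling.

```python
# Toy comparison of an observed (environmental) lapse rate to the dry
# adiabatic lapse rate, with the plume behavior each condition tends to
# produce. Simplified thresholds for illustration only.

DRY_ADIABATIC = 9.8  # deg C of cooling per km of height gained

def describe_layer(env_lapse_c_per_km):
    """Classify a layer by its lapse rate (positive = cooling with height)."""
    if env_lapse_c_per_km < 0:
        return "inversion (warmer above): acts like a lid -> fanning or fumigating"
    if env_lapse_c_per_km > DRY_ADIABATIC:
        return "unstable (superadiabatic): strong mixing -> looping plume"
    if env_lapse_c_per_km < DRY_ADIABATIC - 1.0:
        return "stable (subadiabatic): limited mixing -> coning or fanning plume"
    return "near neutral: moderate mixing -> coning plume"

for lapse in (12.0, 9.5, 6.5, -3.0):
    print(f"{lapse:+5.1f} C/km -> {describe_layer(lapse)}")
```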
Hopefully you’re seeing now how much goes into this. Engineers have to keep track of the advection and diffusion, wind speed and direction, atmospheric stability, the effects of terrain and buildings on all those factors, plus the pre-existing concentrations of all the criteria pollutants from other sources, which vary in time and place. All that to demonstrate that your new source of air pollution is not going to push the concentrations at any place, at any time, under any conditions, beyond what the standards allow. That’s a tall order, even for someone who loves Gaussian distributions. And often the answer to that tall order is an even taller smokestack. But to make sure, we use software. The EPA has developed models that can take all these factors into account to simulate, essentially, what would happen if you put a new source of pollution into the world, and what difference the height of the stack makes. So why are smokestacks so tall? I hope you’ll agree with me that it turns out to be a pretty complicated question. And it’s important, right? These stacks are expensive to build and maintain. Those costs trickle down to us through the costs of the products and services we buy. They have a generally negative visual impact on the landscape. And they have a lot of other engineering challenges too, like resonance in the wind. And on the other hand, we have public health, arguably one of the most critical design criteria that can exist for an engineer. It’s really important to get this right. I think our air quality regulations do a lot to make sure we strike a good balance here. There are even rules limiting how much credit you can get for building a stack higher for greater dispersion, to make sure that we’re not using excessively tall stacks in lieu of more effective, but often more expensive, emission controls and strategies. In a perfect world, none of the materials or industrial processes that we rely on would generate concentrated plumes of hazardous gases. We don’t live in that perfect world, but we are pretty fortunate that, at least in many places on Earth, air quality is something we don’t have to think too much about. And we have a relatively small industry of environmental professionals to thank for it, people who do think about it, a whole lot. You know, for a lot of people, this is their whole career; what they ponder from 9 to 5 every day. Something most of us would rather keep out of mind, they face head-on, developing engineering theories, professional consensus, sensible regulations, modeling software, and more - just so we can breathe easy.
[Note that this article is a transcript of the video embedded above.] Flaming Gorge Dam rises from the Green River in northern Utah like a concrete wedge driven into the canyon, anchored against the sheer rock walls that flank it. It’s quintessential, in a way. It’s what we picture when we think about dams: a hulking, but also somehow graceful, wall of concrete stretching across a narrow rocky valley. But to dam engineers, there’s nothing quintessential about it. So-called arch dams are actually pretty rare. For reference, the US has about 92,000 dams listed in the national inventory. I couldn’t find an exact number, but based on a little bit of research, I estimate that we have maybe around 50 arch dams - it’s less than a tenth of a percent. The only reason we think of arch dams as archetypal is because they’re so huge. I counted 11 in the US that have their own visitor center. There just aren’t that many works of infrastructure that double as tourist destinations, and the reason for it is, I think, kind of interesting. Because an arch dam isn’t just an engineering solution to holding back water, and it’s not just a solution to holding back a lot of water. It’s all about height, and I built a little demo to show you what I mean. I’m Grady, and this is Practical Engineering. Engineers love categories, and dams are no exception. You can group them in a lot of ways, but mostly, we care about how they handle the incredible force of water they hold back. Embankment dams do it with earth or rock, relying on friction between the individual particles that make up the structure. Gravity dams do it with weight. Let me show you an example. I have my tried and trusted acrylic flume with a small plastic dam. Once this is all set up, I can start filling up the reservoir. This little dam is a little narrower than the flume. It doesn’t touch the sides, so it leaks a bit. The reason for that will be clear in a moment. And hopefully you can see what’s about to happen. This gravity dam doesn’t have much gravity in it, so it doesn’t take much water at all before you get a failure. I’m counting failure as the first sign of movement, by the way. That’s when the stabilizing forces are overcome by the destabilizing ones. And the little dam by itself could hold until my reservoir was about a quarter of the way to the top. Gravity dams get their stability against sliding from… you guessed it… friction. Bet you thought I was going to say gravity. And actually, it kind of is gravity, since frictional resistance is a function of just two variables: the normal force (in other words, the weight of the structure) and a coefficient that depends on the two materials touching. Engineers analyze the stability of gravity dams in cross-section, essentially taking a small slice of the structure. You want every slice to be able to support itself. That’s why I didn’t want the demo touching the sides of the flume; it would add resistance that doesn’t actually exist in a cross-section. The destabilizing force is hydrostatic pressure from the reservoir, which increases with depth. And the stabilizing force is friction. There are some complexities to this that we’ll get into, but very generally, as long as you have more friction than pressure, you’re good; you have a stable structure. So let’s add some normal force to the demo and see what happens. [Beat] You can see my little reservoir gets a little higher before the dam fails, about halfway to the top. And we can try it again with more weight. 
But the result gets a little more interesting… the dam didn’t actually slide this time, but it still failed. Turns out gravity dams have two major failure modes: sliding and overturning. Resistance to sliding comes from friction, which really doesn’t depend on how the weight of the dam is distributed. That’s not true for overturning failures. Let’s look back at our cross-section. For a unit width of dam, the hydrostatic pressure from the reservoir looks like this. Pressure increases with depth. And the area under this line is the total force pushing the dam downstream. We can simplify that distribution and treat it like it’s a single force, and it turns out when you do that, the force acts a third of the way up the total depth of water. Most dams want to rotate about the downstream toe, so you have a destabilizing force offset from the point of rotation. In other words, you have a torque, also called a moment. The dam has to create an opposite moment around that point to remain stable. A moment, or torque, is calculated as the force multiplied by its perpendicular distance from the point of rotation. So, the further the center of mass is from the downstream toe, the more stable the structure is, and the demo shows it too. Here’s where we left the weights the last time, and let’s see it happen again. The reservoir makes it about two-thirds of the way up the walls before the dam overturns. Let’s make a simple shift. Just move the weights further upstream and try again. It’s not a big difference. The reservoir reaches about three-quarters of the way up before we see a sliding failure, but shifting the weights did increase the stability. And this is why a lot of gravity dams have a fairly consistent shape, with most of the weight concentrated on the upstream side, and usually a sloped or stepped downstream face.
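To put rough numbers on that moment balance, here’s a sketch of an overturning check for a simple rectangular section, per unit width, ignoring uplift for now (that complication is coming). The dimensions and unit weights are arbitrary illustrative values, not a real design check, and a real analysis would look at several load cases and both failure modes at once.

```python
# Back-of-envelope overturning check for a rectangular gravity dam section,
# per unit width, ignoring uplift. Dimensions and unit weights are arbitrary
# illustrative values, not a real design.

GAMMA_W = 9.81      # unit weight of water, kN/m^3
GAMMA_CONC = 23.5   # unit weight of concrete, kN/m^3 (round number)

def overturning_factor(height_m, base_m, water_depth_m):
    # Driving side: hydrostatic force = 1/2 * gamma_w * H^2, acting H/3 up.
    p_force = 0.5 * GAMMA_W * water_depth_m ** 2
    overturning = p_force * (water_depth_m / 3)

    # Resisting side: the block's weight times the lever arm from its center
    # of mass to the downstream toe (base/2 for a simple rectangle).
    weight = GAMMA_CONC * height_m * base_m
    resisting = weight * (base_m / 2)

    return resisting / overturning   # factor of safety against overturning

for depth in (4, 6, 8):
    fs = overturning_factor(height_m=8, base_m=5, water_depth_m=depth)
    print(f"water depth {depth} m -> factor of safety = {fs:.1f}")
```

Shifting the center of mass upstream, like moving the weights in the demo, is just a way of stretching that resisting lever arm.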
Interestingly, you can use the force of water against itself in a way. Watch what happens when I turn my little model around. Now the hydrostatic pressure applies both a destabilizing and a stabilizing force, so you get more resistance for a given depth. A lot of deployable temporary storm barriers and cofferdam systems take advantage of this kind of configuration. You can imagine if I extended the base even further, I could create a structure that was self-stable from its geometry alone. The weight of the water on the footing would overcome the lateral pressure. But there’s a catch to this. This is fully stable now, but watch what happens when I give the dam just a bit of a tilt. All of a sudden, it’s no longer stable. This might seem kind of intuitive, but I think it’s important to explain what’s actually going on. Hydrostatic pressure from the reservoir doesn’t only act on the face of a dam. With smooth plastic on smooth plastic, you get a pretty nice seal, but as soon as even a tiny gap opens, water gets underneath. Now there’s upward pressure on the bottom of the dam as well. If you’re depending on the downward force of a dam’s weight for stability, it’s easy to see why an upward force is a bad thing. And it’s especially dramatic in the example with the upstream footing. In that case, the downward pressure of the reservoir is acting as a stabilizing force, but if water can get underneath that footing, it basically cancels out. The pressure on the bottom is the same as the pressure on the top. But this isn’t only an issue in that case. The ground isn’t waterproof. In fact, I’ve done a video all about the topic. Soil and rock work more like a sponge than a solid material, and water can flow through them. That’s how we get aquifers and wells and springs and such. But it’s a problem for gravity dams, because water can seep below the structure and apply pressure to the bottom, essentially counteracting its weight. We call it uplift. Looking back at the cross-section, we can estimate this. Of course, you have the triangular pressure distribution along the upstream face. But at the upstream heel, you have the full hydrostatic pressure also pushing upward. And at the downstream toe, you have no pressure (it’s exposed to the atmosphere). So, now you have a pressure distribution below the dam that looks like this. Of course, this part can get a lot more complicated since most dams don’t sit flush with the ground, and many are equipped with drains and cutoff walls, so definitely go check that other video out if you want to learn more. But let me show you the issue this causes with some recreational math on our cross-sectional slice of the dam. The taller the dam, the greater the uplift force. That happens linearly. In other words, the force is proportional to the depth of the reservoir. But look at the lateral force. Again, remember it’s the area under this triangle. Maybe you remember that formula: one-half times base times height. Well, the height is the depth of the water. And the base is also a function of the depth. More specifically, it’s the unit weight of water times depth. Multiply them together, and you see the challenge: the force increases as a function of the depth squared. So for every unit of additional height you want out of a gravity dam, you need significantly more weight to resist the forces, which means more material and thus a lot more cost.
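Here’s that recreational math as a tiny sketch, per unit width of dam: the lateral force is the area under the pressure triangle, one-half times the unit weight of water times the depth squared, and the uplift (with the simple triangular distribution and no drains) is the average of full pressure at the heel and zero at the toe, times the base width. The 10-meter base is just an arbitrary number to have something to multiply.

```python
# Lateral hydrostatic force vs. a simple triangular uplift distribution,
# per unit width of dam. The 10 m base width is an arbitrary example value.

GAMMA_W = 9.81  # unit weight of water, kN/m^3

def lateral_force(depth_m):
    """Area under the triangular pressure diagram: 1/2 * gamma_w * H^2."""
    return 0.5 * GAMMA_W * depth_m ** 2

def uplift_force(depth_m, base_m):
    """Triangular uplift: average of full heel pressure and zero toe pressure."""
    return 0.5 * (GAMMA_W * depth_m) * base_m

for depth in (5, 10, 20, 40):
    print(f"H = {depth:2d} m -> lateral = {lateral_force(depth):7.0f} kN/m, "
          f"uplift (10 m base) = {uplift_force(depth, 10):5.0f} kN/m")
```

Double the depth and the lateral force quadruples; that’s the whole problem in one line of arithmetic.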
Hopefully all this exposition is starting to point toward a solution to this problem of the loads outpacing the stability as a reservoir increases in height. Dams don’t actually float in space like my demonstration and graphics show. You know, by necessity, they extend across the entire valley and usually key into the abutments on either side. Naturally, that connection at the sides is going to offer some resistance to the forces dams need to withstand. And if you can count on that resistance, you can significantly lower the mass, and thus the cost, of the structure. But, again, this gets complicated. Let’s go back to the demo. Now I’m going to replace my gravity dam with something much simpler. Just a sheet of aluminum flashing, and, to simulate that resistance provided by socketing the structure into the earth, I’ve taped it to the bottom and sides… with some difficulty, actually. When I fill up the reservoir with water, it holds just fine. There’s a little leaking past my subpar tape job, but this is a fully stable structure. And I think the comparison here is pretty stark. When you can develop resistance from the sides, you can get away with a lot less dam. But it’s harder than you might think to do that. For one, the natural soil or rock at a dam site might not be all that strong. The banks of rivers aren’t generally known for their stability, so the prospect of transferring enormous amounts of force into them rarely makes a lot of engineering sense. But the other challenge is in the dam itself. Take a look back at this demo. See how my dam is bending under the force of the water. It’s holding there, but, you know, we don’t actually build dams out of aluminum flashing. Resisting loads in this way basically treats the dam like a beam, like a sideways bridge girder. Except, unlike girder bridges that usually only span up to a few hundred feet, dams are often much longer. Even the stiffest modern materials, like prestressed concrete boxes, would just deflect too much under load to transfer all the hydrostatic pressure across a valley into the abutments. Plus we usually don’t like to rely on steel too much in dams because of issues with corrosion and longevity. So where a typical beam experiences both tensile and compressive stress on opposite sides, we really need a way to transfer all that load while creating only compressive stress in the material. I’m sure you see where I’m going with this. How have we been building bridges for ages from materials like masonry where tensile stress isn’t an option? It’s arches! The arch is a special shape in engineering because you can transfer loads by putting the material in compression only, allowing for simpler, cheaper, and longer-lasting materials like masonry and concrete. You basically co-opt the geology for support, reducing the need for a massive structure. For completeness’s sake, let me show you how it works in the demo. I’ve formed a little arch from my thin sheet of aluminum. Now when I fill up the reservoir, there’s no deflection like in the previous example. And again, side by side, it’s easy to see the benefits here. You get a lot more efficiency out of your materials than you do with an earthen embankment dam or a gravity structure.
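A classic back-of-the-envelope way to see that efficiency is so-called cylinder theory: treat each horizontal slice of an arch dam as part of a thin ring carrying the water pressure in pure compression, so the required thickness is roughly the pressure times the arch radius divided by the allowable compressive stress. Here’s a sketch of that; the radius and allowable stress are invented placeholder numbers, and real arch dams are designed with full three-dimensional analysis, not this shortcut.

```python
# First-pass "cylinder theory" sizing of an arch dam: each horizontal slice
# is treated as a thin ring in pure compression, so t = p * r / sigma_allow.
# The radius and allowable stress are invented placeholders.

GAMMA_W = 9.81          # unit weight of water, kN/m^3
SIGMA_ALLOW = 4_000.0   # allowable compressive stress in concrete, kPa (~4 MPa)
RADIUS = 150.0          # radius of the arch at this elevation, m

def arch_thickness(depth_m, radius_m=RADIUS, sigma_kpa=SIGMA_ALLOW):
    """Thin-ring thickness needed to carry hydrostatic pressure in compression."""
    pressure_kpa = GAMMA_W * depth_m       # hydrostatic pressure at this depth
    return pressure_kpa * radius_m / sigma_kpa  # thickness in meters

for depth in (20, 60, 100):
    print(f"{depth:3d} m below the surface -> arch thickness ~ "
          f"{arch_thickness(depth):5.1f} m")
```

Even with conservative numbers, that’s far less concrete than a section sized to hold the same pressure back with weight alone, which is roughly the point the demo makes.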
Of course, there are some drawbacks here. For one, arches create horizontal forces at the supports, called thrusts, that have to be resisted. Sites that use this design really require strong, competent rock in the abutments to withstand the enormous loads. And just like with bridges, the span matters. The wider the valley, the bigger the arch needs to be, so these dams generally only make sense in deep gorges and steep, narrow canyons. The engineering is a lot more complicated, too. You can’t use a simple 2D cross-section to demonstrate stability. The structural behavior is inherently three-dimensional, which is tougher to characterize, especially when you consider unusual conditions like earthquakes and temperature effects. And since they’re lighter, arch dams don’t resist uplift forces very well, making foundation drainage systems more critical. All this means that it’s really only a solution that makes economic sense in a narrow range of circumstances, one of the most important being height. For smaller dams, the additional complexity and expense of designing and building an arch aren’t justified by the structural efficiency. Gravity and embankment dams are much more adaptable to a wider range of site conditions. And there are other types of dams, too, that blend these ideas. Multiple-arch dams use a series of smaller arches supported by buttresses, dividing the span into more manageable components. Even what is perhaps the most famous arch dam in the world - Hoover Dam - isn’t a pure arch structure. Technically, it’s a gravity-arch dam, meaning it resists part of the water load through mass while also distributing the forces into the canyon through arch action. The proportions are carefully balanced to suit the unique site conditions and a canyon that’s wider than the ones most arch dams are built in. And so, when you look at the tallest dams on Earth, one structural form dominates. By my estimation, around 40 percent of the tallest 200 dams in the world incorporate an arch into their design. There aren’t that many places where it makes sense, but when you consider what it takes to hold a reservoir back in a narrow canyon valley, I think the case for arches is pretty clear.