[Note that this article is a transcript of the video embedded above.] This is the Veluwemeer (velOOwemeer) Aqueduct in Harderwijk (HAR-der-vehk), Netherlands. It solves a pretty simple problem. If you put a bridge for vehicles over a navigable waterway, you often have to make it either very high up with long approaches so the boats can pass underneath or make it moveable, which is both complicated and interrupts the flow of traffic, wet and dry. But if you put the cars below the water, both streams of traffic can flow uninterrupted with a fairly modest bridge. Elevated aqueducts aren’t that unusual, but this one is just so striking to see, I think, because it looks just like a regular highway bridge, except…the opposite. When I was a little kid, I read this book, The Hole in the Dike, about a Dutch boy who plugged a leak with his finger to save his town from a flood. And ever since then, as this little kid grew up into a civil engineer with a career working on dams and hydraulic structures, I’ve been kind of constantly exposed to this idea that the Netherlands is this magical country full of fascinating feats of civil engineering, like Willy Wonka’s chocolate factory but for infrastructure. I’m not necessarily proud to say this, but I think it’s true for a lot of people (especially here in the US) that my primary cultural touchpoint with the Netherlands is just that they’re really good at dealing with water. You know, you don’t have to browse the internet for very long to find viral (and sometimes dubious) posts about Dutch infrastructure projects. Sometimes, it feels like half of my comment section on YouTube is just people telling me that the Dutch do it better. I’m naturally skeptical of things that seem larger-than-life, especially when it comes to engineering. And without context, I think it’s hard to separate myth from facts (this TikTok video being a myth, by the way.) Here’s the actual scale of a cruise ship compared to the aqueduct. So let’s take a look at a few of these projects and find out if the Dutch really have the rest of the world outclassed when it comes to waterworks. And I’ll do my best to pronounce the Dutch words right too. Ik ben Grady, en dit is Practical Engineering. The first hint that the Dutch really do lead the world in water infrastructure is in the name of the country itself: The Netherlands translates literally to the lowlands, and that’s a pretty good description. A large portion of the country sits on the delta of three major rivers - the Rhine, the Meuse/Maas (MAHss), and the Scheldt (SHELLt) - that drain a big part of central Europe into the North Sea. Those rivers branch and meander through the delta, forming a maze of waterways, islands, inlets, and estuaries along the coast. About a quarter of the country sits below sea level, which creates a big challenge because it’s right next to the sea! As early as the Iron Age, settlers were involved in managing water. Large areas of marshland were drained with canals and ditches to convert them into land that could be used for agriculture. These plots of land, which, through human intervention, were hydrologically separated from the landscape, became known as polders. And the tradition of their engineering would continue for centuries to the present day. Unfortunately, that marshland, being full of organic material, decomposed over time. That, combined with the drainage of groundwater, caused the polders to sink and subside, increasing their vulnerability to floods. And that is kind of the heart of it. 
The Netherlands is a really valuable and strategic area for a lot of reasons: it’s flat; it has great access to the sea and major rivers providing for fishing and trade; it has prime conditions for farming and pastures, making it the second largest exporter of agricultural products in the world. The problem is that all those factors come with the downside of making the country extremely susceptible to floods, both from the North Sea and the major rivers that flow into it. So for basically all of its history, people were building dikes, embankments of compacted soil meant to keep water out of low-lying areas. Over the centuries, huge portions of the sinuous Dutch coastline became lined with dikes, and the individual polders were often ringed with dikes as well to keep the interior areas dry. Of course, you still get rain inside a polder, plus irrigation runoff and sometimes groundwater, so they have to be continuously pumped out. And before the widespread use of electric motors and combustion engines, the Dutch used the source of power they’re famous for: the wind. Windmills - or more accurately windpumps, since they weren’t milling anything in this context - could be used to turn paddle wheels or Archimedes screws to move water up and over dikes, keeping canals and ditches within the polders from overflowing. Over time, poldering dry-ish land, the Dutch realized they could use exactly the same technique to reclaim land from lakes. Typically land reclamation is done by using fill - soil and rock brought in from elsewhere to raise the area above the water. But it’s not the only way to do it, and it’s not that useful if you want to use that area for agriculture since the good soil is under the fill. Another option is to enclose an area below the water level, and then just get rid of the water. In this way, you can create arable land just for the cost of a dike and a pump. If you love cheese, you might be interested to learn that one of the first polders in the Netherlands reclaimed from a lake was Beemster. The soil of the ancient marsh provides a unique flavor of the famous Beemster cheese. One glaring issue with reclaiming land by drawing down the water instead of building up is that the low-lying polders are still vulnerable to floods. In 1916, a huge storm in the North Sea coincided with high flows in several rivers, flooding the Zuiderzee (ZIder-ZAY), a large, shallow bay between North Holland and Friesland (FREEZE-lahnd). The flood broke through several of the dikes, leading to catastrophic damage and casualties. Although the idea had been in discussion for years, the event provided the impetus for what would become one of the grandest hydraulic engineering projects in the world. One of the major issues with the Zuiderzee (ZIder-ZAY) flooding from a surge in the level of the North Sea is the sheer length of the coastline that has to be protected. Building adequately large and strong enough dikes to protect it all would be prohibitively expensive and just plain unrealistic. So Dutch engineers devised a deceptively simple solution: just shorten the coastline. If the effective coast of the Zuiderzee (ZIder-ZAY) could be substantially shorter, resources could go a lot further toward protecting the area against floods. So that’s just what they did. Between the late 1920s and early 1930s, a 20-mile (or 32-kilometer) dam and causeway called the Afsluitdijk (AWF-schlite-dike) was built across the Zuiderzee (ZIder-ZAY), cutting it off from the North Sea. 
Construction spread outward from four points, the coast on either side, and two small artificial islands built specifically for the project. The original dam was built from stones, sand, glacial till, stabilizing “mattresses” of brushwood, and thousands upon thousands of hand-laid cobblestones. Cutting off the Zuiderzee (ZIder-ZAY) from the ocean turned it into a large, and ultimately freshwater lake called the Ijsselmeer (ICE-el-meer), named for the river that empties into it. But that inflow is an engineering challenge. Without a way for it to reach the sea, the lake would just overflow. So, these sluices are like gigantic outflow valves that allow excess freshwater constantly building up in the Ijsselmeer (ICE-el-meer) to be discharged into the sea, as it would have been back when it was still the Zuiderzee (ZIder-ZAY). The sluices, which are titanic hydraulic engineering structures themselves, typically use gravity to drain water during low tide. When that passive discharge isn’t enough, new high-volume pumps can be used to make sure the level of the Ijsselmeer (ICE-el-meer) stays within the ideal range. Over the last few years, the Afsluitdijk (AWF-schlite-dike) has been undergoing a major facelift. With sea levels rising and the frequency of extreme weather events rising with it, the Dutch have completed a major overhaul, raising the crest of the dam by about 2 meters, adding thousands of huge concrete blocks to break waves and strengthen the structure. The larger blocks that are always in contact with the sea are truly gigantic, over 70,000 of them weighing six and a half metric tons EACH! The project also included upgrades to the lock complexes and sluices. And the highway that runs along the top is also getting upgrades (including, in true Dutch fashion, the bike lanes too). And human passage isn’t the only consideration for the project either. The Fish Migration River will allow fish to swim between the North Sea and the Ijsselmeer (ICE-el-meer) and river ecosystems upstream. The stark contrast between freshwater and saltwater is hazardous to fish, so the migration river spreads out the salinity gradient into something more manageable. It’s like a fish ladder, but on top of having an elevation gradient, it also is a ramp of saltiness. With the shallow Zuiderzee (ZIder-ZAY) protected from the North Sea, the Netherlands saw an opportunity to increase its food supply by creating new land. Over the middle decades of the 20th century, the Dutch built four gigantic polders in areas that were once the seafloor. These polders were built using the same principles as before, just with scaled-up 20th-century technology. There are even examples of our old friends, Archimedes screws being used, albeit with modern electric motors. Wieringermeer (veeRING-er-meer) and Noordoostpolder (NORD-OHST-polder) were built first, but the Dutch faced a problem. With such large areas of land dried up, the groundwater in adjacent areas flowed out and into the polders, causing subsidence and loss of freshwater needed for agriculture. The following polders, a pair of adjacent tracts called Eastern and Southern Flevoland (FLAYvo-lahnd), avoided this by retaining a small series of connected lakes. These bordering lakes keep the polders hydrologically isolated from the mainland, and this is also where you’ll find the Veluwemeer (velOOwemeer) aqueduct. The later three polders became Flevoland (FLAYvo-lahnd), a totally new province of dry land reclaimed from the sea. 
A succession of carefully selected crops were grown to rehabilitate the salty soil, making it fertile enough to farm. All you need to do to see how well it worked is look at these aerial photos of all the farmland in Flevoland (FLAYvo-lahnd)! There were plans for a fifth polder called the Markerwaard (MAHRKer-vahrd), and a huge dike was actually constructed for it. Hangups ranging from the German occupation of the Netherlands in the Second World War to later environmental concerns stopped the polder from being completed. The dike did create another freshwater reservoir, the Markermeer, and only recently, an artificial archipelago called the Marker Wadden (MAHRKer-vahdden) was built as a nature conservation project and host to migratory birds, fish, and ecotourists alike. Even as the Zuiderzee (ZIder-ZAY) Works protected parts of the Netherlands, many other parts of the country were still facing threats from flooding. In the winter of 1953, an enormous storm in the North Sea raised a major storm surge, crashing into the delta and overwhelming much of the country’s existing and extensive flood control structures. A staggering 9% of all of the farmland in the whole country was flooded, 187,000 farm animals drowned, nearly 50,000 buildings were damaged or destroyed, and over 1,800 people perished. It was one of the worst disasters in the history of the country. Just as with the Zuiderzee (ZIder-ZAY), the extraordinary length of the coastline of this area meant that adequately strengthening all the dikes in response to the storm wasn’t feasible. So, an incredibly intricate plan called the Deltawerken or Delta Works was put into motion to effectively shorten the coastline with a series of 14 major engineering projects, including dams, dikes, locks, sluices, and more. Unlike with the Zuiderzee (ZIder-ZAY) Works, fully enclosing the area and cutting off the sea wasn’t an option. Firstly, the Rhine and Meuse/Maas (MAHss) have gigantic flows. The Rhine is one of the largest rivers in Europe, and that can’t just be walled off. There are also concerns about environmental impacts and ensuring the easy movement of the huge amount of shipping that uses this waterway. So, many of these structures have to be functionally non-existent until they’re needed. The resulting projects, along with the Zuiderzee (ZIder-ZAY) Works, have shortened the Dutch coast by more than half since the 19th century. These feats are so impressive they are on the American Society of Civil Engineers’ list of wonders of the modern world. And it’s easy to see why when you take a look. This is the Oosterscheldekering (OH-ster-SHELL-de-keering), the largest of all the Delta Works. It was initially designed to be a closed dam, similar in some ways to the Afsluitdijk (AWF-schlite-dike). If constructed as initially conceived, it would have created another large freshwater lake. But, by the time it was under construction in the 1970s, environmental impacts were much more appreciated than they were in the 20s and 30s. So the dam was designed to include huge sluice gates to allow massive tidal flows during normal conditions while retaining the ability to fully close off the inland portion of the Delta from the sea during storm conditions. The Oosterscheldekering (OH-ster-SHELL-de-keering) comprises two artificial islands and three storm surge barrier dams connecting them. The larger of the islands also contains a lock, allowing for ships to pass through.
The floodgates are staggering in scale; there are 62 steel doors, each 138 feet (or 42 meters) wide and weighing up to 480 metric tons! Even the piers between them were a monumental effort. They were built offsite, maneuvered into place with custom-built ships, then filled with sand and rock to sink them into place. Special ships also had to compact the seabed with vibration before placing the pillars. Another notable structure in the Delta Works is the Stormvloedkering Hollandse IJssel (storm-FLODE-keering--hoLAHNDse-ICE-el), a storm surge barrier protecting Europe’s largest seaport. The project has it all: a lock to allow for the passage of ships, a bridge for road traffic with a fixed truss and a moveable bascule portion crossing the lock, and two gigantic, moveable storm surge barriers crossing the main sluice. Each of these barriers is strengthened by a truss arch which makes them look like sideways bridges when viewed from above. And then, there’s the Maeslantkering. This is probably the most impressive storm surge barrier on the planet. Those tiktoks showing out-of-scale cruise ships crossing Veluwemeer (velOOwemeer) should have just shown actual gigantic ships cruising through the huge ship canal safeguarded by the Maeslantkering. It’s hard to communicate the scale of the two gates; they’re considered one of the largest moving structures on earth. And moving them is a process. The gates normally sit in dry docks. When it’s time to close them, the dry docks are flooded, and the hollow gates float in place. Then they’re pivoted around gigantic ball-and-socket joints at the ends of the truss arm. Each door is 690 feet (or 210 meters) wide, and once in place, they are flooded with water, so they sink to the bottom, completely blocking even the fiercest storm surge. In the event that the doors remain closed long enough for the flow of the Rhine to build up dangerously high on the inland side, they can be partially floated, allowing for excess river water to run out to sea. Since its completion in 1997, aside from annual testing, the Maeslantkering (mahs-LAHNT-keering) has only been closed twice: once in 2007 and again in 2023. And to me, that tells the story of Dutch waterworks more than anything else. It’s all a huge exercise in cost-benefit analysis. Look at two alternate realities: one where the Delta Works weren’t built and one where they were. And then just compare the costs. In one case, the costs are human lives, property damage, agriculture losses from saltwater, and all the disaster relief efforts associated with, so far at least, just two big storms. And in the other case, the costs are associated with designing, building, and maintaining an infrastructure program that rivals anything else on the globe. The question is simple: which one costs more? Look at many other places in the world, and the answer would probably be the Delta Works. Just the capital cost was around $13 billion dollars, and that doesn’t include the operation and maintenance, or environmental impacts of such massive projects. But in the Netherlands, where a quarter of the country sits below sea level, it’s a fraction of the cost of inaction. In the United States, most flood control projects are designed to protect up to the 1-in-100 probability storm. In other words, in a given year, there’s a 99% chance that a storm of that magnitude doesn’t happen. In the Netherlands, those levels of protection are much higher. 
Protection standards for the rivers go from 1-in-250 all the way to 1-in-1,250-year events, and flood protection from the North Sea goes up to a 1-in-10,000-year event. It only makes sense because practically the entire country is a floodplain; massive investment in protection from flooding is the only way to exist. And those projects come with other costs too. The Zuiderzee (ZIder-ZAY) Works cost the entire area’s fishing industry their livelihoods, and some consider converting such a large estuary into a freshwater lake one of the country’s greatest ecological disasters. So there are no easy answers, and the Netherlands’ battle against the sea will never really be over. Major waterworks are just the reality of the country, and they keep evolving their methods. One example is the Room for the River program, which is restoring the natural floodplain along rivers in the delta. Another is the sand engine, an innovative beach nourishment project that relies on natural shoreline processes to distribute sand along the coast. The Dutch government expects the North Sea to rise 1 to 2 meters (or 3 to 7 feet) by the end of this century, meaning they’ll have to spend upwards of 150 billion dollars just to maintain the current level of protection. That sounds like a staggering cost, and it is, but consider this: that investment in protection for a major part of the country over three-quarters of a century is approximately equal to the economic impact of Hurricane Katrina, a single storm event in the US. Of course, the damage during Katrina was amplified by engineering errors, and we’re far from comparing apples to apples, but I think it’s helpful to look at the scale of things. Decisions of this magnitude are difficult to make, and even harder to execute, because we can’t visit those alternate realities to see how they play out. But what we can do is look at the past to see how decisions have played out historically, and there’s no place on Earth with a longer history of major public water projects than the Netherlands. In fact, the US Army Corps of Engineers and the Dutch government agency in charge of water, the Rijkswaterstaat (rikes-VAHter-stat), have had a memorandum of agreement since 2004 to share technical information and resources about water control projects. And in the aftermath of Hurricane Katrina, the Army Corps consulted with the Rijkswaterstaat (rikes-VAHter-stat) to help decide how to rebuild New Orleans’s flood defense system. In 2021, those systems were put to the test when the region was pummeled by Hurricane Ida. It was an extremely powerful storm, and the torrential rains and violent winds did enormous damage. But the storm surge was repelled by the levees, barriers, and floodgates built with the assistance of Dutch waterworks engineers. Many signs point to storms getting stronger and surges getting higher, which means that practically the whole world is in an uphill battle with floods. So we all benefit from that relatively small country with its low-lying delta lands, buttressed against the sea, and the expertise and knowledge gained by Dutch engineers through the centuries.
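A quick footnote on those return periods, because the numbers can feel abstract: an annual exceedance probability compounds over the life of whatever you’re protecting. Here’s a minimal sketch of that arithmetic; the 50-year horizon is just an illustrative service life, and it assumes every year is independent and the climate is stationary, which real designers wouldn’t.

```python
# How likely is at least one "design" flood over a planning horizon?
# Assumes independent years and a stationary climate (a simplification).
def prob_at_least_one(return_period_years: float, horizon_years: int) -> float:
    annual_p = 1.0 / return_period_years  # annual exceedance probability
    return 1.0 - (1.0 - annual_p) ** horizon_years

for rp in (100, 250, 1_250, 10_000):  # US 1-in-100 vs. the Dutch standards above
    print(f"1-in-{rp}: {prob_at_least_one(rp, 50):.1%} chance in 50 years")

# Roughly: 1-in-100 -> 39.5%, 1-in-250 -> 18.2%,
#          1-in-1,250 -> 3.9%, 1-in-10,000 -> 0.5%
```

Seen that way, a 1-in-100 standard quietly accepts nearly a two-in-five chance of at least one design-level flood over a 50-year life, while the Dutch standards push that residual risk down by an order of magnitude or more.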
[Note that this article is a transcript of the video embedded above.] I am on location in downtown San Antonio, Texas, where crews have just finished setting up this massive 650-ton crane. The counterweights are on. The outriggers are down. And the jib, an extension for the crane's telescoping boom, is being rigged up. This is the famous San Antonio River Walk, a city park below street level that winds around the downtown district. It’s one of the biggest tourist attractions in the state, connecting shops, restaurants, theaters, and Spanish missions (the most famous of them being the Alamo). Every year, millions of people come to see the sights, learn some history, and maybe even take a tour boat on the water. It’s easy to enjoy the scenery without considering how it all works. But, how many rivers do you know that stay at an ideal, constant level, just below the banks year-round? One of the critical structures that make it all possible is due for some new gates, and it’s going to be a pretty interesting challenge to replace them without draining the whole river in the process. I’ve partnered up with the City of San Antonio and the San Antonio River Authority to document the entire process so you can see behind the scenes of one of my favorite places. I’m Grady, and this is Practical Engineering. After a catastrophic flood in 1921 took more than 50 lives in San Antonio, the city took drastic measures to try and protect the downtown area from future storms. Back when my first book came out, I took a little tour of some of those measures, one historical - Olmos Dam - and one more modern - the flood diversion tunnel that runs below the city. But another of those projects eventually turned into one of San Antonio’s crown jewels. A major bend in the river, right in the heart of downtown, was cut off, creating a more direct path for floodwaters to drain out. But rather than fill in the old meander, the city decided to keep it, recognizing its value as a park. Gates were installed at both connections, allowing the bend to be isolated from the rest of the river. Later, a dam was built downstream on the San Antonio River with two floodgates. During normal flows, these floodgates control the level upstream on the river, maintaining a constant elevation for the Great Bend and the cutoff. If a flood comes, the gates at the bend can be shut to maintain a constant level there, and the dam’s floodgates can be opened to let the floodwaters pass downstream. Essentially, this pair of floodgates is a pivotal part of the San Antonio River Walk. They hold back flow during sunny weather to keep water levels up, and they lower to release water during storms to keep downtown from being flooded. They were installed way back in 1983 and were already planned for replacement. Then this happened. One of the floodgates’ gearboxes had a nut with threads that had worn down, and eventually stripped out. It caused one side of the gate to drop, damaging several components and rendering the floodgate inoperable. The City of San Antonio immediately installed stop logs upstream of the gate to block the flow and prevent the water level in the River Walk from dropping. But the gate is still unable to lower in the event of a flood, halving the capacity of this important dam. So they sprang into action to design replacements for these old gates. It’s been a long road finding a modern solution that fits within this existing structure. But it’s finally time to remove the old gates and bring this dam into the 21st century.
There’s a lot of work to do before the broken gate can come out. The first job is just to get the water out. This dam has a place for stoplogs, both upstream and downstream of each gate. Historically, they’d be wood, hence the name, but modern stoplogs are heavy steel beams that stack together to create a relatively watertight bulkhead on either side. Those stoplogs have been installed since the gate went out of service, and while they hold back a whole lot, they aren’t completely watertight. Inevitably, some water gets through to fill up the area between them, making it challenging to work in this area. The contractor has brought in a large diesel pump and perched it on the bank next to the broken gate. They get it running, and it’s not long at all before the area between the upstream and downstream stoplogs is dry enough to work. The first thing to go is the drive shaft between the two gate operator gearboxes. When these gates are functioning, this shaft delivers power to the opposite side of the gate and keeps both sides raising or lowering at the same rate. But now it’s just in the way and needs to come out. It is disconnected, and the crane lowers it to the ground. The next piece is the support beam between the two operators. Same as before: it is detached by the crew, rigged to the crane, and lifted away from the dam. It’s flown across the site to the staging area and set down. All this equipment will eventually be hauled away and recycled for scrap. It might be obvious, but even though it’s broken, this gate is still attached to the rest of the dam, at the bottom with hinges, and at the top, with the two stems that would raise and lower the leaf when it was working. Before the crew can detach the gate, it will need some additional support. The crane lowers its hook. And the crew wraps two massive chain slings around it. Then the crane cables up to provide support for the gate while it gets detached. It’s not easy doing big projects like this in the downtown core of a major city. The River Authority has had to lease the parking lot next door for a place to put the crane and other equipment. There are strict rules about when they can work to make sure the project doesn’t cause too much disturbance to all the neighbors. And, this is part of the River Walk, which means it's a heavily trafficked pedestrian route. The contractor has to set up barricades during work hours and then take them down at the end of each day. They also have safety spotters who make sure there are no wayward pedestrians or workers within the swing of the crane during heavy lifts. If you’ve worked on a device or turned a wrench, you’ve probably been faced with a stuck bolt before. But what do you do if the bolt is as big around as your arm? Pretty much the same thing you’d do at a smaller scale. Apply some penetrating oil… Beat it with a hammer… Use a cheater bar on the wrench… Bring out a hydraulic press… And then you just decide to cut the whole thing off. This gate’s being scrapped anyway so there’s no use treating it with kid gloves. The crew gets out the oxyacetylene torch to cut the ears off the top. First one. And then the other. Next come the hinge pins that connect the gate at the bottom. A few come out pretty easy. A few take a little extra effort. With a chain hoist pulling, the hydraulic toe jack pushing, and a little percussive persuasion, this crew eventually gets them all out. Just cutting and hammering and pushing and pulling all the connections this gate has to the dam is an entire day’s work. 
These are big, heavy items in awkward positions, so each time they move, disconnect, or lift something out of the work area, they have to do it thoughtfully and carefully to ensure it's done safely. By the end of the day, the gate is finally free, but the crew decides to set it down and wait until tomorrow for the critical operation of lifting it out. The next morning, it’s time for the big lift. The chain slings are re-secured around the gate, and the crane reaches over the trees and river to slowly remove it from the dam. It’s a big moment, so the whole crew gathers around to watch. Safety spotters coordinate with the crane operator to pull the gate free from the dam, then hoist it up and over. Safety personnel are making sure no one wanders into the area, but just in case, a horn sounds when the load is over the sidewalk. Eventually, the gate makes it to the staging area in the parking lot - on dry ground for the first time in 40 years. It did its job admirably, it was a great gate, but it’s easy to see from its condition that it was definitely time for retirement. With the gate out, a boom lift is lowered into the area to help remove some of the remaining pieces. Most of the day is spent cutting and removing pieces of the gate and attachment hardware. At this point, the area will mostly sit idle while the new gate is being fabricated. But there’s more work to do in the meantime. Another part of this project is the nearby pump room. The flows in the San Antonio River often drop to a mere trickle, and this is something the city designed for when these gates were installed back in the 80s. With these gates keeping the water up at a constant level, the River Walk works kind of like a bathtub; it takes a big volume of water to fill up the channel that snakes around downtown. But, if water leaves the River Walk faster than it can be replenished, that level will drop, kind of like trying to fill a bathtub without stopping up the drain. So this dam was designed with a pump to lift water from downstream into the channel above if needed. This is a screw pump, one of the oldest and simplest hydraulic machines, sometimes called an Archimedes Screw. A motor turns a steel cylinder with a screw inside. As the screw rotates, water is lifted upwards until it spills out at the top. In this case, it falls into a flume that flows out to the river above the dam. It’s ingenious in its simplicity, and apparently worked great when it was first installed. But, not long afterward, San Antonio built its landmark flood control tunnel that allows floodwaters to bypass downtown. It’s an incredible project of its own, and it included the means to recirculate water in the San Antonio River from downstream to up. That keeps the river flowing during dry times, maintaining the level in the River Walk downtown, and rendering the old screw pump obsolete. So it never got turned on again and has been sitting here unused for many years. This new project is going to repurpose the area to create a bypass for the two gates. It will add a bit more capacity, but more importantly, it will help create some circulation in the stagnant area downstream of the dam. Still water allows sediment to build up, collects debris, and grows algae and mosquitoes. With the screw pump not running, this area just doesn’t quite see enough water movement, so the bypass will allow it to be easily flushed out when needed. But first, the screw pump has to come out. This is the same story as the gate: oxyacetylene torches and hammers. 
Piece by piece, the pump is cut away and hauled off as scrap. With the pump out, the room gets some modifications. Some concrete is taken out… And new concrete is installed to create a chute for the water. And then it gets its own new gate to control the flow. Luckily this small pump room has an overhead crane, because getting this gate into place was a tight fit. Back outside, crews start working on the retrofits to the dam to get ready for the new gate. Unlike the electric motors used for the old gates, the new ones will use hydraulics. These piers that flank the gates have to be modified to fit the new system. The tops of the piers get some careful demolition to accommodate the hydraulic cylinders. And the hinges from the old gate still need to be removed. This area will also have some concrete modifications so the new gate fits perfectly in the old slot. Nearly a year after the old gate was cut out, the new gate finally arrives on site. It sounds like a long time, but this project was specifically scheduled around the fabrication of these gates. They aren’t just parts you can pick up at the local hardware store. A lot of design, construction, testing, and finishing touches went into each one. And they’re so big, they have to be delivered in two parts. Today’s job is to connect them into a single gate. The halves get a layer of sealant to prevent leaks, and then a whole bunch of bolts to attach them together. And finally, this gate is ready to install. You know I love crane day. And it’s even better when there’s a small crane to assemble the big crane. This 650-ton capacity monster is configured with a luffing jib to reach out over the trees and water. But the first step is to get the gate off the stands. It has to be lifted horizontally from these saw horses, but it will be installed vertically. So the gate is rigged for the first lift, moved to the ground, and then rerigged for the main event. I’m a sucker for heavy lifts so this was a pretty fun thing to see in person. It’s incredible how much work and setup went into a milestone that only took less than an hour to complete. It’s the civil engineering equivalent of a rocket launch. The crane swings the gate up and over the trees and down to the dam. As it gets closer, the movements are slower and more deliberate. Each time the crane moves, the crew waits for the massive gate to stabilize before calling for the next step. They carefully move it into position, and when everything is lined up just right, it sits down on the base plates, ready to be connected. While it’s held by the crane, the crews begin installing the bolts that attach the gate to the concrete. This is allowed by safety regulations, but only under a set of rigid guidelines, so safety is at the top of everyone’s mind. A detailed lift plan, a pre-work safety briefing, and several spotters make sure that there are no wrong moves. These bolts are torqued to the specifications one by one, on both the upstream and downstream side of the gate. And once it’s firmly attached, the crane lowers it to the ground. The next day, the beam across the top of the piers and the hydraulic cylinders are flown into place. These cylinders will lift and lower the gate, working against the immense water pressure pushing on the upstream face. They’ll attach to these beefy hinge points on the side of each gate. The cylinders are attached to a new hydraulic power unit installed in the pump room. 
This unit has the valves, pressure regulators, pump, and oil reservoir to make these gates operate more efficiently and reliably than the old electric motors did. Everything is operated from the City’s tower that overlooks the dam. From here, operators can control all the city’s flood infrastructure, including the dams and gates on the river and the flood bypass tunnels that run below ground. And I have to say, it’s a pretty nice view from the top. And in fact, some of the timelapse clips I’ve shown are from a camera mounted on top of this structure. This is run by the US Geological Survey, and I’ll put a link below where you can go check out the dam in real-time. Once everything is hooked up, it’s time to test this gate out. Unfortunately, you can’t schedule a flood. Since there are just ordinary flows at the moment, the crews have to be careful not to drain the entire River Walk while they do it. The gate gets lowered just a bit to make sure nothing is binding and that the hydraulic system is working. Of course, it’s a big day to see it all working for the first time, so everyone involved in the project is on-site to see it happen. And the test went flawlessly. But it’s not the end of the project. These stop logs were installed in early 2021, and it’s finally time to pull them out nearly four years later. You can see they grew some nice foliage during their service. This process requires a professional diver to rig each one for the crane. It’s just one of the many steps made much more complicated because this structure still has to serve its purpose during the entirety of the project, and more importantly, the River Walk can’t be drained. The stop logs get lifted out of the slots. Then they’re moved directly next door to get ready for the next gate. I didn’t document as much of the second gate, because it was pretty much identical to the first one, although it went a lot faster since the gate was already ready. The area was pumped out, the old gate removed, and the new one lifted into place. And pretty soon this old dam had two new gates, plus a bypass, ready to serve the city for the next several decades. If you visited the River Walk during construction, you wouldn’t have even known it was happening, and that was the entire goal of the project: revitalize a critical part of the city’s flood control infrastructure without causing any negative impacts on one of its crown jewels. And being on site to see it happen in real time was a lot of fun. I have to give a huge thanks to the City of San Antonio, the San Antonio River Authority, the engineer, Freese and Nichols, the general contractor, Guido, and all their subcontractors for inviting me to be a part of this project and document it for you. It was a pretty incredible experience, and I hope it gives you some new appreciation for all the thought, care, and engineering that goes into making our cities run.
[Note that this article is a transcript of the video embedded above.] This is the Wallis Annenberg Wildlife Crossing under construction over the 101 just outside Los Angeles, California. When it’s finished in a few years, it will be the largest wildlife crossing (*of its kind) on the planet. The bridge is 210 feet (64 meters) long and 174 feet (53 meters) wide, roughly the same breadth as the ten-lane superhighway it crosses. Needless to say, a crossing like this isn’t cheap. The project is estimated to cost about $92 million; it’s a major infrastructure project on par with similar investments in highway work. And it’s not the only example. The Federal Highway Administration recently set aside $350 million in federal funding for projects like this. The reasons we’re willing to invest so much into wildlife crossings aren’t as obvious as you might think, and there are some really interesting technical challenges when you’re designing infrastructure for animals. I’m Grady, and this is Practical Engineering. Roads fundamentally change the environments they cross through. And while on its face, it might seem that they’re always a disaster for wildlife, there are actually some winners amongst the losers. For vultures, crows, coyotes, raccoons, insects, and other scavengers, roads provide a buffet. And they sometimes make for pretty good housing too, at least if you’re a swallow or a bat. In fact, cliff swallows are now so famous for nesting on the underside of highway overpasses that they’re often referred to as bridge swallows. The sides of highways have clear zones kept free from trees and similar obstacles for vehicle safety, but the lack of shade allows tender greens to thrive, creating a salad bar for species from monarch butterfly caterpillars to white-tailed deer. Of course, especially in the case of deer, this can lure animals into eating dinner in a dangerous place. And the truth is that roads mostly range from a mild inconvenience to totally catastrophic for wildlife. In the battle between the two, wildlife usually loses, and in more ways than just getting squished. The ecological impacts of roads extend beyond the guardrails. Habitat loss and fragmentation, noise pollution, runoff, and of course, injecting humans into otherwise wild places are all elements of the environmental challenges caused by roads. It’s actually a pretty complicated subject, and there are even road ecologists whose entire careers are dedicated to the problem. And it’s not just wildlife that’s affected. According to the Federal Highway Administration, there are over 1,000,000 wildlife-vehicle collisions annually on US roadways. That results in tens of thousands of injuries, about 200 human fatalities, and over 8 billion dollars of damages per year. Even if you haven’t personally been involved in a collision like this, there’s a good chance that you know somebody who has. Along with the astronomical numbers reported by the FHWA, it’s likely that a huge portion of wildlife collisions go unreported. There are lots of cases that just don’t get counted, like if an animal is too small to notice, or if it survives the impact and escapes, or is collected by somebody practicing the dubious art of roadkill cuisine (yes, that’s a real thing and there are multiple cookbooks out there for it). There’s a wide range of consequences from animal collisions, from minor vehicle damage to human fatalities.
When you average them out, researchers estimate that in 2021, the average cost of hitting a deer was $9,100. Of course, the bigger the animal, the bigger the economic loss. For a moose, that number is over $40,000 per collision. Regardless of how you might feel about environmental issues and wildlife, the economic impacts alone can justify the sometimes enormous costs required to let them safely cross our roadways. Luckily for the animal and human populations alike, there’s been increasing interest in reducing the negative impacts roads have on wildlife over the past few decades. I’m no stranger to infrastructure built for animals. It is fairly unusual for fish to get hit by cars, but they have their own manmade barriers to overcome, and I released a series of videos on fish passage facilities for dams you can check out after this if you want to learn more. Like aquatic species, there is a lot of engineering involved in getting terrestrial animals across a barrier. But fortunately, a lot of that research and guidance has been summarized in a detailed manual. I may not be a road ecologist, but I am an engineer, and I love a good Federal Highway Administration handbook! One of the most important decisions about building a wildlife crossing is where to put one. You might imagine that the busiest roads are where most of the collisions occur. And it’s true up to a point. As the number of cars on a road increases, the percentage of wildlife crossing attempts that end in a safe critter on the other side drops, and the fraction that are killed grows. But, if we keep increasing the daily traffic numbers, something unexpected happens: the number of “killed” animals declines! Eagle-eyed viewers may realize that so far, this graph is incomplete; these percentages don’t add up to 100%. That’s because there’s a third category: “repelled” animals. As highway traffic increases, you reach a point where the vehicles form a kind of moving fence, and all but the most brazen bucks will turn away. Road ecologists sometimes struggle to drum up support for wildlife crossings at high-traffic freeways (like the Annenberg crossing in LA) because of this effect. For some people, if they don’t see actual road kills on the shoulder, they struggle to accept the greater impact on wildlife populations. Habitat fragmentation caused by roads can be difficult for any species, but it’s especially hard-hitting for migratory species who HAVE to cross in order to survive and reproduce. For example, following the opening of I-84 in Idaho, biologists recorded the starvation of hundreds of mule deer mired in the snow, unable to cross to food sources. And it’s not quite as simple as the graph makes it seem. A study by Sandra Jacobsen breaks down animals into four categories of crossing style. Some animals, like frogs, are non-responders who cross roads as if they aren’t there at all. Their wild instincts compel these animals to cross without regard for their own safety, and they’re often too small for most motorists to notice. Next, you have the pausers, like turtles. These creatures, when spooked on the road or elsewhere, instinctively hunker down and stay put. While the shell of a box turtle might be impenetrable to a curious coyote, it is, sadly, no match for a box truck. Then you’ve got avoiders. This group often includes the most intelligent members of the local fauna. Grizzly bears, cougars, and other carnivores often fall into this category. 
For them, even low-traffic rural backroads can cause significant issues with habitat fragmentation, leading to poor genetic diversity. The small gene pool of a number of southern California cougars is one of the major drivers of the construction of the Annenberg bridge. Deer fall into the last category, speeders. As the name implies, these are fast, alert animals who, given the chance, will burst across a road to get to the other side. But even these categories have their exceptions. The poster-cat of the US-101 project, a cougar called P-22, famously crossed the 10-lane highway and took up residence in the shadow of the Hollywood sign. There just is no one-size-fits-all approach for getting animals across roads. Engineers and ecologists use a wide variety of mapping, including aerial photography, land cover, topography, habitat, plus ecological field data and even roadkill statistics to choose the most appropriate locations for new wildlife crossings. And in many cases, what works for one species may be completely ineffective for another. So most designs are made for a so-called “focal species,” with the hope that it works well for others too. But before you have a crossing, you have to get the animals to it. In most cases, that means fences, and even that is complicated. Do the focal species have a habit of digging under fences like badgers or bears? Well, then you’ll want to bury a few feet of fence to maintain its integrity. And where do they start and stop? Ideally, fences will terminate in areas that are intentionally hard to cross so animals don’t end up in a concentrated path across roadways. Sometimes boulders will be placed at the end of a wildlife fence to make it less likely that animals will choose to wander on the wrong side. But, inevitably, it happens. You don’t want to trap animals on the highway side of a fence, so many feature ramps or “jumpouts” that act almost like one-way valves for animals. There are even hinged doors for moderate-sized animals that allow wayward creatures to escape through fences. Once you’ve got a site selected, the next big choice is over or under. It turns out that going under a road is often the easiest option. In fact, in many cases, existing bridges and viaducts can naturally create opportunities for wildlife to get across our roadways. Sometimes it’s as simple as building fencing to funnel animals into existing underpasses. Another option for small animals is to use culverts as crossings. The engineering and materials for culverts are pretty well established since they’re used so much for getting drainage across roadways, so it’s not a big leap to do it with animals too. But it can be tricky getting them to use it. Since amphibians are also pretty lousy at walking long distances, it’s common to have many small tunnels installed near one another with special fencing to maximize survival. In some cases, they’re combined with buried collection buckets. During peak migration periods, the buckets are checked, and collected amphibians are manually transported across the road! Larger animals won’t fit in a culvert (or a bucket), but there are some special considerations to getting them to travel beneath a highway bridge. Many animals are hesitant about dark areas during the daytime, so it's important to get as much natural light in as possible. Lighting also affects the vegetation that grows under a bridge. More light means more natural-feeling areas, which means more animals will be willing to cross under. 
And of course, keeping people out is important too. Disturbance from the public can really affect animals' willingness to incorporate a new, unusual route into their routine. Many crossings are designed with cover objects like logs, rocks, and brush that can help encourage a wider variety of wildlife to take advantage of the intended path. But, for some species, underpasses just don’t work at all. You can’t FORCE a moose to do anything really, especially something like walking through a tunnel it doesn’t trust. In certain instances, the only effective way to allow safe passage across a road is over the top. For some particular focal species, an overpass might not need to be that grand. Canopy bridges just connect trees on either side of a road so primates and other tree-living creatures can get across. In Longview, Washington, there’s even a series of tiny bridges for squirrels, like the famous “Nutty Narrows” bridge. Of course, the most impressive, usually the most effective, and often the most expensive wildlife crossings are designed as overpass bridges. Examples include the famous ecoducts of the Netherlands, overpasses of the Canadian Rockies in Banff National Park, and American structures like the Wallis Annenberg Wildlife Crossing. I actually have one of these nearby. Opened in 2020, the Robert LB Tobin Land Bridge crosses the six-lane Wurzbach Parkway in San Antonio, Texas. These are full-on bridges designed specifically for the use of animals. Structures like these have all the same design issues as regular bridges for humans, plus their own engineering challenges as well. They have to hold up their own weight with a significant margin of safety, be designed to weather the elements for decades, and be inspected just like other bridges. They ALSO have to be engineered to be covered in thick layers of soil and vegetation (sometimes including trees), and be sized appropriately to accommodate focal species that might travel in huge herds or be wary of tight spaces. They have to be built to provide appropriate lines of sight for nervous crossers and often have walls that shield wildlife from the noise and light of the traffic below. One fun upside is that, at least in mountainous areas, the approaches can be a lot steeper than you might use for a vehicular bridge. An elk is pretty well suited for off-roading after all. As for the design of the bridges themselves, they’re built a lot like highway bridges, usually beam bridges or arches, just with dirt instead of concrete for the deck. While the distance across a highway is long for a wandering moose, it’s not generally enough to require a structure of more heroic engineering like cable-stayed or suspension bridges. Unlike vehicular bridges, the approaches often flare out when viewed from above, making it easier for animals to locate the bridge and for better sight lines across it. This, plus the fact that they are usually covered in native vegetation, means that wildlife overpasses are among the most striking bridges you can see. It also means that from the perspective of the wildlife crossing them, these bridges can blend into the scenery. Ideally, a herd of pronghorn wouldn’t even realize they’re on a bridge at all. It’s hard to think of any humanmade structures that have transformed the landscape more than modern roadways. They have an enormous impact on so many aspects of our lives, and it's easy to forget the impact they have on everything else that we share the landscape with. 
Sometimes when it comes to mitigating the negative impacts of roads on wildlife, the best thing is to just be more careful about where or IF we build a road at all. But for many of the roads we already have and the ones we might build in the future, it just makes sense - for safety, the economic benefits, and just being good stewards of the earth - to make sure that our engineering lets animals get around as easily as we can.
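To put a rough number behind that “economic benefits” argument, here’s a minimal break-even sketch. The deer collision cost is the 2021 figure cited earlier; the project cost, collisions avoided, and service life are hypothetical placeholders, not data from the Annenberg project or any other real crossing.

```python
# Back-of-the-envelope break-even check for a wildlife crossing.
# All values except the collision cost are hypothetical placeholders.
avg_cost_per_deer_collision = 9_100     # USD, 2021 estimate cited above
collisions_avoided_per_year = 80        # hypothetical crossing-plus-fencing effect
project_cost = 10_000_000               # USD, hypothetical capital cost
service_life_years = 50                 # hypothetical design life

annual_benefit = collisions_avoided_per_year * avg_cost_per_deer_collision
payback_years = project_cost / annual_benefit
lifetime_benefit = annual_benefit * service_life_years

print(f"Annual benefit: ${annual_benefit:,.0f}")      # $728,000
print(f"Simple payback: {payback_years:.0f} years")   # ~14 years
print(f"Lifetime benefit: ${lifetime_benefit:,.0f}")  # $36,400,000
```

A real benefit-cost analysis would discount future benefits, add maintenance, and value injuries and fatalities separately, but even a crude version like this shows how the dollars can pencil out.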
[Note that this article is a transcript of the video embedded above.] A lot of engineering focuses on structural members. How wide is this beam? How tall is this column? But some of the most important engineering decisions are in how to connect those members together. Take a column, for example. You can’t just set it directly on a foundation, at least not if you want it to stay up. It needs a way to physically attach to the foundation. This may seem self-evident, maybe even completely obvious to most. But in that humble connection that’s so ubiquitous you rarely even notice it, there is so much complexity. Baseplates are the structural shoreline of the built environment: where superstructure meets substructure. And even understanding just a little bit of the engineering behind them can tell you a lot of interesting things about the structures you see in your everyday life. I’m Grady, and this is Practical Engineering. Let me start us out with a little demonstration. If you’re a regular viewer, you know how much you can learn from our old friends: some concrete and a benchtop hydraulic press. I cast two cylinders of concrete about a week ago, and now it’s time to break them for science. These were cast from the exact same batch of concrete at the exact same time. For this first one, I’m pushing with a fairly narrow tool. I slowly ramp up the force until eventually… it breaks. I had a load cell below the cylinder, so we can see the force required to break this concrete. This scale isn’t calibrated, so let’s say it broke at 1400 arbitrary Practical Engineering units of force. Practicanewtons? KiloGradys? What would you call them? Now let’s do the same thing with a wider tool. At that same loading, this concrete cylinder is holding steady. In fact, it didn’t break until 3100 units. Here’s a trick question. Was the second cylinder stronger than the first one? Hopefully it’s obvious that the answer is no. Most materials don’t care about force. I mean, in the strictest sense, most materials don’t care about anything. But what I mean is that the performance of a material against a loading condition usually depends not on the total force, but on how that force is distributed over an area. It’s pressure; force divided by area. Increase the area, lower the pressure. And pressure is what breaks stuff. So that’s what a lot of baseplates do. They transfer the vertical forces of a column to the foundation over a larger area, reducing the pressure to a level that the concrete can withstand. And that’s the first engineering decision when designing a baseplate. How big does it need to be? If you know the force in the column and the allowable pressure on the foundation, you can just divide them to get the minimum area of the plate. That’s the easy part. The complication is that steel isn’t infinitely stiff. If I put this column on a sheet of paper, I think it’s clear that there’s no real load distribution happening here. The outside edges of the paper aren’t applying any of the column’s force into the table; I can just lift them. But this can be true for steel too. I filled up an acrylic box with layers of sand to make this clearer. If I use a thin baseplate, the forces from my column don’t distribute evenly into the foundation. You can see that the baseplate flexes and the sand directly below the column displaces a lot more. I can try this with a thicker, more rigid baseplate, and the results are a lot different. Much more even distribution of pressure.
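Here’s what that first sizing decision looks like as a back-of-the-envelope calculation. The column load and allowable bearing pressure are made-up round numbers for illustration, not values from any design code; in practice the allowable bearing stress comes from the governing concrete and steel standards.

```python
import math

# Minimum baseplate area: spread the column force over enough area
# that the bearing pressure on the concrete stays acceptable.
column_load = 500e3        # N  (500 kN axial load, hypothetical)
allowable_bearing = 10e6   # Pa (10 MPa allowable pressure, hypothetical)

min_area = column_load / allowable_bearing   # m^2
side = math.sqrt(min_area)                   # square plate, m

print(f"Minimum area: {min_area * 1e4:.0f} cm^2")           # 500 cm^2
print(f"Square plate: about {side * 100:.0f} cm per side")  # ~22 cm
```

In practice the plate also has to fit the column profile and the anchor bolts with their edge distances, and that bearing check says nothing about thickness, which is exactly where the sand demonstration picks up.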
So the second engineering decision when designing a baseplate is the stiffness of the plate, usually determined by the thickness of the steel, based on the loads you expect and how far the plate extends beyond the edges of the column. And in heavy-duty applications like steel bridge supports, vertical stiffeners can be included to make the connection even more rigid. So far, though, the baseplate isn’t really much of a connection. That’s the thing about compressive loads: gravity holds them together automatically. There are no bolts in the Great Pyramid of Giza. The blocks just sit on top of each other. And that could be true for some columns too. The main load they see is axial, along their length, pressing the plate to the ground. But we know there are other loading conditions too. A perfect example is a sign. Billboards and highway signs are essentially gigantic wind sails. They don’t actually weigh all that much, so the compressive force on their base isn’t a lot, but the horizontal forces from the wind can be significantly higher than that. Those horizontal forces can increase the compression force on one side of the base plate, so you have to account for that in the design. But they also can result in shear and tension forces between the baseplate and foundation, so you’ve got to have something in place to resist those forces too. That’s where anchors come in. There are a lot of ways to attach stuff to concrete. There are anchors that epoxy into holes, screw into place, or use wedges to expand into the hole. And of course, if you’re extra careful and precise, you can even embed anchor rods or bolts into the concrete while it’s still wet. There’s a huge variety of styles and materials that offer different advantages depending on your needs. Here’s just one manufacturer’s selection guide for the anchors and epoxies they provide. But like third year engineering students, all of those anchors can fail if they’re overloaded. And they can fail in a lot of different ways under tension or shear forces. The anchor rod itself can fracture or deform. It can lose its bond with the concrete and pull out. It can break out the surrounding concrete. Or if it’s too close to the edge, it can blow out the side. Calculating the strength of the anchor bolt and concrete connection against each of these failure modes is a lot more complicated than just dividing a force by a pressure to determine the baseplate area. So most engineers use software that can do the calculations automatically. But, there’s another challenge about baseplates I haven’t mentioned yet, and it has to do with tolerances. Concrete foundations can be pretty precise. As long as you set the forms accurately and make them strong enough to avoid deflection while the concrete is being placed, you can feel confident in the dimensions of the structure that comes out of them. But there’s usually one surface that isn’t formed: the top. Instead, we use screeds and trowels and floats to put a nice finish on the top surface of a concrete slab or pier. But it’s rarely perfect enough to put a column directly on top. That’s not to say it can’t be done. I’ve seen concrete finishing crews do amazing work. But it’s usually not worth the effort to get a concrete surface perfectly level at the exact elevation needed for every column, especially when you have the time pressure of concrete setting up. And those tolerances matter. Just one degree off of level will put a 16-foot or 5-meter column out of plumb by more than 3 inches or 80 millimeters. 
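That out-of-plumb number is just trigonometry, and it’s worth seeing how fast a small tilt at the base grows with height. A quick check, using the one-degree error and the column heights from above:

```python
import math

# Horizontal offset at the top of a column for a given tilt at the base.
def out_of_plumb(height, tilt_degrees):
    return height * math.tan(math.radians(tilt_degrees))

print(f"{out_of_plumb(5.0, 1.0) * 1000:.0f} mm")     # ~87 mm for a 5 m column
print(f"{out_of_plumb(16.0, 1.0) * 12:.1f} inches")  # ~3.4 in for a 16 ft column
```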
Unless you’re in certain parts of Tuscany, that’s not gonna work. It’s more than enough to misalign some bolt holes. And that only magnifies for taller columns like sign poles. So, we usually need some adjustability between the plate and the concrete. Sometimes that means shimming the baseplate to get it perfectly level. And the other primary option is to use leveling nuts underneath the plate. I welded up a custom-branded column and baseplate that was laser-cut by my friends at Send-Cut-Send to show you how this works. These parts turned out so nice. By adjusting these nuts up or down, I can get the column to point in the exact direction required. And I can get it to the exact right elevation too. But maybe you see the problem here. All the work we did to make sure the baseplate distributes the vertical load evenly across its area is lost. Now the vertical loads are just being transferred through some shims or through the bolts directly into the anchors. So, in a lot of cases, we add grout between the plate and the concrete to bridge the gap. Grout is basically concrete without the large aggregate, mixed to a low viscosity so it flows more easily into gaps. And it often includes additives to prevent it from shrinking as it cures, making sure it doesn’t pull away from the surfaces above and below. When it hardens, the grout can transfer and distribute the loads into the foundation. So if you pay attention to baseplates you see out in the built environment, you’ll notice it’s pretty common that they sit on a little pedestal of grout and not directly on the concrete below. But even this comes with a few problems. First is load transfer. Even with the grout, some of the vertical loads are still going into the anchor bolts that might not have been designed for compression. So now we’ve added a couple more potential failure modes to the laundry list: punching through the bottom of a slab and buckling of the rod itself. Sometimes contractors will use plastic leveling nuts that can hold the column during construction, but will yield after the column’s loaded so the grout supports all the weight. Second is fatigue. Especially for outdoor structures that see wind and vibrations, the grout under the baseplate might not hold up to repeated cycles of loading. Third is moisture. Grout can trap water, leading to problems with corrosion, especially for hollow columns like sign poles where condensation needs a way out. And the grout can hide that corrosion, making it difficult to inspect the structure. And fourth, adding grout below a baseplate is just an extra step. It’s kind of fiddly work to do it right, and it costs time and resources that might otherwise be spent somewhere else. In fact, there are a lot of cases where it’s an extra step worth skipping. You can design anchor bolts strong enough to withstand all the forces a column will apply, including the compressive forces downward. And you can design a baseplate stiff enough that those forces don’t have to be distributed evenly across the entire area. And if you do, you have a standoff baseplate. It just floats above the concrete with only the anchors in between. It looks like a counterintuitive design. We think of a baseplate as kind of a shoe, so it should be sitting on the ground. And a lot of them are designed that way. But for other structures, a baseplate is really just a way to connect a foundation to a column through an anchor. So if you pay attention, you’ll see these standoff baseplates everywhere.
A lot of state highway departments have moved away from using grout to make signs and light poles easier to inspect. And they often install wire mesh to keep animals out of hollow masts. Clearly there’s a lot more to baseplates than meets the eye, and that means there are also a few myths going around grout there. A common misconception is that standoff baseplates are meant to break away in the event of a collision. And I totally understand why. If an errant vehicle hits a signpost, a relatively minor deviation from the road can turn into a deadly crash. Smaller signs installed near roadways often do use breakaway hardware or features. You’ll often see holes drilled in wooden posts, bolts with narrow necks meant to snap easily, or slip bases like this one to make sure a sign gives way. But for larger structures like overhead signs and light poles, that’s generally not the case. Having one of these break away and fall across a highway could create an even bigger danger than having it stay upright. So, even though they might look similar, standoff baseplates are distinct from sign mounts designed to break loose in a collision. Instead, larger structures installed in the clear zones of highways are protected from crashes using a guardrail, barrier, or cushion. Baseplates are like bass parts in music: it’s easy to overlook them at first, but once you notice them, you can’t stop paying attention to how important a role they play. And just like bass lines, they might seem simple at first, but the deeper you dig, the more you realize how complex they really are.
[Note that this article is a transcript of the video embedded above.] In June of 2000, the power shut off across much of the San Francisco Bay area. There simply wasn’t enough electricity to meet demands, so more than a million customers were disconnected in California's largest load shed event since World War II. It was just one of the many rolling blackouts that hit the state in the early 2000s. Known as the Western Energy Crisis, the shortages resulted in blackouts, soaring electricity prices, and ultimately around 40 billion dollars in economic losses. But this time, the major cause of the issues had nothing to do with engineering. There were some outages and a lack of capacity from hydroelectric plants due to drought, but the primary cause of the disaster was economic. Power brokers (mainly Enron) were manipulating the newly de-regulated market for bulk electricity, forcing prices to skyrocket. Utilities were having to buy electricity at crazy prices, but there was a cap on how much they could charge their customers for the power. One utility, PG&E, lost so much money, it had to file for bankruptcy. And Southern California Edison almost met the same fate. Most of us pay an electric bill every month. It’s usually full of cryptic line items that have no meaning to us. The grid is not only mechanically and electrically complicated; it’s financially complicated, too. We don’t really participate in all that complexity - we just pay our bill at the end of every month. But it does affect us in big ways, so I think it’s important at least to understand the basics, especially because, if you’re like me, it’s really interesting stuff. I’m an engineer, I’m not an economist or finance expert. But, at least in the US, if you really want to understand how the power grid works, you can’t just focus on the volts and watts. You have to look at the dollars too. I’m Grady, and this is Practical Engineering. Electricity is not like any normal commodity we buy and sell. You can’t really go to the store and pick up a case of kilowatt-hours. It can’t really be stored or stockpiled on an industrial scale, which means it has to be created at essentially the exact instant it's needed. And the demand is fairly inelastic. We want our lights, stoves, air conditioners, and devices to turn on no matter the time of day. That requires the supply side to handle incredible volatility, ramping up or down to meet demands in real-time. And the whole business is incredibly capital-intensive: you need very expensive infrastructure for pretty much every step of the process. The only reason it can work is that we all share that infrastructure, spreading out the costs. Call me a nerd, but I think all of this creates some fascinating challenges, both on the technical side for engineers and the organizational side for the policymakers, regulators, and all the companies that participate in the electric power industry. It wasn’t that long ago that the electric utilities did it all. As the pros say, they were “vertically integrated.” Each utility owned and controlled the three major pieces of the grid within their service areas: generation (or power plants), transmission lines (which carry electricity at high voltage across long distances), and a distribution system (which delivers electricity to most customers at lower voltages). That meant they had a monopoly. Customers couldn’t choose where their power came from or how they got it. 
And that meant that electric utilities had to be carefully regulated to make sure that, without any competition, they were still offering customers a reasonable price for power. Over time, utilities realized the value of interconnecting so they could help each other in times of need. Electricity is a true commodity, even if it has some unusual properties. For the job it does, it mostly doesn’t really matter who made it - a kilowatt is a kilowatt, no matter where it came from. If one utility’s power plant went down or bad weather hit, they could work out a deal to share power with a neighbor and keep demand satisfied. As the practice grew more common, power pools developed where multiple utilities would interconnect and agree to share power. Every system is different; subject to different risks, different weather conditions, and outages at different times. It just made economic sense to spread out that variability and risk. Eventually, huge parts of North America were interconnected by transmission lines, creating the “grids” we know today. The major interconnections in the US and Canada are the Western, Eastern, Quebec, and Texas. Historically, the wholesale price one utility would pay another for power was regulated just like the rates utilities charged their retail customers. It was usually based on the actual cost of generating that power, so the big utilities couldn’t price gouge smaller companies. But a lot of that changed in the 1990s when the federal government opened the door for deregulation. The idea is simple on the surface: if power can move fairly freely on the grid, there’s no need for major utilities to be the only ones producing it, and there’s no need to regulate the prices for which it’s bought and sold. Let’s let market forces drive the decisions. It will increase competition and efficiency, driving down prices, and the investment risks will fall to the investors, not the customers. Quite a few states took the opportunity to deregulate the production of power, and quite a few didn’t. In fact, right now, it’s roughly half and half, but there’s a lot of variety between states when it comes to who produces power and how it’s bought and sold between utilities and other companies, and even big differences within individual states. In truth, the process of deregulation has been anything but simple, and actually created a whole new set of interesting challenges. Companies trying to game the system, like what happened in California, is only part of it. In fact, power professionals often say that certain states aren’t deregulated; they’re just differently regulated. But how does all this really work in practice? Let’s set up an analogy. Say I live on one side of a big lake. The water isn’t mine, but there is a water company on the other side. If I want to buy some water, they could load it on a truck and haul it to me. Or they could just put it in the lake, and I could take out the same amount. It’s probably not the same water, but it doesn’t really matter. In this analogy, water is water. Let’s scale it up. Now, hundreds of people need water, and hundreds of companies are selling it. Each person can hire any company they want to provide their water. The distance between buyer and seller doesn’t really matter. Every seller puts the amount of water they’ve sold in the lake, and each buyer takes as much out as they’ve bought. As long as everyone keeps track of how much they buy and sell, the lake stays full, and basic laws of physics will sort out how the actual water flows. 
In one way, you know exactly where you get your water: the company you paid to provide it. But in another way, you have no idea. All the water from all the companies is comingled in the lake. This is what happens on the grid. In a way, the power coming to your house comes from the power plant or plants that your utility paid to create it. But the electrons themselves probably didn’t. Just like the water in the lake, electricity flows according to the laws of physics from high potential to low, sloshing and flowing according to what everyone on the system is doing. This is a lot like how a deregulated grid works. Utilities that supply electricity to their customers don’t generate the power themselves. They enter contracts to get it from wholesale power providers, separate companies who only generate electricity. But you can see a challenge here. If I’m on my big lake wanting some water, I may not want to coordinate with every water company to see who’s got the best price, especially if my need for water varies day by day or even second by second, and honestly, I’m not even sure exactly how thirsty I’m going to be. And if I’m a water company on the lake, it’s a lot of overhead work to deal with all these customers and their different needs. It makes a lot more sense if there’s a marketplace. So it is on the grid. Like I mentioned, this varies quite a bit depending on where you are, so I’ll try to be as general as possible. Outside of those direct contracts between one buyer and one seller, most wholesale electricity in deregulated areas is bought and sold on the day-ahead market. Wholesale purchasers like utilities submit bids with estimates of how much electricity they’ll need for each hour of the next day. And generators submit their offers to sell a specific amount of electricity for a given price that’s based on their production costs, availability of fuel, and operational constraints. The facilitator of each wholesale market takes all the bids for every hour and matches the supply and demand to get the right amount of energy on the grid at the right times for the lowest cost. Here’s a basic example of a single hour of the auction: Let’s say four generators submit bids to provide electricity during this hour: A nuclear plant bids 1200 MW for a price of $20/MWh. A natural gas peaker plant bids 400 MW for $100/MWh. A coal plant bids 500 MW for $30/MWh. And a wind farm bids 400 MW for $0. Wind and solar can submit very low bids because they have no fuel costs. There’s pretty much no way for them to lose money if they’re connected to the grid, especially because many get outside incentives for every megawatt-hour they generate. They even submit negative bids in some cases, meaning they’re willing to pay money to stay connected to the grid. In any case, electricity generators in our hypothetical hour have offered 2,500 megawatts of power to the market. Let’s say purchasers submitted 2,000 megawatts of demand for this hour. We arrange the generation bids in order of least cost to satisfy demand. This concept is known as economic dispatch. We buy power at the lowest cost possible. We’re going to dispatch the wind farm and nuclear plant, dispatch the coal plant at 80% of the capacity they bid, and we don’t need the peaker plant at all. The clearing price is the cost of the last unit of supply to be dispatched. In this case, it’s $30 per megawatt-hour. 
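Here’s a minimal sketch of that merit-order logic in code, using the made-up bids from the example above. Real market-clearing software also has to handle ramp rates, minimum run times, transmission limits, and a lot more, so treat this as the cartoon version.

```python
# Merit-order dispatch sketch: sort offers by price, fill demand from cheapest
# to most expensive, and take the price of the last dispatched unit as the
# clearing price that everyone pays and receives.
offers = [
    ("wind farm",   400, 0.0),    # (name, MW offered, offer price in $/MWh)
    ("nuclear",    1200, 20.0),
    ("coal",        500, 30.0),
    ("gas peaker",  400, 100.0),
]
demand_mw = 2000

remaining = demand_mw
clearing_price = 0.0
for name, mw, price in sorted(offers, key=lambda offer: offer[2]):
    if remaining <= 0:
        break  # demand is met; more expensive offers aren't needed
    dispatched = min(mw, remaining)
    clearing_price = price  # the marginal (last dispatched) unit sets the price
    remaining -= dispatched
    print(f"{name}: dispatch {dispatched} MW of {mw} MW offered")

print(f"Clearing price: ${clearing_price:.0f}/MWh")
# -> wind 400, nuclear 1200, coal 400 (80% of its offer), peaker unused; $30/MWh
```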
Every producer gets paid that price for the power they put on the grid for that hour, even if they bid lower, and every buyer pays that price for wholesale electricity. This is why wind and solar are incentivized to bid 0 dollars. They essentially guarantee that they’ll make the cut. It seems like a simple process in our hypothetical hour, but in reality there’s a lot more to it. For one, many types of power plants can’t just be toggled on and off with the flip of a switch. They need significant lead time to start up and shut down. They have minimum and maximum output levels. And their costs can vary a lot depending on how long they run. So the market has to take those factors into consideration. Also, we can’t perfectly predict the future, even for the next day. There are always going to be differences from the day-ahead forecasts. Demand varies, equipment has problems, and other unforeseen events like sabotage or solar storms happen all the time. So another market runs in real-time, sometimes with auctions every 5 minutes, to make up those differences and keep supply in balance with demand. For example, if a wind farm overproduces what they bid into the day-ahead market, they can sell the extra on the real-time market. And if they underproduce, they may need to buy power in the real-time market to make up for the shortfall. And if things really get tight with not enough reserves, the real-time markets usually include a way to boost prices upward, even beyond what the clearing price would be, to make sure they’re more closely reflecting the actual value of electricity. That includes the cost to society if people lose power, or put another way, the cost they would be willing to pay to avoid a disruption in electrical service. This concept is called the value of lost load, and it’s something that the generators usually aren’t taking into account in their bids. But that’s not all the markets. Many areas have a capacity market intended to make sure there are enough generators available to meet demands over the long term. These auctions happen only once a year or so, and generators bid to create new capacity within three years. All the generators that win in the auction are rewarded for adding capacity to the grid, no matter how much of that capacity actually gets used in the future. This doesn’t happen everywhere though. Texas doesn’t use a capacity market and instead relies on prices in the day-ahead and real-time markets to encourage generating companies to make long-term investments in capacity. Many areas also have markets for so-called ancillary services, basically services needed to keep the grid stable and reliable. There are auctions for regulation, which accounts for very short-term fluctuations in supply and demand to keep the frequency stable. There are also auctions for reserves that can keep plants ready to get on the grid quickly if another resource trips offline. Other services to keep the grid stable are often contracted directly instead of using auctions. Reliability-must-run contracts pay for power plants that are on the verge of retirement to stay in service until the capacity is replaced. Inertia services pay to keep a certain amount of rotating mass connected to the system. I have a video on that topic if you want to learn more. Black start contracts pay for some generators to have the ability to go from a total shutdown to operational without assistance from the grid. I also have a video on that topic.
And reactive power contracts help maintain the stability of the voltage on the grid. And, I have a video on that one too. A potentially surprising thing about many of these markets is that it doesn’t just have to be generation resources bidding into them. The overall goal is just to get the supply to meet demand, and there are two ways to do that. You can increase the supply or decrease the demand. I said earlier that electricity demand is fairly inelastic, but there are a lot of situations where customers can reduce demand, especially if they’re compensated for doing it. Large industrial power users like refineries can shift schedules around or even turn on their own generators if resources on the grid are getting scarce. This is how you get wacky news stories about cryptocurrency miners making more money participating in electricity markets than in Bitcoin. There are even companies that will gather up a bunch of smaller power users who have some flexibility in their demands, package them up, and sell that demand reduction as a service in the wholesale electricity market. And some utilities coordinate similar demand response programs with their customers, offering credits on your bill if you have a smart thermostat. Deregulation of wholesale electricity markets just opens up this world of possibilities in how we manage the grid. But there is one big way my lake analogy from earlier breaks down. Because that lake symbolizes the transmission and distribution lines that carry power between the buyers and sellers. And in reality, they’re not really like a lake, but more like a series of interconnected canals. And they didn’t just appear. Someone has to build them and maintain them, often at great cost, so those costs need to be covered by the rates we pay for electricity on top of the generation. In this case, there’s really no way to deregulate those costs. It doesn’t make sense to build parallel, competing networks of transmission and distribution lines. It would cost too much, and we’d just have too many wires across the landscape. So regulators oversee the rates that transmission and distribution companies charge utilities to use their wires to move power between users and generators. And of course, there’s a whole host of complex financial systems in place to make this happen. Wholesale purchasers not only have to buy power they need and the power that will be lost along the way, but also reserve capacity on the transmission system for that power to travel, and pay the transmission and distribution system operators for the privilege. Confusingly, the flow of power isn’t really controlled on a line-by-line basis or sometimes even on a system-by-system basis. Power flows where it flows once it’s released on the grid, and there’s no simple way to keep track of who made it or who bought it at individual points on the network. Transmission reservations and tariffs are the law of the land, but the actual electrical power follows the laws of physics. So unlike at your house where you pay one-to-one for the actual power that flows through your meter, payments to transmission operators aren’t always a perfect reflection of how each buyer’s power moves through their system. Still, it’s the best mechanism we have to ensure electricity moves reliably across the grid and that the owners of the transmission assets are fairly compensated. The other thing is that those canals don’t have infinite capacity. They can only move so much water, just like the transmission system can only move so much power. 
So in managing the wholesale electricity market, you don’t just have to consider which source of power is the next cheapest, but also whether you can actually get that power to where it needs to go. Grid operators have to account for congestion, like rush hour for electrons. They usually do this by allowing prices to vary from place to place, an idea called Locational Marginal Pricing. You can see on this map of Texas how significantly prices can vary across the state, reflecting a difference in where the demand is versus where the generators are and the congestion on the transmission system that results. And hopefully at this point you’re seeing how complicated all this really is. Grid operators have to take into consideration all these details - power flows, weather, limitations of every kind of generator, second-by-second changes in the system - in order to match supply with demand at the lowest cost possible. And it gets even more complicated when you add distributed generation sources, like home solar installations, that put energy on the grid from the other side of the meter. And this is only on the wholesale side of the grid. Even though most of those dollars moving around came out of our pockets, we end-users of the electricity really don’t participate in this segment of the grid. For many of us, the company we pay for electricity (the retail provider) didn’t generate that electricity, and in many cases, doesn’t own the infrastructure that it traveled along to reach our house or place of work. And for around a quarter of the US, the retail market is deregulated to the point where you can choose which company you buy your power from. So what do they actually do? In essence, retail providers just buy power on the wholesale market and sell it to you. They’re middlemen, the car dealerships of electricity. They navigate all that complexity we just discussed so you don’t have to. Retail providers all sell essentially the same thing, but they can differentiate themselves by offering different kinds of rates that suit their customers better. One provider in Texas, Griddy Energy, famously offered their customers the real-time wholesale price, exposing them to the incredible volatility of the market. Unsurprisingly, Griddy filed for bankruptcy after the winter storm in Texas when their customers couldn’t pay the exorbitant bills. The other thing retail providers can do is connect your dollars to specific sources of generation like renewables. Instead of buying power in the auction, where you have no control over the sources, they contract directly with wind, solar, and other generators to purchase it on your behalf. So next time you get your power bill, take a look at those line items. Maybe there’s a base rate set by your provider that covers all the various costs of operating the grid from generation to transmission to distribution. Or maybe they’re broken out according to all the various costs that it actually takes to run the bulk power system. Do you pay a separate rate for the distribution service? Does your bill have an adjustment for the variability in the wholesale market? Is there a charge for the Public Utility Commission or whatever agency oversees this whole financial web of complexity? Every bill looks a little different, but I hope this video clears up some misconceptions and encourages you to think about what the price you pay for electricity actually accomplishes on the grid.
More in science
My post last week clearly stimulated some discussion. I know people don't come here for political news, but as a professional scientist it's hard to ignore the chaotic present situation, so here are some things to read, before I talk about a fun paper: Science reports on what is happening with NSF. The short version: As of Friday afternoon, panels are delayed and funds (salary) are still not accessible for NSF postdoctoral fellows. Here is NPR's take. As of Friday afternoon, there is a new court order that specifically names the agency heads (including the NSF director), saying to disburse already approved funds according to statute. Looks like on this and a variety of other issues, we will see whether court orders actually compel actions anymore. Now to distract ourselves with dreams of the future, this paper was published in Nature Photonics, measuring radiation pressure exerted by a laser on a 50 nm thick silicon nitride membrane. The motivation is a grand one: using laser-powered light sails to propel interstellar probes up to a decent fraction (say 10% or more) of the velocity of light. It's easy to sketch out the basic idea on a napkin, and it has been considered seriously for decades (see this 1984 paper). Imagine a reflective sail, say 10 m\(^{2}\) in area and 100 nm thick. When photons at normal incidence bounce from a reflective surface, they transfer momentum \(2\hbar \omega/c\) normal to the surface. If the reflective surface is very thin and low mass, and you can bounce enough photons off it, you can get decent accelerations. Part of the appeal is that this is a spacecraft where you effectively keep the engine (the whopping laser) here at home and don't have to carry it with you. There are braking schemes so that you could try to slow the craft down when it reaches your favorite target system. [A laser-powered lightsail (image from CalTech)] Of course, actually doing this on a scale where it would be useful faces enormous engineering challenges (beyond building whopping lasers and operating them for years at a time with outstanding collimation and positioning). Reflection won't be perfect, so there will be heating. Ideally, you'd want a light sail that passively stabilizes itself in the center of the beam. In this paper, the investigators implement a clever scheme to measure radiation forces, and they test ideas involving dielectric gratings etched into the sail to generate self-stabilization. Definitely more fun to think about such futuristic ideas than to read the news. (An old favorite science fiction story of mine is "The Fourth Profession", by Larry Niven. The imminent arrival of an alien ship at earth is heralded by the appearance of a bright point in the sky, whose emission turns out to be the highly blue-shifted, reflected spectrum of the sun, bouncing off an incoming alien light sail. The aliens really need humanity to build them a launching laser to get to their next destination.)
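Just to get a feel for the napkin math, here's a rough sketch of the acceleration a perfectly reflective sail would see. The laser power, sail dimensions, and density are my own assumed round numbers for scale, not values from the Nature Photonics paper, and the estimate ignores imperfect reflection, beam divergence, any payload, and relativity.

```python
# Radiation pressure on a perfectly reflective sail at normal incidence:
# each photon's momentum is reversed, so the total force is F = 2 P / c.
c = 3.0e8               # speed of light, m/s

# Assumed round numbers, just for scale
laser_power = 1.0e9     # W (a 1 GW laser trained on the sail)
area = 10.0             # m^2
thickness = 100e-9      # m (100 nm)
density = 3100.0        # kg/m^3, roughly silicon nitride

force = 2 * laser_power / c             # N
sail_mass = area * thickness * density  # kg, ignoring any payload
accel = force / sail_mass               # m/s^2

time_to_tenth_c = 0.1 * c / accel       # s, non-relativistic estimate
print(f"Force: {force:.1f} N on a {sail_mass * 1e3:.1f} g sail")
print(f"Acceleration: {accel:.0f} m/s^2 (~{accel / 9.8:.0f} g)")
print(f"Time to reach 0.1 c: {time_to_tenth_c / 3600:.1f} hours")
```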