[Note that this article is a transcript of the video embedded above.] In the early 1900s, Seattle was a growing city hemmed in by geography. To the west was Puget Sound, a vital link to the Pacific Ocean. To the east, Lake Washington stood between the city and the farmland and logging towns of the Cascades. As the population grew, pressure mounted for a reliable east–west transportation route. But Lake Washington wasn’t easy to cross. Carved by glaciers, the lake is deceptively deep, over 200 feet or 60 meters in some places. And under that deep water sits an even deeper problem: a hundred-foot layer of soft clay and mud. Building bridge piers all the way to solid ground would have required staggeringly sized supports. The cost and complexity made it infeasible to even consider. But in 1921, an engineer named Homer Hadley proposed something radical: a bridge that didn’t rest on the bottom at all. Instead, it would float on massive hollow concrete pontoons, riding on the surface like a ship. It took nearly two decades for his idea to gain traction, but with the New Deal’s Public Works Administration, new possibilities for transportation routes across the country began to open up. Federal funds flowed, and construction finally began on what would become the Lacey V. Murrow Bridge. When it opened in 1940, it was the first floating concrete highway of its kind, a marvel of engineering and a symbol of ingenuity under constraint. But floating bridges, by their nature, carry some unique vulnerabilities. And fifty years later, this span would be swallowed by the very lake it crossed. In the decades between and since, the Seattle area has kind of become the floating concrete highway capital of the world. That’s not an official designation, at least not yet, but there aren’t that many of these structures around the globe. And four of the five longest ones on Earth are clustered in one small area of Washington state. You have Hood Canal, Evergreen Point, Lacey V. Murrow, and its neighbor, the Homer M. Hadley Memorial Bridge, named for the engineer who floated the idea in the first place. Washington has had some high-profile failures, but also some remarkable successes, including a test of light rail transit over a floating bridge just last month in June 2025. It’s a niche branch of engineering, full of creative solutions and unexpected stories. So I want to take you on a little tour of the hidden engineering behind them. I’m Grady, and this is Practical Engineering. Floating bridges are basically as old as recorded history. It’s not a complicated idea: place pontoons across a body of water, then span them with a deck. For thousands of years, this straightforward solution has provided a fast and efficient way to cross rivers and lakes, particularly in cases where permanent bridges were impractical or when the need for a crossing was urgent. In fact, floating bridges have been most widely used in military applications, going all the way back to Xerxes crossing the Dardanelles in 480 BCE. They can be made portable, quick to erect, flexible to a wide variety of situations, and they generally don’t require a lot of heavy equipment. There are countless designs that have been used worldwide in various military engagements. But most floating bridges, both ancient and modern, weren’t meant to last. They’re quick to put up, but also quick to take out, either on purpose or by Mother Nature. They provide the means to get in, get across, and get out. So they aren’t usually designed for extreme conditions. 
Transitioning from temporary military crossings to permanent infrastructure was a massive leap, and it brought with it a host of engineering challenges. An obvious one is navigation. A bridge that floats on the surface of the water is, by default, a barrier to boats. So, permanent floating bridges need to make room for maritime traffic. Designers have solved this in several ways, and Washington State offers a few good case studies. The Evergreen Point Floating Bridge includes elevated approach spans on either end, allowing ships to pass beneath before the road descends to water level. The original Lacey V. Murrow Bridge took a different approach. Near its center, a retractable span could be pulled into a pocket formed by adjacent pontoons, opening a navigable channel. But not only did the movable span create interruptions to vehicle traffic on this busy highway, it also created awkward roadway curves that caused frequent accidents. The mechanism was eventually removed after the East Channel Bridge was replaced to increase its vertical clearance, providing boats with an alternative route between the two sides of Lake Washington. Further west, the Hood Canal Bridge incorporates truss spans for smaller craft. And it has hydraulic lift sections for larger ships. Naval Base Kitsap is not far away, so sometimes the bridge even has to open for Navy submarines. These movable spans can rise vertically above the pontoons, while adjacent bridge segments slide back underneath. The system is flexible: one side can be opened for tall but narrow vessels, or both for wider ships. But floating bridges don’t just have to make room for boats. In a sense, they are boats. Many historical spans literally floated on boats lashed together. And that comes with its own complications. Unlike fixed structures, floating bridges are constantly interacting with water: waves, currents, and sometimes even tides and ice. They’re easiest to implement on calm lakes or rivers with minimal flooding, but water is water, and it’s a totally different type of engineering when you’re not counting on firm ground to keep things in place. We don’t just stretch floating bridges across the banks and hope for the best. They’re actually moored in place, usually by long cables and anchors, to keep the structural materials from being overstressed and to prevent movements that would make the roadway uncomfortable or dangerous. Some anchors use massive concrete slabs placed on the lakebed. Others are tied to piles driven deep into the ground. In particularly deep water or soft soil, anchors are lowered to the bottom with water hoses that jet soil away, allowing the anchor to sink deep into the mud. These anchoring systems do double duty, providing both structural integrity and day-to-day safety for drivers, but even with them, floating bridges have some unique challenges. They naturally sit low to the water, which means that in high winds, waves can crash directly onto the roadway, obscuring visibility and creating serious risks to road users. Motion from waves and wind can also cause the bridge to flex and shift beneath vehicles, especially unnerving for drivers unused to the sensation. In Washington State, all the major floating bridges have been closed at various times due to weather. The DOT enforces wind thresholds for each bridge; if the wind exceeds the threshold, the bridge is closed to traffic. Even if the bridge is structurally sound, these closures reflect the reality that in extreme weather, the bridge itself becomes part of the storm. 
But we still haven’t addressed the floating elephant in the pool here: the concrete pontoons themselves. Floating bridges have traditionally been made of wood or inflatable rubber, which makes sense if you’re trying to stay light and portable. But permanent infrastructure demands something more durable. It might seem counterintuitive to build a buoyant structure out of concrete, but it’s not as crazy as it sounds. In fact, civil engineering students compete every year in concrete canoe races hosted by the American Society of Civil Engineers. Actually, I was doing a little recreational math to find a way to make this intuitive, and I stumbled upon a fun little fact. If you want to build a neutrally buoyant, hollow concrete cube, there’s a neat rule of thumb you can use. Just take the wall thickness in inches, and that’s your outer dimension in feet. Want 12-inch-thick concrete walls? You’ll need a roughly 12-foot cube. This is only fun because of the imperial system, obviously. It’s less exciting to say that the two dimensions have a roughly linear relationship with a factor of 12. And I guess it’s not really that useful except that it helps to visualize just how feasible it is to make concrete float.
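If you want to check that rule of thumb yourself, here’s a minimal sketch in Python, assuming typical densities of about 2,400 kg/m³ for concrete and 1,000 kg/m³ for fresh water. The function names and the brute-force search are my own illustration, not anything from the video.

```python
# Quick check of the "wall thickness in inches ~ cube size in feet" rule
# for a neutrally buoyant hollow concrete cube. Illustrative numbers.

RHO_CONCRETE = 2400.0  # kg/m3 (typical concrete, assumed)
RHO_WATER = 1000.0     # kg/m3 (fresh water)
FT_PER_IN = 1.0 / 12.0

def floats(outer_ft: float, wall_in: float) -> bool:
    """True if the sealed hollow cube is less dense than water overall."""
    t = wall_in * FT_PER_IN              # wall thickness in feet
    inner = outer_ft - 2.0 * t           # inner cavity dimension
    concrete_vol = outer_ft**3 - inner**3
    mass = concrete_vol * RHO_CONCRETE   # consistent units cancel below
    displaced = outer_ft**3 * RHO_WATER  # water displaced when submerged
    return mass < displaced

def neutral_size(wall_in: float) -> float:
    """Smallest outer dimension (ft) at which the cube just floats."""
    size = 2.0 * wall_in * FT_PER_IN     # start from a solid (all-wall) cube
    while not floats(size, wall_in):
        size += 0.01
    return size

for wall in (6, 12, 18, 24):
    print(f"{wall:2d}-inch walls -> ~{neutral_size(wall):.1f}-foot cube")
```

The neutral sizes land within a few percent of the rule: roughly 6.1 feet for 6-inch walls, 12.2 feet for 12-inch walls, and so on.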
Of course, real pontoons have to do more than just barely float themselves. They have to carry the weight of a deck and whatever crosses it with an acceptable margin of safety. That means they’re built much larger than a neutrally buoyant box. But mass isn’t the only issue. Concrete is a reliable material, and if you’ve watched the channel for a while, you know that there are a few things you can count on concrete to do, and one of them is to crack. Usually not a big deal for a lot of structures, but that’s a pretty big problem if you’re trying to keep water out of a pontoon. Designers put enormous effort into preventing leaks. Modern pontoons are subdivided into sealed chambers. Watertight doors are installed between the chambers so they can still be accessed and inspected. Leak detection systems provide early warnings if anything goes wrong. And piping is pre-installed with pumps on standby, so if a leak develops, the chambers can be pumped dry before disaster strikes. The concrete recipe itself gets extra attention. Specialized mixes reduce shrinkage, improve water resistance, and resist abrasion. Even temperature control during curing matters. For the replacement of the Evergreen Point Bridge, contractors embedded heating pipes in the base slabs of the pontoons, allowing them to match the temperature of the walls as they were cast. This enabled the entire structure to cool down at a uniform rate, reducing thermal stresses that could lead to cracking. There were also errors during construction, though. A flaw in the post-tensioning system led to millions of dollars in change orders halfway through construction and delayed the project significantly while they worked out a repair. But there’s a good reason why they were so careful to get the designs right on that project. Of the four floating bridges in Washington state, two of them have sunk. In February 1979, a severe storm caused the western half of the Hood Canal Bridge to lose its buoyancy. Investigations revealed that open hatches allowed rain and waves to blow in, slowly filling the pontoons until the western half of the bridge sank. The DOT had to establish a temporary ferry service across the canal for nearly four years while the western span was rebuilt. Then, in 1990, it happened again. This time, the failure occurred during rehabilitation work on the Lacey V. Murrow Bridge while it was closed. Contractors were using hydrodemolition, high-pressure water jets, to remove old concrete from the road deck. Because the water was considered contaminated, it had to be stored rather than released into Lake Washington. Engineers calculated that the pontoon chambers could hold the runoff safely. To accommodate that, they removed the watertight doors that normally separated the internal compartments. But, when a storm hit over Thanksgiving weekend, water flooded into the open chambers. The bridge partially sank, severing cables on the adjacent Hadley Bridge and delaying the project by more than a year - a potent reminder that even small design or operational oversights can have major consequences for this type of structure. And we still have a lot to learn. Recently, Sound Transit began testing light rail trains on the Homer Hadley Bridge, introducing a whole new set of engineering puzzles. One is electricity. With power running through the rails, there was concern about stray currents damaging the bridge. To prevent this, the track is mounted on insulated blocks, with drip caps to prevent water from creating a conductive path. And then there’s the bridge movement. Unlike typical bridges, a floating bridge can roll, pitch, and yaw with weather, lake level, and traffic loads. The joints between the fixed shoreline and the bridge have to be able to accommodate movement. It’s usually not an issue for cars, trucks, bikes, or pedestrians, but trains require very precise track alignment. Engineers had to develop an innovative “track bridge” system. It uses specialized bearings to distribute every kind of movement over a longer distance, keeping the tracks aligned even as the floating structure shifts beneath them. Testing in June went well, but there’s more to be done before you can ride the Link light rail across a floating highway. If floating bridges are the present, floating tunnels might be the future. I talked about immersed tube tunnels in a previous video. They’re used around the world, made by lowering precast sections to the seafloor and connecting them underwater. But what if, instead of resting on the bottom, those tunnels floated in the water column? It should be possible to suspend a tunnel with negative buoyancy using surface pontoons or even tether one with positive buoyancy to the bottom using anchors. In deep water, this could dramatically shorten tunnel lengths, reduce excavation costs, and minimize environmental impacts. Norway has actually proposed such a tunnel across a fjord on its western coast, a project that, if realized, would be the first of its kind. Like floating bridges before it, this tunnel would face a long list of unknowns. But that’s the essence of engineering: meeting each challenge with solutions tailored to a specific place and need. There aren’t many locations where floating infrastructure makes sense. The conditions have to be just right - calm waters, minimal ice, manageable tides. But where the conditions do allow, floating bridges and, hopefully, their future descendants open up new possibilities for connection, mobility, and engineering.
[Note that this article is a transcript of the video embedded above.] Wichita Falls, Texas, went through the worst drought in its history in 2011 and 2012. For two years in a row, the area saw its average annual rainfall roughly cut in half, decimating the levels in the three reservoirs used for the city’s water supply. Looking ahead, the city realized that if the hot, dry weather continued, they would be completely out of water by 2015. Three years sounds like a long runway, but when it comes to major public infrastructure projects, it might as well be overnight. Between permitting, funding, design, and construction, three years barely gets you to the starting line. So the city started looking for other options. And they realized there was one source of water nearby that was just being wasted - millions of gallons per day flushed down the Wichita River. I’m sure you can guess where I’m going with this. It was the effluent from their sewage treatment plant. The city asked the state regulators if they could try something that had never been done before at such a scale: take the discharge pipe from the wastewater treatment plant and run it directly into the purification plant that produces most of the city’s drinking water. And the state said no. So they did some more research and testing and asked again. By then, the situation had become an emergency. This time, the state said yes. And what happened next would completely change the way cities think about water. I’m Grady and this is Practical Engineering. You know what they say, wastewater happens. It wasn’t that long ago that raw sewage was simply routed into rivers, streams, or the ocean to be carried away. Thankfully, environmental regulations put a stop to that, or at least significantly curbed the amount of wastewater being set loose without treatment. Wastewater plants across the world do a pretty good job of removing pollutants these days. In fact, I have a series of videos that go through some of the major processes if you want to dive deeper after this. In most places, the permits that allow these plants to discharge set strict limits on contaminants like organics, suspended solids, nutrients, and bacteria. And in most cases, they’re individualized. The permit limits are based on where the effluent will go, how that water body is used, and how well it can tolerate added nutrients or pollutants. And here’s where you start to see the issue with reusing that water: “clean enough” is a sliding scale. Depending on how water is going to be used or what or who it’s going to interact with, our standards for cleanliness vary. If you have a dog, you probably know this. They should drink clean water, but a few sips of a mud puddle in a dirty street, and they’re usually just fine. For you, that might be a trip to the hospital. Natural systems can tolerate a pretty wide range of water quality, but when it comes to drinking water for humans, it should be VERY clean. So the easiest way to recycle treated wastewater is to use it in ways that don’t involve people. That idea’s been around for a while. A lot of wastewater treatment plants apply effluent to land as a disposal method, avoiding the need for discharge to a natural water body. Water soaks into the ground, kind of like a giant septic system. But that comes with some challenges. It only works if you’ve got a lot of land with no public access, and a way to keep the spray from drifting into neighboring properties. 
Easy at a small scale, but for larger plants, it just isn’t practical engineering. Plus, the only benefits a utility gets from the effluent are some groundwater recharge and maybe a few hay harvests per season. So, why not send the effluent to someone else who can actually put it to beneficial use? If only it were that simple. As soon as a utility starts supplying water to someone else, things get complicated because you lose a lot of control over how the effluent is used. Once it’s out of your hands, so to speak, it’s a lot harder to make sure it doesn’t end up somewhere it shouldn’t, like someone’s mouth. So, naturally, the permitting requirements become stricter. Treatment processes get more complicated and expensive. You need regular monitoring, sampling, and laboratory testing. In many places in the world, reclaimed water runs in purple pipes so that someone doesn’t inadvertently connect to the lines thinking they’re potable water. In many cases, you need an agreement in place with the end user, making sure they’re putting up signs, fences, and other means of keeping people from drinking the water. And then you need to plan for emergencies - what to do if a pipe breaks, if the effluent quality falls below the standards, or if a cross-connection is made accidentally. It’s a lot of work - time, effort, and cost - to do it safely and follow the rules. And those costs have to be weighed against the savings that reusing water creates. In places that get a lot of rain or snow, it’s usually not worth it. But in many US states, particularly those in the southwest, this is a major strategy to reduce the demand on fresh water supplies. Think about all the things we use water for where its cleanliness isn’t that important. Irrigation is a big one - crops, pastures, parks, highway landscaping, cemeteries - but that’s not all. Power plants use huge amounts of water for cooling. Street sweeping and dust control. In nearly the entire developed world, we use drinking-quality water to flush toilets! You can see where there might be cases where it makes good sense to reclaim wastewater, and despite all the extra challenges, its use is fairly widespread. One of the first plants was built in 1926 at Grand Canyon Village, which supplied reclaimed water to a power plant and for use in steam locomotives. Today, these systems can be massive, with miles and miles of purple pipes run entirely separate from the freshwater piping. I’ve talked about this a bit on the channel before. I used to live near a pair of water towers in San Antonio that were at two different heights above ground. That just didn’t make any sense until I realized they weren’t connected; one of them was for the reclaimed water system that didn’t need as much pressure in the lines. Places like Phoenix, Austin, San Antonio, Orange County, Irvine, and Tampa all have major water reclamation programs. And it’s not just a US thing. Abu Dhabi, Beijing, and Tel Aviv all have infrastructure to make beneficial use of treated municipal wastewater, just to name a few. Because of the extra treatment and requirements, many places put reclaimed water in categories based on how it gets used. The higher the risk of human contact, the tighter the pollutant limits get. For example, if a utility is just selling effluent to farmers, ranchers, or for use in construction, exposure to the public is minimal. Disinfecting the effluent with UV or chlorine may be enough to meet requirements. And often that’s something that can be added pretty simply to an existing plant. 
But many reclaimed water users are things like golf courses, schoolyards, sports fields, and industrial cooling towers, where people are more likely to be exposed. In those cases, you often need a sewage plant specifically designed for the purpose or at least major upgrades to include what the pros call tertiary treatment processes - ways to target pollutants we usually don’t worry about and improve the removal rates of the ones we do. These can include filters to remove suspended solids, chemicals that bind to nutrients, and stronger disinfection to more effectively kill pathogens. This creates a conundrum, though. In many cases, we treat wastewater effluent to higher standards than we normally would in order to reclaim it, but only for nonpotable uses, with strict regulations about human contact. But if it’s not being reclaimed, the quality standards are lower, and we send it downstream. If you know how rivers work, you probably see the inconsistency here. Because in many places, down the river, is the next city with its water purification plant whose intakes, in effect, reclaim that treated sewage from the people upstream. This isn’t theoretical - it’s just the reality of how humans interact with the water cycle. We’ve struggled with the problems it causes for ages. In 1906, Missouri sued Illinois in the Supreme Court when Chicago reversed its river, redirecting its water (and all the city’s sewage) toward the Mississippi River. If you live in Houston, I hate to break it to you, but a big portion of your drinking water comes from the flushes and showers in Dallas. There have been times when wastewater effluent made up half of the flow in the Trinity River. But the question is: if they can do it, why can’t we? If our wastewater effluent is already being reused by the city downstream to purify into drinking water, why can’t we just keep the effluent for ourselves and do the same thing? And the answer again is complicated. It starts with what’s called an environmental buffer. Natural systems offer time to detect failures, dilute contaminants, and even clean the water a bit - sunlight disinfects, bacteria consume organic matter. That’s the big difference when one city, in effect, reclaims water from another upstream: there’s nature in between. So a lot of water reclamation systems, called indirect potable reuse, do the same thing: you discharge the effluent into a river, lake, or aquifer, then pull it out again later for purification into drinking water. By then, it’s been diluted and treated somewhat by the natural systems. Direct potable reuse projects skip the buffer and pipe straight from one treatment plant to the next. There’s no margin for error provided by the environmental buffer. So, you have to engineer those same protections into the system: real-time monitoring, alarms, automatic shutdowns, and redundant treatment processes. Then there’s the issue of contaminants of emerging concern: pharmaceuticals, PFAS, personal care products - things that pass through people or households and end up in wastewater in tiny amounts. Individually, they’re in parts per billion or trillion. But when you close the loop and reuse water over and over, those trace compounds can accumulate. Many of these aren’t regulated because they’ve never reached concentrations high enough to cause concern, or there just isn’t enough knowledge about their effects yet. That’s slowly changing, and it presents a big challenge for reuse projects. 
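To see why closing the loop worries people, here’s a toy mass-balance model of a trace compound in a reuse loop. Every number in it is an assumption I made up for illustration - not data from any real utility.

```python
# Toy mass balance for a trace contaminant in a closed water loop.
# All values are assumed, illustrative numbers - not from a real system.

INFLUX = 100.0   # ng/L added to the water during each use cycle
RECYCLE = 0.5    # fraction of the supply that is reclaimed effluent
REMOVAL = 0.80   # fraction of the compound removed per treatment pass

def simulate(cycles: int) -> float:
    """Concentration in the distributed water after repeated reuse."""
    conc = 0.0
    for _ in range(cycles):
        used = conc + INFLUX              # water picks up new contaminant
        treated = used * (1.0 - REMOVAL)  # advanced treatment pass
        conc = RECYCLE * treated          # blended with clean fresh supply
    return conc

for n in (1, 2, 5, 20):
    print(f"after {n:2d} cycles: {simulate(n):6.2f} ng/L")

# The loop converges to a steady state rather than growing forever:
# c* = RECYCLE * (1 - REMOVAL) * (c* + INFLUX)
steady = RECYCLE * (1 - REMOVAL) * INFLUX / (1 - RECYCLE * (1 - REMOVAL))
print(f"steady state: {steady:6.2f} ng/L")
```

The takeaway is that these trace compounds don’t grow without bound, but they do settle at a level set by the removal efficiency and the recycle fraction, which is why they get so much scrutiny in reuse permitting.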
They can be dealt with at the source by regulating consumer products, encouraging proper disposal of pharmaceuticals (instead of flushing them), and imposing pretreatment requirements for industries. It can also happen at the treatment plant with advanced technologies like reverse osmosis, activated carbon, advanced oxidation, and bioreactors that break down micro-contaminants. Either way, it adds cost and complexity to a reuse program.
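One way regulators reason about stacking those treatment barriers is in “log removal” terms: each barrier gets credit for knocking a pathogen down by some power of ten, and the credits add. Here’s a small sketch of that bookkeeping; the barrier names and credit values are illustrative assumptions of mine, not the requirements from any actual permit.

```python
# Log-removal bookkeeping across stacked treatment barriers.
# Credit values below are illustrative assumptions, not a real permit.

barriers = {
    "conventional wastewater treatment": 2.0,
    "microfiltration": 4.0,
    "reverse osmosis": 2.0,
    "UV disinfection": 4.0,
}

total_credits = sum(barriers.values())       # credits add in log space
fraction_remaining = 10.0 ** -total_credits  # of whatever entered the train

print(f"total credit: {total_credits:.0f}-log")
print(f"pathogens remaining: 1 in {1.0 / fraction_remaining:,.0f}")
```

Stack enough barriers and the arithmetic gets comfortable quickly; each one, though, adds capital and operating cost.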
But really, the biggest problem with wastewater reuse isn’t technical - it’s psychological. The so-called “yuck factor” is real. People don’t want to drink sewage. Indirect reuse projects have a big benefit here. With some nature in between, it’s not just treated wastewater; it’s a natural source of water with treated wastewater in it. It’s kind of a story we tell ourselves, but we lose the benefit of that with direct reuse: knowing your water came from a toilet - even if it’s been purified beyond drinking water standards - makes people uneasy. You might not think about it, but turning the tap on, putting that water in a glass, and taking a drink is an enormous act of trust. Most of us don’t understand water treatment and how it happens at a city scale. So the trust that it’s safe to drink largely comes from seeing other people do it and past experience of doing it over and over and not getting sick. The issue is that, when you add one bit of knowledge to that relative void of understanding - this water came directly from sewage - it throws that trust off balance. It forces you to rely not on past experience but on the people and processes in place, most of which you don’t understand deeply, and generally none of which you can actually see. It’s not as simple as just revulsion. It shakes up your entire belief system. And there’s no engineering fix for that. Especially for direct potable reuse, public trust is critical. So on top of the infrastructure, these programs also involve major public awareness campaigns. Utilities have to put themselves out there, gather feedback, respond to questions, be empathetic to a community’s values, and try to help people understand how we ensure water quality, no matter what the source is. But also, like I said, a lot of that trust comes from past experience. Not everyone can be an environmental engineer or licensed treatment plant operator. And let’s be honest - utilities can’t reach everyone. How many public meetings about water treatment have you ever attended? So, in many places, that trust is just going to have to be built by doing it right, doing it well, and doing it for a long time. But someone has to be first. In the U.S., at least on the city scale, that drinking water guinea pig was Wichita Falls. They launched a massive outreach campaign, invited experts for tours, and worked to build public support. But at the end of the day, they didn’t really have a choice. The drought really was that severe. They spent nearly four years under intense water restrictions. Usage dropped to a third of normal demand, but it still wasn’t enough. So, in collaboration with state regulators, they designed an emergency direct potable reuse system. They literally helped write the rules as they went, since no one had ever done it before. After two months of testing and verification, they turned on the system in July 2014. It made national headlines. The project ran for exactly one year. Then, in 2015, a massive flood ended the drought and filled the reservoirs in just three weeks. The emergency system was always meant to be temporary. Water essentially went through three treatment plants: the wastewater plant, a reverse osmosis plant, and then the regular water purification plant. That’s a lot of treatment, which is a lot of expense, but they needed the failsafes and redundancy to get the state on board with the project. The pipe connecting the two plants was above ground and later repurposed for the city’s indirect potable reuse system, which is still in use today. In the end, they reclaimed nearly two billion gallons of wastewater as drinking water. And they did it with 100% compliance with the standards. But more importantly, they showed that it could be done, essentially unlocking a new branch on the skill tree of engineering that other cities can emulate and build on.
[Note that this article is a transcript of the video embedded above.] “The big black stacks of the Ilium Works of the Federal Apparatus Corporation spewed acid fumes and soot over the hundreds of men and women who were lined up before the red-brick employment office.” That’s the first line of one of my favorite short stories, written by Kurt Vonnegut in 1955. It paints a picture of a dystopian future that, thankfully, didn’t really come to be, in part because of those stacks. In some ways, air pollution is kind of a part of life. I’d love to live in a world where the systems, materials, and processes that make my life possible didn’t come with any emissions, but it’s just not the case... From the time that humans discovered fire, we’ve been methodically weighing the benefits of warmth, comfort, and cooking against the disadvantages of carbon monoxide exposure and particulate matter less than 2.5 microns in diameter… Maybe not in that exact framework, but basically, since the dawn of humanity, we’ve had to deal with smoke one way or another. Since we can’t accomplish much without putting unwanted stuff into the air, the next best thing is to manage how and where it happens to try and minimize its impact on public health. Of course, any time there’s a balancing act like that with technical issues at its core, the engineers get involved, not so much to help decide where to draw the line, but to develop systems that can stay below it. And that’s where the smokestack comes in. Its function probably seems obvious; you might have a chimney in your house that does a similar job. But I want to give you a peek behind the curtain into the Ilium Works of the Federal Apparatus Corporation of today and show you what goes into engineering one of these stacks at a large industrial facility. I’m Grady, and this is Practical Engineering. We put a lot of bad stuff in the air, and in a lot of different ways. There are roughly 200 regulated hazardous air pollutants in the United States, many with names I can barely pronounce. In many cases, the industries that would release these contaminants are required to deal with them at the source. A wide range of control technologies are put into place to clean dangerous pollutants from the air before it’s released into the environment. One example is coal-fired power plants. Coal, in particular, releases a plethora of pollutants when combusted, so, in many countries, modern plants are required to install control systems. Catalytic reactors remove nitrogen oxides. Electrostatic precipitators collect particulates. Scrubbers use lime (the mineral, not the fruit) to strip away sulfur dioxide. And I could go on. In some cases, emission control systems can represent a significant proportion of the costs involved in building and operating a plant. But these primary emission controls aren’t always feasible for every pollutant, at least not for 100 percent removal. There’s a very old saying that “the solution to pollution is dilution.” It’s not really true on a global scale. Case in point: There’s no way to dilute the concentration of carbon dioxide in the atmosphere, or rather, it’s already as dilute as it’s going to get. But, it can be true on a local scale. Many pollutants that affect human health and the environment are short-lived; they chemically react or decompose in the atmosphere over time instead of accumulating indefinitely. And, for a lot of chemicals, there are concentration thresholds below which the consequences for human health are negligible. 
In those cases, dilution, or really dispersion, is a sound strategy to reduce their negative impacts, and so, in some cases, that’s what we do, particularly at major point sources like factories and power plants. One of the tricks to dispersion is that many plumes are naturally buoyant. Naturally, I’m going to use my pizza oven to demonstrate this. Not all, but most pollutants we care about are a result of combustion: burning stuff up. So the plume is usually hot. We know hot air is less dense, so it naturally rises. And the hotter it is, the faster that happens. You can see when I first start the fire, there’s not much air movement. But as the fire gets hotter in the oven, the plume speeds up, ultimately rising higher into the air. That’s the whole goal: get the plume high above populated areas where the pollutants can be dispersed to a minimally harmful concentration. It sounds like a simple solution - just run our boilers and furnaces super hot to get enough buoyancy for the combustion products to disperse. The problem with that solution is that the whole reason we combust things is usually to recover the heat. So if you’re sending a lot of that heat out of the system, just because it makes the plume disperse better, you’re losing thermodynamic efficiency. It’s wasteful. That’s where the stack comes in. Let me put mine on and show you what I mean. I took some readings with the anemometer with the stack on and off. The airspeed with the stack on was around double what it was with the stack off - about two meters per second compared with one. But it’s a little tougher to understand why. It’s intuitive that as you move higher in a column of fluid, the pressure goes down (since there’s less weight of the fluid above). The deeper you dive in a pool, the more pressure you feel. The higher you fly in a plane or climb a mountain, the lower the pressure. The slope of that line is proportional to a fluid’s density. You don’t feel much of a pressure difference climbing a set of stairs because air isn’t very dense. If you travel the same distance in water, you’ll definitely notice the difference. So let’s look at two columns of fluid. One is the ambient air and the other is the air inside a stack. Since it’s hotter, the air inside the stack is less dense. Both columns start at the same pressure at the bottom, but the higher you go, the more the pressure diverges. It’s kind of like deep sea diving in reverse. In water, the deeper you go, the greater the pressure you feel. In a stack, the higher you are in a column of hot air, the more buoyant you feel compared to the outside air. This is the genius of a smokestack. It creates this difference in pressure between the inside and outside that drives greater airflow for a given temperature. Here’s the basic equation for a stack effect. I like to look at equations like this divided into what we can control and what we can’t. We don’t get to adjust the atmospheric pressure or the outside temperature, and the remaining term is just a physical constant. But you can see, with a stack, an engineer now has two knobs to turn: the temperature of the gas inside and the height of the stack.
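For the curious, that equation is usually written something like ΔP = C · a · h · (1/T_out − 1/T_in), with C a physical constant, a the atmospheric pressure, h the stack height, and the temperatures in absolute units. Here’s a little sketch of how the two knobs trade off; the oven-ish numbers are my own assumptions, not measurements from the video.

```python
# Stack-effect draft pressure: dP = C * a * h * (1/T_out - 1/T_in)
# C ~ g*M/R ~ 0.0342 K/m for air. All numbers below are assumptions.

C = 0.0342         # K/m, gravity x molar mass of air / gas constant
a = 101_325.0      # Pa, atmospheric pressure (not ours to adjust)
T_OUT = 293.0      # K, ambient air around 20 C (also not ours to adjust)

def draft_pa(height_m: float, t_in_c: float) -> float:
    """Pressure difference driving flow up the stack, in pascals."""
    t_in = t_in_c + 273.15
    return C * a * height_m * (1.0 / T_OUT - 1.0 / t_in)

# The two knobs an engineer can turn: flue temperature and stack height.
for h in (0.5, 1.0, 2.0):
    for t in (150.0, 250.0):
        print(f"h = {h:3.1f} m, T_in = {t:3.0f} C -> {draft_pa(h, t):4.1f} Pa")
```

Notice the relationship is linear in height, which is why a longer pipe is such a cheap win, while temperature enters through a reciprocal and has diminishing returns.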
I did my best to keep the temperature constant in my pizza oven and took some airspeed readings. First with no stack. Then with the stock stack. Then with a megastack. By the way, this melted my anemometer; I should have seen that coming. Thankfully, I got the measurements before it melted. My megastack nearly doubled the airspeed again at around three-and-a-half meters per second versus the two with just the stack that came with the oven. There’s something really satisfying about this stack effect to me. No moving parts or fancy machinery. Just put on a longer pipe, and you’ve fundamentally changed the physics of the whole situation. And it’s a really important tool in the environmental engineer’s toolbox to increase airflow upward, allowing contaminants to flow higher into the atmosphere where they can disperse. But this is not particularly revolutionary… unless you’re talking about the Industrial Revolution. When you look at all the pictures of the factories in the 19th century, those stacks weren’t there to improve air quality, if you can believe it. The increased airflow generated by a stack just created more efficient combustion for the boilers and furnaces. Any benefits to air quality in the cities were secondary. With the advent of diesel and electric motors, we could use forced drafts, reducing the need for a tall stack to increase airflow. That drove the decline of the forests of industrial chimneys that marked the landscape in the 19th century. But they’re obviously not all gone, because that secondary benefit of air quality turned into the primary benefit as environmental rules about air pollution became stricter. Of course, there are some practical limits that aren’t taken into account by that equation I showed. The plume cools down as it moves up the stack to the outside, so its density isn’t constant all the way up. I let my fire die down a bit so it wouldn’t melt the thermometer (learned my lesson), and then took readings inside the oven and at the top of the stack. You can see my pizza oven flue gas is around 210 degrees at the top of the megastack, but it’s roughly 250 inside the oven. After the success of the megastack on my pizza oven, I tried the super-megastack with not much improvement in airflow: about four meters per second. The warm air just got too cool by the time it reached the top. And I suspect that frictional drag in the longer pipe contributed to that as well. So, really, depending on how insulating your stack is, our graph of height versus pressure actually ends up looking like this. And this can be its own engineering challenge. Maybe you’ve gotten backdrafts in your fireplace at home because the fire wasn’t big or hot enough to create that large difference in pressure. You can see there are a lot of factors at play in designing these structures, but so far, all we’ve done is get the air moving faster. But that’s not the end goal. The purpose is to reduce the concentration of pollutants that we’re exposed to. So engineers also have to consider what happens to the plume once it leaves the stack, and that’s where things really get complicated. In the US, we have National Ambient Air Quality Standards that regulate six so-called “criteria” pollutants that are relatively widespread: carbon monoxide, lead, nitrogen dioxide, ozone, particulates, and sulfur dioxide. We have hard limits on all these compounds with the intention that they are met at all times, in all locations, under all conditions. Unfortunately, that’s not always the case. You can go on EPA’s website and look at the so-called “non-attainment” areas for the various pollutants. But we do strive to meet the standards through a list of measures that is too long to go into here. And that is not an easy thing to do. 
Not every source of pollution comes out of a big stationary smokestack where it’s easy to measure and control. Cars, buses, planes, trucks, trains, and even rockets create lots of contaminants that vary by location, season, and time of day. And there are natural processes that contribute as well. Forests and soil microbes release volatile organic compounds that can lead to ozone formation. Volcanic eruptions and wildfires release carbon monoxide and sulfur dioxide. Even dust storms put particulates in the air that can travel across continents. And hopefully you’re seeing the challenge of designing a smokestack. The primary controls like scrubbers and precipitators get most of the pollutants out, and hopefully all of the ones that can’t be dispersed. But what’s left over and released has to avoid pushing concentrations above the standards. That design has to work within the very complicated and varying context of air chemistry and atmospheric conditions that a designer has no control over. Let me show you a demo. I have a little fog generator set up in my garage with a small fan simulating the wind. This isn’t a great example because the airflow from the fan is pretty turbulent compared to natural winds. You occasionally get some fog at the surface, but you can see my plume mainly stays above the surface, dispersing as it moves with the wind. But watch what happens when I put a building downstream. The structure changes the airflow, creating a downwash effect and pulling my plume with it. Much more frequently you see the fog at the ground level downstream. And this is just a tiny example of how complex the behavior of these plumes can be. Luckily, there’s a whole field of engineering to characterize it. There are really just two major transport processes for air pollution. Advection describes how contaminants are carried along by the wind. Diffusion describes how those contaminants spread out through turbulence. Gravity also affects air pollution, but it doesn’t have a significant effect except on heavier-than-air particulates. With some math and simplifications of those two processes, you can do a reasonable job predicting the concentration of any pollutant at any point in space as it moves and disperses through the air. Here’s the basic equation for that, and if you’ll join me for the next 2 hours, we’ll derive this and learn the meaning of each term… Actually, it might take longer than that, so let’s just look at a graphic. You can see that as the plume gets carried along by the wind, it spreads out in what’s basically a bell curve, or Gaussian distribution, in the planes perpendicular to the wind direction.
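If you’d rather see it in code than on a graphic, here’s a bare-bones version of that Gaussian plume calculation. In a real model, the dispersion coefficients come from atmospheric stability tables; the simple power laws below are rough stand-ins I made up, so treat every number as an assumption for illustration.

```python
import math

# Bare-bones Gaussian plume: steady wind, flat terrain, ground reflection.
# sigma_y / sigma_z grow with downwind distance; real models pull them
# from stability-class tables. These power laws are rough stand-ins.

def sigma_y(x_m: float) -> float:
    return 0.08 * x_m ** 0.9     # m, crosswind spread (assumed)

def sigma_z(x_m: float) -> float:
    return 0.06 * x_m ** 0.85    # m, vertical spread (assumed)

def concentration(q_g_s, u_m_s, stack_h, x, y=0.0, z=1.5):
    """Pollutant concentration (g/m^3) at a point downwind.

    q_g_s: emission rate, u_m_s: wind speed, stack_h: effective stack
    height, x/y/z: downwind, crosswind, and height coordinates in meters.
    """
    sy, sz = sigma_y(x), sigma_z(x)
    crosswind = math.exp(-y**2 / (2 * sy**2))
    # The ground acts like a mirror: add an "image" source below grade.
    vertical = (math.exp(-(z - stack_h)**2 / (2 * sz**2))
                + math.exp(-(z + stack_h)**2 / (2 * sz**2)))
    return q_g_s / (2 * math.pi * u_m_s * sy * sz) * crosswind * vertical

# A taller stack cuts the ground-level concentration dramatically.
for h in (20.0, 50.0, 100.0):
    c = concentration(q_g_s=100.0, u_m_s=5.0, stack_h=h, x=1000.0)
    print(f"stack {h:5.1f} m -> {c * 1e6:8.2f} ug/m3 at 1 km downwind")
```

Even in this toy version, you can see why height is such a powerful knob: the concentration at breathing level falls off steeply as the plume’s centerline moves away from the ground.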
But even that is a bit too simplified to make any good decisions with, especially when the consequences of getting it wrong land on public health. A big reason for that is atmospheric stability. And this can make things even more complicated, but I want to explain the basics, because the effect on plumes of gas can be really dramatic. You probably know that air expands as it moves upward; there’s less pressure as you go up because there is less air above you. And as any gas expands, it cools down. So there’s this relationship between height and temperature we call the adiabatic lapse rate. It’s about 10 degrees Celsius for every kilometer up or about 28 Fahrenheit for every mile up. But the actual atmosphere doesn’t always follow this relationship. For example, rising air parcels can cool more slowly than the surrounding air. This makes them warmer and less dense, so they keep rising, promoting vertical motion in a positive feedback loop called atmospheric instability. You can even get a temperature inversion where you have cooler air below warmer air, something that can happen in the early morning when the ground is cold. And as the environmental lapse rate varies from the adiabatic lapse rate, the plumes from stacks change. In neutral conditions, you usually get a coning plume, similar to what our Gaussian distribution from before predicts. In unstable conditions, you get a lot of mixing, which leads to a looping plume. And things really get weird for temperature inversions because they basically act like lids for vertical movement. You can get a fanning plume that rises to a point, but then only spreads horizontally. You can also get a trapping plume, where the air gets stuck between two inversions. You can have a lofting plume, where the air is above the inversion with stable conditions below and unstable conditions above. And worst of all, you can have a fumigating plume when there are unstable conditions below an inversion, trapping and mixing the plume toward the ground surface. And if you pay attention to smokestacks, fires, and other types of emissions, you can identify these different types of plumes pretty easily.
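Here’s the flavor of that reasoning in a few lines of code: compare the measured environmental lapse rate against the dry adiabatic rate of roughly 9.8 °C per kilometer, and you get both a stability estimate and a guess at the plume shape. The thresholds are simplified assumptions of mine; real classification schemes also fold in wind, sun, and cloud cover.

```python
# Classify atmospheric stability from a measured (environmental) lapse
# rate, then guess the plume shape. Thresholds are simplified assumptions.

DRY_ADIABATIC = 9.8  # deg C of cooling per km of altitude

def classify(env_lapse_c_per_km: float) -> tuple[str, str]:
    if env_lapse_c_per_km > DRY_ADIABATIC:
        return "unstable", "looping plume, lots of vertical mixing"
    if env_lapse_c_per_km > 0.0:
        return "stable-to-neutral", "coning plume, the Gaussian picture"
    # Temperature rising with height: an inversion, a lid on mixing.
    return "inversion", "fanning plume, spreads sideways under the lid"

for lapse in (12.0, 6.5, -3.0):  # superadiabatic, typical, inversion
    stability, plume = classify(lapse)
    print(f"lapse {lapse:+5.1f} C/km -> {stability}: {plume}")
```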
Hopefully you’re seeing now how much goes into this. Engineers have to keep track of the advection and diffusion, wind speed and direction, atmospheric stability, the effects of terrain and buildings on all those factors, plus the pre-existing concentrations of all the criteria pollutants from other sources, which vary in time and place. All that to demonstrate that your new source of air pollution is not going to push the concentrations at any place, at any time, under any conditions, beyond what the standards allow. That’s a tall order, even for someone who loves Gaussian distributions. And often the answer to that tall order is an even taller smokestack. But to make sure, we use software. The EPA has developed models that can take all these factors into account to simulate, essentially, what would happen if you put a new source of pollution into the world and at what height. So why are smokestacks so tall? I hope you’ll agree with me that it turns out to be a pretty complicated question. And it’s important, right? These stacks are expensive to build and maintain. Those costs trickle down to us through the costs of the products and services we buy. They have a generally negative visual impact on the landscape. And they have a lot of other engineering challenges too, like resonance in the wind. And on the other hand, we have public health, arguably one of the most critical design criteria that can exist for an engineer. It’s really important to get this right. I think our air quality regulations do a lot to make sure we strike a good balance here. There are even rules limiting how much credit you can get for building a stack higher for greater dispersion, to make sure that we’re not using excessively tall stacks in lieu of more effective, but often more expensive, emission controls and strategies. In a perfect world, none of the materials or industrial processes that we rely on would generate concentrated plumes of hazardous gases. We don’t live in that perfect world, but we are pretty fortunate that, at least in many places on Earth, air quality is something we don’t have to think too much about. And to thank for it, we have a relatively small industry of environmental professionals who do think about it - a whole lot. You know, for a lot of people, this is their whole career - what they ponder from 9 to 5 every day. What most of us would rather keep out of mind, they face head-on, developing engineering theories, professional consensus, sensible regulations, modeling software, and more - just so we can breathe easy.
[Note that this article is a transcript of the video embedded above.] The original plan to get I-95 over the Baltimore Harbor was a double-deck bridge from Fort McHenry to Lazaretto Point. The problem with the plan was this: the bridge would have to be extremely high so that large ships could pass underneath, dwarfing and overshadowing one of the US’s most important historical landmarks. Fort McHenry famously repelled a massive barrage and attack from the British Navy in the War of 1812, and inspired what would later become the national anthem. An ugly bridge would detract from its character, and a beautiful one would compete with it. So they took the high road by building a low road and decided to go underneath the harbor instead. Rather than bore a tunnel through the soil and rock below like the Channel Tunnel, the entire thing was prefabricated in sections and installed from the water surface above - a construction technique called immersed tube tunneling. This seems kind of simple at first, but the more you think about it, the more you realize how complicated it actually is to fabricate tunnel sections the length of a city block, move them into place, and attach them together so watertight and safe that, eventually, you can drive or take a train from one side to the other. Immersed tube construction makes tunneling less like drilling a hole and more like docking a spacecraft. Materials and practices vary across the world, but I want to try and show you, at least in a general sense, how this works. I’m Grady, and this is Practical Engineering. One of the big problems with bridges over navigable waterways is that they have to be so tall. Building high up isn’t necessarily the challenge; it’s getting up and back down. There are limits to how steep a road can be for comfort, safety, and efficiency, and railroads usually have even stricter constraints on grade. That means the approaches to high bridges have to be really long, increasing costs and, in dense cities, taking up more valuable space. At a typical 4 percent highway grade, for example, every 100 feet of vertical clearance demands nearly half a mile of approach on each side. This is one of the ways that building a tunnel can be a better option: tunnels greatly reduce the amount of land at the surface needed for approaches. But traditional tunnels built using boring have to be installed somewhat deep into the ground, maintaining significant earth between the roof of the tunnel and the water for stability and safety. Since they’re installed from above, immersed tube tunnels don’t have the same problem. It’s basically a way to get the shortest tunnel possible for a given location, which often means the cheapest tunnel too. That’s a big deal, because tunnels are just about the most expensive way to get from point A to point B. Anything you can do to reduce their size goes a long way. And there are other advantages too. Tunnel boring machines make one shape: a circle. It’s not the best shape for a tunnel, in a lot of ways. Often there’s underutilized space at the top and bottom - excavation that the machinery forced you to perform but that mostly just goes to waste. Immersed tubes can be just about any shape you need, making them ideal for wider tunnels like combined road and rail routes where a circular cross-section isn’t a good fit. One of the other benefits of immersed tubes is that most of the construction happens on dry land. I probably don’t have to say this, but building stuff underground or underwater is complex and difficult work. It requires specialty equipment, added safety measures, and a lot of extra expense. 
Immersed tube sections are built in dry docks or at a shipyard, where it’s much easier to deliver materials and accomplish the bulk of the actual construction work. Once tunnel sections are fabricated, they have to be moved into place, and I think this is pretty clever. These sections can be enormous - upwards of 650 feet or 200 meters long. But they’re still mostly air. So if you put a bulkhead on either side to trap that air inside, they float. You can just flood the dry dock, hook up some tugboats, and tow them out like a massive barge. Interestingly, the transportation method means that the tunnel segments have to be designed to work as a watercraft first. The weight, buoyancy, and balance of each section are engineered to keep them stable in the water and avoid tipping or rolling before they have to be stable as a structure.
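To get a feel for why a concrete box the size of a city block floats, here’s a back-of-the-envelope buoyancy check. All the dimensions and densities are rough assumptions of mine, not the specs of any real tunnel.

```python
# Does a bulkheaded immersed-tube segment float? Rough, assumed numbers.

RHO_CONCRETE = 2450.0   # kg/m3, reinforced concrete
RHO_SEAWATER = 1025.0   # kg/m3

# A rectangular segment, outer dimensions in meters:
length, width, height = 180.0, 30.0, 9.0
wall = 1.0              # m, average thickness of walls, roof, and floor

outer_vol = length * width * height
inner_vol = (length - 2 * wall) * (width - 2 * wall) * (height - 2 * wall)
concrete_vol = outer_vol - inner_vol

mass = concrete_vol * RHO_CONCRETE              # kg, sealed and empty
draft = mass / (RHO_SEAWATER * length * width)  # m, how deep it rides

print(f"segment mass: {mass / 1e6:,.0f} thousand tonnes")
print(f"floats with a draft of {draft:.1f} m of its {height:.0f} m height")
assert draft < height  # less dense than water overall, so it floats
```

With those assumed numbers, the segment’s average density is only about two-thirds that of seawater, which is why ballast has to be added later just to sink it.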
Once a segment arrives at the site, it’s handed over to the apparatus that will set it into place. In most cases, this is a catamaran-style behemoth called a lay barge. Two working platforms are connected by girders, creating a huge floating gantry crane. Internal tanks are filled with water to act as ballast, allowing the segment to sink. But when it gets to the bottom, it doesn’t just sit on the sea or channel floor below. And this is another benefit of immersed tube construction. Especially in navigable waterways, you need to protect a tunnel from damage from strong currents, curious sea life, and ship anchors. So most immersed tube tunnels sit in a shallow trench, excavated using a clamshell or suction dredger. Most waterways have a thick layer of soft sediment at the surface - not exactly ideal as a foundation. This is another reason most boring machines have to work in deeper material. Drilling through soft sediment is prone to problems. Imagine using a power drill to make a nice, clean hole through pudding. But, at least in part due to being full of buoyant air, immersed tubes aren’t that heavy; in fact, in most cases, they’re lighter than the soil that was there in the first place, so the soft sediment really isn’t a problem. You don’t need a complicated foundation. In many cases, it’s just a layer of rock or gravel placed at the bottom of the trench, usually using a fall pipe (like a big garden hose for gravel) to control the location. This layer is then carefully leveled using a steel screed that is dragged over the top like an underwater bulldozer. Even in deep water, the process can achieve a remarkably accurate surface level for the tunnel segments to rest on. The lowering process is the most delicate and important part of construction. The margins are tight because any type of misalignment may make it impossible for the segment to seal against its neighbor. Normally, you’d really want to take your time with this kind of thing, but here, the work usually has to happen in a narrow window to avoid weather, tides, and disruption to ship traffic. The tunnel section is fitted with rubber seals around its face, creating a gasket. Sometimes, the segment will also have a surveying tower that pokes above the water surface, allowing for measurements and fine adjustments to be made as it’s set into place. In some cases, the lowering equipment can also nudge the segment against its neighbor. In other cases, hydraulic jacks are used to pull the segments together. Divers or remotely operated submersibles can hook up the jacks. Or couplers, just like those used on freight trains, can do it without any manual underwater intervention. The jacks extend to couple the free segment to the one already installed, then retract to pull them together, compressing the gasket and sealing the area between the two bulkheads. This joint is the most important part of an immersed tunnel design. It has to be installed blindly and accommodate small movements from temperature changes, settlement, and changes in pressure as water levels go up and down. The gasket provides the initial seal, but there’s more to it. Once in place, valves are opened in the bulkheads to drain the water between them. That actually creates a massive pressure difference between one side of the segment and the other. Hydrostatic force from the water pushes against the end of the tunnel, putting it in even firmer contact with its neighbor and creating a stronger seal. Once in its final place, the segment can be backfilled. The tunnel segment connection is not like a pipe flange, where the joints are securely bolted together, completely restraining any movement. The joints on immersed tunnels have some freedom to move. Of course, there is a restraint for axial compression since the segments butt up against each other. In addition, keys or dowels are usually installed along the joint so that shear forces can transfer between segments, keeping the ends from shifting during settlement or small sideways movements. However, the joints aren’t designed to transfer bending forces, which engineers call moments. And there’s rarely much mechanical restraint against axial tension that might pull one joint away from the other. So you can see why the backfill is so important. It locks each segment into place. In fact, the first layer of backfill is called locking fill for that exact reason. I don’t think they make underwater roller compactors, and you wouldn’t want strong vibrations disturbing the placement of the tunnel segments anyway. So this material is made from angular rock that self-compacts and is placed using fall pipes in careful layers to secure each segment without shifting or disturbing it. After that, general backfill - maybe even the original material if it wasn’t contaminated - can be used in the rest of the trench, and then a layer is placed over the top of everything to protect the backfill and tunnel against currents caused by ships and tides. Sometimes this top layer includes bands of large rock meant to release a ship’s anchor from the bottom, keeping it from digging in and damaging the tunnel. Once a tunnel segment is secured in place, the bulkhead in the previous segment can be removed from the inside, allowing access to the joint. The usual requirement is that access is only allowed when there are two or more bulkheads between workers and the water outside. A second seal, called an omega seal (because of its shape), then gets installed around the perimeter of the joint. And the process keeps going, adding segments to the tunnel until it’s a continuous, open path from one end to the other. When it reaches that point, all the other normal tunnel stuff can be installed, like roadways, railways, lights, ventilation, drainage, and pumps. By the time it’s ready to travel through, there’s really no obvious sign from inside that immersed tube tunnels are any different than those built using other methods. This is a simplification, of course. Every one of these steps is immensely complicated, unique to each jobsite, and can take weeks, months, or even years to complete. And as impressive as the process is, it’s not without its downsides. 
All of this is a simplification, of course. Every one of these steps is immensely complicated, unique to each jobsite, and can take weeks, months, or even years to complete. And as impressive as the process is, it’s not without its downsides.

The biggest one is damage to the sea or river floor during construction. Where boring causes little disturbance at the surface, immersed tube construction requires a lot of dredging. That can disrupt and damage important habitat for wildlife. It also kicks up a lot of sediment into suspension, clouding the water and potentially releasing buried contaminants that were laid down back when environmental laws were less strict. Some of these impacts can be mitigated: sealed clamshell buckets reduce turbidity and the mobilization of contaminated sediment, and construction activities can be scheduled to avoid sensitive periods like the migrations of important species. But some level of disturbance is inevitable and has to be weighed against the benefits of the project.

Despite the challenges, around 150 of these tunnels have been built worldwide. Some of the most famous include the Øresund Link between Denmark and Sweden, the Busan-Geoje tunnel in South Korea, the Marmaray tunnel crossing the Bosphorus in Turkey, the Fort McHenry Tunnel in Baltimore that I mentioned earlier, of course, and the BART Transbay Tube between Oakland and San Francisco. And some of the most impressive projects are under construction now, including the Fehmarn Belt link between Denmark and Germany, which will be the world’s longest immersed tunnel. My friend Fred produced a really nice documentary about that project on The B1M channel if you want to learn more about it, and the project team graciously shared a lot of very cool clips used in this video too.

There’s something about immersed tube tunnels that I can’t quite get over. At a glance, it’s dead simple - basically like assembling Lego blocks. But the reality is that the process is so complicated and intricate, it’s more akin to building a moon base: giant concrete and steel segments floated like ships, carefully sunk into enormous trenches, precisely maneuvered for a perfect fit while completely submerged in sometimes high-traffic areas of the sea, with tides, currents, wildlife, and any number of unexpected marine issues that could pop up. And then you just drive through it like it’s any old section of highway. I love that stuff.
Transitioning from temporary military crossings to permanent infrastructure was a massive leap, and it brought with it a host of engineering challenges. An obvious one is navigation. A bridge that floats on the surface of the water is, by default, a barrier to boats. So, permanent floating bridges need to make room for maritime traffic. Designers have solved this in several ways, and Washington State offers a few good case studies. The Evergreen Point Floating Bridge includes elevated approach spans on either end, allowing ships to pass beneath before the road descends to water level. The original Lacey V. Murrow Bridge took a different approach. Near its center, a retractable span could be pulled into a pocket formed by adjacent pontoons, opening a navigable channel. But not only did the movable span create interruptions to vehicle traffic on this busy highway, it also created awkward roadway curves that caused frequent accidents. The mechanism was eventually removed after the East Channel Bridge was replaced to increase its vertical clearance, providing boats with an alternative route between the two sides of Lake Washington. Further west, the Hood Canal Bridge incorporates truss spans for smaller craft. And it has hydraulic lift sections for larger ships. Naval Base Kitsap is not far away, so sometimes the bridge even has to open for Navy submarines. These movable spans can rise vertically above the pontoons, while adjacent bridge segments slide back underneath. The system is flexible: one side can be opened for tall but narrow vessels, or both for wider ships.

But floating bridges don’t just have to make room for boats. In a sense, they are boats. Many historical spans literally floated on boats lashed together. And that comes with its own complications. Unlike fixed structures, floating bridges are constantly interacting with water: waves, currents, and sometimes even tides and ice. They’re easiest to implement on calm lakes or rivers with minimal flooding, but water is water, and it’s a totally different type of engineering when you’re not counting on firm ground to keep things in place. We don’t just stretch floating bridges across the banks and hope for the best. They’re actually moored in place, usually by long cables and anchors, to keep the structure from being overstressed and to prevent movements that would make the roadway uncomfortable or dangerous. Some anchors use massive concrete slabs placed on the lakebed. Others are tied to piles driven deep into the ground. In particularly deep water or soft soil, anchors are lowered to the bottom with water hoses that jet soil away, allowing the anchor to sink deep into the mud.
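For a sense of scale on those mooring lines, here’s a crude static sketch. All of the loads, spacings, and angles are made-up round numbers for illustration, not values from any actual Washington State design:

```python
# Crude static estimate of the tension in one mooring cable.
# Every number here is an assumption for illustration only.
import math

wind_and_wave_load = 4000.0   # N per meter of bridge, lateral (assumed)
anchor_spacing = 100.0        # m between cables along one side (assumed)
cable_angle_deg = 35.0        # cable slope below horizontal (assumed)

# Lateral load gathered by a single cable
horizontal_load = wind_and_wave_load * anchor_spacing

# Only the horizontal component of cable tension resists that load
tension = horizontal_load / math.cos(math.radians(cable_angle_deg))

print(f"Horizontal load per cable: {horizontal_load / 1e3:.0f} kN")
print(f"Cable tension: {tension / 1e3:.0f} kN")
```

A real design has to account for dynamic wave action, cable stiffness and slack, and extreme load combinations, but even this simple version hints at why the anchors have to be so massive.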
These anchoring systems do double duty, providing both structural integrity and day-to-day safety for drivers, but even with them, floating bridges have some unique challenges. They naturally sit low to the water, which means that in high winds, waves can crash directly onto the roadway, obscuring visibility and creating serious risks to road users. Motion from waves and wind can also cause the bridge to flex and shift beneath vehicles, especially unnerving for drivers unused to the sensation. In Washington State, all the major floating bridges have been closed at various times due to weather. The DOT enforces wind thresholds for each bridge; if the wind exceeds the threshold, the bridge is closed to traffic. Even if the bridge is structurally sound, these closures reflect the reality that in extreme weather, the bridge itself becomes part of the storm.

But we still haven’t addressed the floating elephant in the pool here: the concrete pontoons themselves. Floating bridges have traditionally been made of wood or inflatable rubber, which makes sense if you’re trying to stay light and portable. But permanent infrastructure demands something more durable. It might seem counterintuitive to build a buoyant structure out of concrete, but it’s not as crazy as it sounds. In fact, civil engineering students compete every year in concrete canoe races hosted by the American Society of Civil Engineers. Actually, I was doing a little recreational math to find a way to make this intuitive, and I stumbled upon a fun little fact. If you want to build a neutrally buoyant, hollow concrete cube, there’s a neat rule of thumb you can use. Just take the wall thickness in inches, and that’s your outer dimension in feet. Want 12-inch-thick concrete walls? You’ll need a roughly 12-foot cube. This is only fun because of the imperial system, obviously. It’s less exciting to say that the two dimensions have a roughly linear relationship with a factor of 12. And I guess it’s not really that useful except that it helps to visualize just how feasible it is to make concrete float. Of course, real pontoons have to do more than just barely float themselves. They have to carry the weight of a deck and whatever crosses it with an acceptable margin of safety. That means they’re built much larger than a neutrally buoyant box.
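If you want to check that rule of thumb yourself, the arithmetic is simple enough to script. Here’s a small sketch using typical unit weights for normal-weight concrete and fresh water:

```python
# Checking the rule of thumb: wall thickness in inches ~ outer cube
# dimension in feet for neutral buoyancy.
RHO_CONCRETE = 150.0   # lb/ft^3, normal-weight concrete
RHO_WATER = 62.4       # lb/ft^3, fresh water

def weight_ratio(outer_ft: float, wall_in: float) -> float:
    """Concrete shell weight / displaced water weight (1.0 = neutral)."""
    t = wall_in / 12.0                            # wall thickness in feet
    shell = outer_ft**3 - (outer_ft - 2 * t)**3   # ft^3 of concrete
    return (shell * RHO_CONCRETE) / (outer_ft**3 * RHO_WATER)

for size in (6, 12, 18):
    # Rule of thumb: an N-foot cube with N-inch-thick walls
    print(f"{size}-ft cube, {size}-in walls: "
          f"ratio = {weight_ratio(size, size):.3f}")
```

The ratio lands just a hair above 1.0 in every case, so a cube sized by the rule really is roughly neutrally buoyant - only about one percent on the heavy side.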
But mass isn’t the only issue. Concrete is a reliable material, and if you’ve watched the channel for a while, you know that there are a few things you can count on concrete to do, and one of them is to crack. That’s usually not a big deal for a lot of structures, but it’s a pretty big problem if you’re trying to keep water out of a pontoon. Designers put enormous effort into preventing leaks. Modern pontoons are subdivided into sealed chambers. Watertight doors are installed between the chambers so they can still be accessed and inspected. Leak detection systems provide early warnings if anything goes wrong. And piping is pre-installed with pumps on standby, so if a leak develops, the chambers can be pumped dry before disaster strikes. The concrete recipe itself gets extra attention. Specialized mixes reduce shrinkage, improve water resistance, and resist abrasion. Even temperature control during curing matters. For the replacement of the Evergreen Point Bridge, contractors embedded heating pipes in the base slabs of the pontoons, allowing them to match the temperature of the walls as they were cast. This enabled the entire structure to cool down at a uniform rate, reducing thermal stresses that could lead to cracking. There were also errors during construction, though. A flaw in the post-tensioning system led to millions of dollars in change orders halfway through construction and delayed the project significantly while they worked out a repair.

But there’s a good reason why they were so careful to get the designs right on that project. Of the four floating bridges in Washington state, two of them have sunk. In February 1979, a severe storm caused the western half of the Hood Canal Bridge to lose its buoyancy. Investigations revealed that open hatches allowed rain and waves to blow in, slowly filling the pontoons until that half of the bridge sank. The DOT had to establish a temporary ferry service across the canal for nearly four years while the western span was rebuilt. Then, in 1990, it happened again.

This time, the failure occurred during rehabilitation work on the Lacey V. Murrow Bridge while it was closed. Contractors were using hydrodemolition, high-pressure water jets, to remove old concrete from the road deck. Because the water was considered contaminated, it had to be stored rather than released into Lake Washington. Engineers calculated that the pontoon chambers could hold the runoff safely. To accommodate that, they removed the watertight doors that normally separated the internal compartments. But, when a storm hit over Thanksgiving weekend, water flooded into the open chambers. The bridge partially sank, severing cables on the adjacent Hadley Bridge and delaying the project by more than a year - a potent reminder that even small design or operational oversights can have major consequences for this type of structure.

And we still have a lot to learn. Recently, Sound Transit began testing light rail trains on the Homer Hadley Bridge, introducing a whole new set of engineering puzzles. One is electricity. With power running through the rails, there was concern about stray currents damaging the bridge. To prevent this, the track is mounted on insulated blocks, with drip caps to prevent water from creating a conductive path. And then there’s the bridge movement. Unlike typical bridges, a floating bridge can roll, pitch, and yaw with weather, lake level, and traffic loads. The joints between the fixed shoreline approaches and the floating bridge have to be able to accommodate that movement. It’s usually not an issue for cars, trucks, bikes, or pedestrians, but trains require very precise track alignment. Engineers had to develop an innovative “track bridge” system. It uses specialized bearings to distribute every kind of movement over a longer distance, keeping tracks aligned even as the floating structure shifts beneath them. Testing in June went well, but there’s more to be done before you can ride the Link light rail across a floating highway.

If floating bridges are the present, floating tunnels might be the future. I talked about immersed tube tunnels in a previous video. They’re used around the world, made by lowering precast sections to the seafloor and connecting them underwater. But what if, instead of resting on the bottom, those tunnels floated in the water column? It should be possible to suspend a tunnel with negative buoyancy using surface pontoons or even tether one with positive buoyancy to the bottom using anchors. In deep water, this could dramatically shorten tunnel lengths, reduce excavation costs, and minimize environmental impacts. Norway has actually proposed such a tunnel across a fjord on its western coast, a project that, if realized, would be the first of its kind. Like floating bridges before it, this tunnel will face a long list of unknowns. But that’s the essence of engineering: meeting each challenge with solutions tailored to a specific place and need. There aren’t many locations where floating infrastructure makes sense. The conditions have to be just right - calm waters, minimal ice, manageable tides. But where the conditions do allow, floating bridges, and hopefully their future descendants, open up new possibilities for connection, mobility, and engineering.
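As a closing footnote on that tethered-tunnel idea: the concept lives or dies on the balance between displaced water and structural weight. Here’s a minimal sketch of that balance, assuming a circular cross-section and entirely made-up masses:

```python
# Buoyancy balance for a hypothetical tethered submerged floating
# tunnel. Illustrative numbers only - these concepts are still on
# the drawing board.
import math

RHO_SEAWATER = 1025.0   # kg/m^3
G = 9.81                # m/s^2

outer_diameter = 12.0   # m, circular cross-section (assumed)
mass_per_meter = 90e3   # kg/m, structure plus roadway and fitout (assumed)

# Water displaced per meter of tunnel
displaced_per_meter = RHO_SEAWATER * math.pi * (outer_diameter / 2) ** 2

# Positive uplift means the tethers must hold the tunnel down
uplift_per_meter = (displaced_per_meter - mass_per_meter) * G

tether_spacing = 50.0   # m between tether groups (assumed)
print(f"Net uplift: {uplift_per_meter / 1e3:.0f} kN per meter")
print(f"Load per tether group: {uplift_per_meter * tether_spacing / 1e6:.1f} MN")
```

That steady upward pull, distributed along the whole alignment, is what the tethers and their seabed anchors would have to resist for the life of the structure.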