More from Blog - Practical Engineering
[Note that this article is a transcript of the video embedded above.] Foresthill Bridge soars across the valley of the North Fork of the American River just outside Auburn, California. At more than 700 feet or 200 meters above the canyon floor, it’s the fourth-tallest bridge in the United States. When it opened in 1973, crowds cheered for the impressive new structure. But if you take a closer look, it doesn’t really make any sense. This isn’t an interstate highway or even a major thoroughfare. The road sees only a few thousand vehicles a day, connecting Auburn, an exurb of Sacramento with a population just shy of 14,000, to scattered rural communities and recreation areas in the western foothills of the Sierra Nevadas. And while the American River does occasionally flood, it doesn’t flood 700 feet. Before this, the crossing was basically a low-water bridge. A structure of this magnitude just looks out of place. But it wasn’t just a boondoggle, at least not at the outset. It was built that way for a reason, and the story behind it is not only pretty wild, but it also sits at the hinge point of a major chapter in American infrastructure. I’m Grady, and this is Practical Engineering. California’s Central Valley is one of the world’s great agricultural regions: over 400 miles long, more than 50 miles wide, this remarkably fertile area is nearly half the size of England. The city of Sacramento sits near its center, right where the Sacramento and American Rivers meet. To manage and distribute water across this enormous landscape, the federal government launched the Central Valley Project in 1933, a sweeping effort by the U.S. Bureau of Reclamation to store water in the wetter northern part of the valley and distribute it to the drier south. In the process, the system would also generate hydropower and reduce flood risk for growing urban centers. I’m glossing over a lot here. 
The history of California is steeped in water issues, and even just the Central Valley Project is nearly a century of details. But, critically, Folsom Dam was one of the first big components of the plan. Built in 1955 on the American River, the concrete gravity dam provided significant flood protection to the City of Sacramento. However, it was constructed relatively early in our understanding of basin-scale hydrology, when great uncertainty still surrounded the frequency and magnitude of flooding over long periods of time. It became clear pretty quickly that Folsom Dam didn’t quite offer as much flood protection as was originally promised. Plus, because Folsom had to keep its flood pool empty to handle potential inflows, its ability to store water for irrigation or municipal supply purposes was somewhat limited. The answer to these problems, at least according to the federal government, was Auburn Dam, authorized by Congress in 1968. The new structure would sit upstream of Folsom and control the variable flows of the North and Middle Forks of the American River. It would be the tallest dam in California and one of the tallest in the country. And work began in earnest in the early 1970s. One of the first steps in the process was rerouting the American River. Crews built a large cofferdam and carved a diversion tunnel through the canyon wall. With the water redirected, they could begin drying out the bend in the river where the huge new dam would eventually sit. Once the site was dried out, crews began exploring the underlying geology more thoroughly. They drilled boreholes, excavated tunnels and shafts, and surveyed the rock that would serve as the dam’s foundation. The site’s geology turned out to be more complex than expected. Some zones of rock were more compressible than others, which could lead to dangerous stress concentrations in the dam. 
And, there were a lot of joints and fissures in the rock mass, making it more challenging to predict how they would behave under extreme loads, in addition to creating paths for water. So the next phase of the project was a major foundation treatment program starting in 1974. This mainly involved pressure grouting fractures to reinforce weak zones against the enormous weight of the structure and to make the geology more watertight, preventing seepage from flowing under the dam. With major construction works underway, anticipation for the reservoir was growing. Around the future rim, land values soared, and developers rushed to stake claims. Lakefront homes were planned. Entire communities emerged, built on the promise of a shining new shoreline. Then, in August 1975, a magnitude 5.9 earthquake struck near Oroville Dam, only about 50 miles or 80 kilometers away from the site. The quake only caused minor damage to structures in the area, but it rattled confidence in the Auburn project. The geology of the western Sierra Nevadas had long been considered stable. But the Oroville earthquake introduced a troubling possibility: that the loading and filling of large reservoirs could trigger seismic events in the area. This phenomenon, known as reservoir-induced seismicity, is still not well understood even to this day. The pressure of water infiltrating bedrock and the weight of a reservoir can change the balance of forces along faults, potentially triggering movement. You know, when Oroville is full, that’s roughly 10 trillion pounds of force or 4 trillion kilograms of mass. It’s a staggering amount. You can imagine how that might affect the underlying geology. The Auburn Dam, as a thin concrete arch, in contrast to the concrete gravity dam at Folsom or the earthfill embankment at Oroville, would be especially vulnerable to earthquakes. Thin-arch dams rely on the canyon walls to resist the thrust of the structure. 
In fact, I’ve made a video all about the topic you can check out after this! If one side shifts even a little during a quake, the results could be catastrophic. In April 1976, a report by the Association of Engineering Geologists concluded that an earthquake like the one at Oroville could cause the proposed Auburn Dam to catastrophically fail. It was back to the drawing board for the project, even as the foundation grouting program continued. And then the project was shaken again. That same year, the newly completed Teton Dam in Idaho collapsed during its first filling, killing 11 people and causing billions in damage. It had been built by the same agency, the Bureau of Reclamation. Concern continued to mount about the safety of Auburn Dam, which would have catastrophic consequences for the thousands of Californians downstream if it were to fail. It was all enough to bring Auburn’s momentum to a halt. While dam construction paused, one aspect of the project had already been finished: Foresthill Bridge. With a cofferdam on the river and the diversion tunnel only sized for smaller floods, there was a risk of overtopping the existing bridge, cutting off access between Auburn and the Sierra foothills. So, the Bureau of Reclamation decided to get a head start on a project that seemed inevitable anyway: a new bridge, permanent and high enough to span the reservoir once it filled. If they were going to build a new bridge, they figured they might as well build it right the first time. The result was a striking steel cantilever bridge with two slender concrete piers soaring skyward from the canyon floor. [Actually, there was another bridge planned over the Middle Fork of the American River - the Ruck-a-Chucky Bridge. It was a wild idea: a curved cable-stayed bridge where all the cables are anchored in the hillsides rather than tall towers. But while that project was shelved, Foresthill made it all the way through design and construction.] 
At the time of its opening in 1973, it was the second-highest bridge in the United States. But as time went on, it became increasingly clear they had jumped the gun. By 1980, engineers floated two new dam designs that could withstand potential earthquakes. Both would be shifted slightly downstream from the original site. But by then, the tide of public and government support for the dam had turned. Construction costs had ballooned, and Auburn Dam was looking less feasible every day. As originally proposed, the structure would be even larger than Hoover Dam, but store less than 10% of Lake Mead’s volume. Meanwhile, upgrades to Folsom Dam and improved levees around Sacramento offered far cheaper ways to reduce the flood risk that was the major impetus for the dam in the first place. New hydrologic data also suggested that earlier flow estimates had been overly optimistic, reducing the dam’s value for water conservation. The benefits of Auburn Dam were shrinking as the costs grew. It was turning into an incredibly expensive solution in search of a problem. At the same time, environmental and advocacy groups were gaining momentum. The project would flood canyons used for whitewater rafting and kayaking. It would drown ecosystems, inundate archaeological sites, and destroy long segments of the wild and scenic forks of the American River. It became clearer and clearer that the ends simply couldn’t justify the means. And yet, the idea never fully went away. In 1986, a massive flood hit the area. Water backed up at the diversion tunnel at Auburn, overtopped the cofferdam, and caused it to fail. Downstream levees were breached, and much of Sacramento flooded. For a moment, the momentum behind Auburn Dam and its promise of flood protection returned. But, it later became clear that the flood wasn’t entirely a natural disaster. The Bureau hadn’t followed the operating guidelines at Folsom Dam, worsening conditions downstream. 
And by then, grassroots opposition, cost concerns, and shifting priorities had all but put the Auburn Dam project to bed. Various proposals resurfaced over the years, including the idea of a “dry dam” that would only hold water during floods, but none gained much traction. With its many iterations and proposals, the project became known as the dam that wouldn’t die. But in 2008, the state of California revoked the Bureau’s water rights permit for the project, maybe not sealing its fate completely, but at least burying it several feet deeper. This story really gets to the heart of the challenge with large-scale public works projects. No matter how you configure them, there are big losers and big winners. There’s no doubt that a dam across the American River upstream of Folsom could provide significant benefits to the public: flood control, water supply, hydropower, recreational opportunities, or some combination of them all. But those benefits have to be weighed against real costs: environmental damage, staggering capital investment, long-term maintenance, the inherent risk of catastrophic failure, and the social toll of displacement and disruption. The mid-20th century was the heyday of American dam building, an era driven by ambition and optimism, but also by uncertainty. We didn’t have enough historical data to fully understand river systems. We couldn’t yet grasp the long-term consequences of altering them. And we couldn’t see into the future to know what the true impacts of these structures would be or what the cost of keeping them in good shape might amount to. Since then, we have a lot more experience with huge multi-purpose reservoirs. And it seems, in general, that the more we learn, the more the answer to whether they’re worth it seems to be: maybe not. And that maybe turns into a probably when you consider that all the best sites are already taken. 
New Melones Dam, completed by the Bureau of Reclamation in 1979, not too far from Auburn, faced a lot of similar controversy and pushback. Although the project was eventually completed, the fight was bitter, and its legacy so far is mixed. The project is widely considered to be the last great American dam. At least, great in size, if not in public sentiment. No other reservoir of that scale has been built in the U.S. since. And with the Auburn Dam project mostly dead, it seems doubtful there ever will be. The American River continued flowing through the diversion tunnel until 2007, when a new pump station and restoration project returned the river to its original channel. Kayakers can now navigate downstream, and even have some new features at the pump station to choose from: the artificial rapids on the left or the screen channel on the right. After more than three decades, the river was back in its place, tying a bow on a dam that was never built. And yet, just a few miles upstream, the Foresthill Bridge still stands, dramatic, overbuilt, and strangely out of sync with its surroundings. And we’re still kind of stuck taking care of this bridge, whose scale is so out of proportion with its purpose. In the 2010s, the bridge underwent a major seismic retrofit to improve its safety and make future inspections easier. Most recently, it was part of a nationwide program inspecting bridges built with T-1 steel, an alloy that, in some cases, has shown concerning cracking at welds. The I-40 bridge crack in Memphis, which I covered in an earlier video, triggered the effort. And there have been quite a few defects found in bridges since then, so here’s hoping that Foresthill doesn’t make the list. It’s a cool structure in its own right. But it stands for more than just an engineering achievement. Auburn Dam left a lot of scars, both on the physical landscape and the political one. But it also left this bridge that became more than just an out-of-place oddity. 
In a sense, it’s become a monument to the end of an era in US major public works projects, and, hopefully, a tribute to the caution and care that will shape the next one.
[Note that this article is a transcript of the video embedded above.] Flaming Gorge Dam rises from the Green River in northern Utah like a concrete wedge driven into the canyon, anchored against the sheer rock walls that flank it. It’s quintessential, in a way. It’s what we picture when we think about dams: a hulking, but also somehow graceful, wall of concrete stretching across a narrow rocky valley. But to dam engineers, there’s nothing quintessential about it. So-called arch dams are actually pretty rare. For reference, the US has about 92,000 dams listed in the national inventory. I couldn’t find an exact number, but based on a little bit of research, I estimate that we have maybe around 50 arch dams - it’s less than a tenth of a percent. The only reason we think of arch dams as archetypal is because they’re so huge. I counted 11 in the US that have their own visitor center. There just aren’t that many works of infrastructure that double as tourist destinations, and the reason for it is, I think, kind of interesting. Because an arch dam isn’t just an engineering solution to holding back water, and it’s not just a solution to holding back a lot of water. It’s all about height, and I built a little demo to show you what I mean. I’m Grady, and this is Practical Engineering. Engineers love categories, and dams are no exception. You can group them in a lot of ways, but mostly, we care about how they handle the incredible force of water they hold back. Embankment dams do it with earth or rock, relying on friction between the individual particles that make up the structure. Gravity dams do it with weight. Let me show you an example. I have my tried and trusted acrylic flume with a small plastic dam. Once this is all set up, I can start filling up the reservoir. This little dam is a little narrower than the flume. It doesn’t touch the sides, so it leaks a bit. The reason for that will be clear in a moment. And hopefully you can see what’s about to happen. 
This gravity dam doesn’t have much gravity in it, so it doesn’t take much water at all before you get a failure. I’m counting failure as the first sign of movement, by the way. That’s when the stabilizing forces are overcome by the destabilizing ones. And the little dam by itself could hold until my reservoir was about a quarter of the way to the top. Gravity dams get their stability against sliding from… you guessed it… friction. Bet you thought I was going to say gravity. And actually, it kind of is gravity, since frictional resistance is a function of just two variables: the normal force (in other words, the weight of the structure) and a coefficient that depends on the two materials touching. Engineers analyze the stability of gravity dams in cross-section, essentially taking a small slice of the structure. You want every slice to be able to support itself. That’s why I didn’t want the demo touching the sides of the flume; it would add resistance that doesn’t actually exist in a cross-section. The destabilizing force is hydrostatic pressure from the reservoir, which increases with depth. And the stabilizing force is friction. There are some complexities to this that we’ll get into, but very generally, as long as you have more friction than pressure, you’re good; you have a stable structure. So let’s add some normal force to the demo and see what happens. You can see my little reservoir gets a little higher before the dam fails, about halfway to the top. And we can try it again with more weight. But the result gets a little more interesting… the dam didn’t actually slide this time, but it still failed. Turns out gravity dams have two major failure modes: sliding and overturning. Resistance to sliding comes from friction, which really doesn’t depend on how the weight of the dam is distributed. That’s not true for overturning failures. Let’s look back at our cross-section. For a unit width of dam, the hydrostatic pressure from the reservoir looks like this. 
Pressure increases with depth. And the area under this line is the total force pushing the dam downstream. We can simplify that distribution and treat it like it’s a single force, and it turns out when you do that, the force acts a third of the way up the total depth of water. Most dams want to rotate about the downstream toe, so you have a destabilizing force offset from the point of rotation. In other words, you have a torque, also called a moment. The dam has to create an opposite moment around that point to remain stable. Moment or torque is calculated as the force multiplied by its perpendicular distance from the point of rotation. So, the further the center of mass is from the downstream toe, the more stable the structure is, and the demo shows it too. Here’s where we left the weights the last time, and let’s see it happen again. The reservoir makes it about two-thirds of the way up the walls before the dam overturns. Let’s make a simple shift. Just move the weights further upstream and try again. It’s not a big difference. The reservoir reaches about three-quarters the way up before we see a sliding failure, but shifting the weights did increase the stability. And this is why a lot of gravity dams have a fairly consistent shape, with most of the weight concentrated on the upstream side, and usually a sloped or stepped downstream face. Interestingly, you can use the force of water against itself in a way. Watch what happens when I turn my little model around. Now the hydrostatic pressure applies both a destabilizing and stabilizing force, so you get more resistance for a given depth. A lot of deployable temporary storm barriers and cofferdam systems take advantage of this kind of configuration. You can imagine if I extended the base even further, I could create a structure that was self-stable just from its geometry alone. The weight of the water on the footing would overcome the lateral pressure. But there’s a catch to this. 
This is fully stable now, but watch what happens when I give the dam just a bit of a tilt. All of a sudden, it’s no longer stable. This might seem kind of intuitive, but I think it’s important to explain what’s actually going on. Hydrostatic pressure from the reservoir doesn’t only act on the face of a dam. With smooth plastic on smooth plastic, you get a pretty nice seal, but as soon as even a tiny gap opens, water gets underneath. Now there’s upward pressure on the bottom of the dam as well. If you’re depending on the downward force of a dam from its weight for stability, it’s easy to see why an upward force is a bad thing. That’s why the effect is so dramatic in the example with the upstream footing specifically. In that case, the downward pressure of the reservoir is acting as a stabilizing force, but if water can get underneath that footing, it basically cancels out. The pressure on the bottom is the same as the pressure on the top. But this isn’t only an issue in that case. The ground isn’t waterproof. In fact, I’ve done a video all about the topic. Soil and rock work more like a sponge than a solid material, and water can flow through them. That’s how we get aquifers and wells and springs and such. But it’s a problem for gravity dams, because water can seep below the structure and apply pressure to the bottom, essentially counteracting its weight. We call it uplift. Looking back at the cross-section, we can estimate this. Of course, you have the triangular pressure distribution along the upstream face. But at this point you have the full hydrostatic pressure also pushing upward. And at the downstream toe, you have no pressure (it’s exposed to the atmosphere). So, now you have a pressure distribution below the dam that looks like this. Of course, this part can get a lot more complicated since most dams don’t sit flush with the ground, and many are equipped with drains and cutoff walls, so definitely go check that other video out if you want to learn more. 
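Putting those cross-section checks together, here’s a minimal sketch in Python of the bookkeeping described above: friction resisting sliding, moments about the downstream toe resisting overturning, and uplift subtracting from the dam’s effective weight. Every number is an illustrative assumption, not a value from the demo or any real dam, and it uses the simple full-base triangular uplift with no drains or cutoff walls.

```python
# Simplified stability checks for a unit-width slice of a gravity dam.
# All inputs are illustrative assumptions -- real analyses include drains,
# cutoff walls, earthquake loads, and many more load cases.
GAMMA_W = 9.81  # unit weight of water, kN/m^3

def stability_checks(weight_kn, base_m, weight_arm_m, mu, depth_m):
    """Return (sliding FS, overturning FS) for one slice of dam."""
    # Lateral hydrostatic force: area of the triangular pressure diagram,
    # acting at one-third of the water depth above the base.
    lateral = 0.5 * GAMMA_W * depth_m**2
    # Uplift: triangular distribution from full head at the heel to zero
    # at the toe, integrated over the base width.
    uplift = 0.5 * GAMMA_W * depth_m * base_m
    effective_weight = weight_kn - uplift  # uplift cancels part of the weight
    sliding_fs = mu * effective_weight / lateral
    # Moments about the downstream toe. The uplift resultant (triangle
    # peaking at the heel) acts two-thirds of the base width from the toe.
    driving = lateral * depth_m / 3 + uplift * (2 * base_m / 3)
    resisting = weight_kn * weight_arm_m
    return sliding_fs, resisting / driving

# A slice weighing 4500 kN per meter of width, 15 m base, center of mass
# 9 m upstream of the toe, concrete-on-rock friction coefficient 0.7,
# and an 18 m-deep reservoir:
sliding, overturning = stability_checks(4500, 15.0, 9.0, 0.7, 18.0)
print(f"sliding FS: {sliding:.2f}, overturning FS: {overturning:.2f}")
```

A factor of safety above 1 means the stabilizing effects win; notice how much the uplift term eats into the sliding margin for these made-up numbers.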
But let me show you the issue this causes with some recreational math on our cross-sectional slice of the dam. The taller the dam, the greater the uplift force. That happens linearly. In other words, the force is proportional to the depth of the reservoir. But look at the lateral force. Again, remember it’s the area under this triangle. Maybe you remember that formula: one-half times base times height. Well, the height is the depth of the water. And the base is also a function of the depth. More specifically, it’s the unit weight of water times depth. Multiply it together, and you see the challenge: the force increases as a function of the depth squared. So for every unit of additional height you want out of a gravity dam, you need significantly more weight to resist the forces, which means more material and thus a lot more cost. Hopefully all this exposition is starting to reveal a solution to this rapid divergence of stability and loads as a reservoir increases in height. Dams don’t actually float in space like my demonstration and graphics show. You know, by necessity, they extend across the entire valley and usually key into the abutments on either side. Naturally, that connection at the sides is going to offer some resistance to the forces dams need to withstand. And if you can count on that resistance, you can significantly lower the mass, and thus the cost, of the structure. But, again, this gets complicated. Let’s go back to the demo. Now I’m going to replace my gravity dam with something much simpler. Just a sheet of aluminum flashing, and, to simulate that resistance provided by socketing the structure into the earth, I’ve taped it to the bottom and sides… with some difficulty, actually. When I fill up the reservoir with water, it holds just fine. There’s a little leaking past my subpar tape job, but this is a fully stable structure. And I think the comparison here is pretty stark. 
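The depth-squared growth from that recreational math is quick to verify numerically. A small sketch, assuming the standard unit weight of water of about 9.81 kN/m³:

```python
# Lateral hydrostatic force on a unit-width dam face:
# F = 1/2 * gamma * h^2, the area under the triangular pressure diagram.
GAMMA_W = 9.81  # unit weight of water, kN/m^3

def lateral_force(depth_m):
    return 0.5 * GAMMA_W * depth_m**2

for h in (10, 20, 40):
    print(f"depth {h:>2} m -> {lateral_force(h):8.0f} kN per meter of width")

# Doubling the depth quadruples the lateral force, while uplift
# (proportional to depth) only doubles.
assert abs(lateral_force(20) - 4 * lateral_force(10)) < 1e-6
```

Each doubling of reservoir depth multiplies the lateral load by four, which is exactly why a taller gravity dam needs disproportionately more mass.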
When you can develop resistance from the sides, you can get away with a lot less dam. But it’s harder than you might think to do that. For one, the natural soil or rock at a dam site might not be all that strong. The banks of rivers aren’t generally known for their stability, so the prospect of transferring enormous amounts of force into them rarely makes a lot of engineering sense. But the other challenge is in the dam itself. Take a look back at this demo. See how my dam is bending under the force of the water. It’s holding there, but, you know, we don’t actually build dams out of aluminum flashing. Resisting loads in this way basically treats the dam like a beam, like a sideways bridge girder. Except, unlike girder bridges that usually only span up to a few hundred feet, dams are often much longer. Even the stiffest modern materials, like prestressed concrete boxes, would just deflect too much under load to transfer all the hydrostatic pressure across a valley into the abutments. Plus we usually don’t like to rely on steel too much in dams because of issues with corrosion and longevity. So where a typical beam experiences both tensile and compressive stress on opposite sides, a dam really needs to transfer all that load while creating only compressive stress in the material. I’m sure you see where I’m going with this. How have we been building bridges for ages from materials like masonry where tensile stress isn’t an option? It’s arches! The arch is a special shape in engineering because you can transfer loads by putting the material in compression only, allowing for simpler, cheaper, and longer-lasting materials like masonry and concrete. You basically co-opt the geology for support, reducing the need for a massive structure. For completeness’s sake, let me show you how it works in the demo. I’ve formed a little arch from my thin sheet of aluminum. Now when I fill up the reservoir, there’s no deflection like the previous example. 
And again, side by side, it’s easy to see the benefits here. You get a lot more efficiency out of your materials than you do with an earthen embankment dam or a gravity structure. Of course, there are some drawbacks here. For one, arches create horizontal forces at the supports called thrusts that have to be resisted. Sites that use this design really require strong, competent rock in the abutments to withstand the enormous loads. And just like with bridges, the span matters. The wider the valley, the bigger the arch needs to be, so these dams generally only make sense in deep gorges and steep, narrow canyons. The engineering is a lot more complicated, too. You can’t use a simple 2D cross-section to demonstrate stability. The structural behavior is inherently three-dimensional, which is tougher to characterize, especially when you consider unusual conditions like earthquakes and temperature effects. And since they’re lighter, arch dams don’t resist uplift forces very well, making foundation drainage systems more critical. All this means that it’s really only a solution that makes economic sense in a narrow range of circumstances, one of the most important being height. For smaller dams, the additional complexity and expense of designing and building an arch aren’t justified by the structural efficiency. Gravity and embankment dams are much more adaptable to a wider range of site conditions. And there are other types of dams, too, that blend these ideas. Multiple-arch dams use a series of smaller arches supported by buttresses, dividing the span into more manageable components. Even what is perhaps the most famous arch dam in the world - Hoover Dam - isn’t a pure arch structure. Technically, it’s a gravity-arch dam, meaning it resists part of the water load through mass while also distributing the forces into the canyon through arch action. 
The proportions are carefully balanced to take advantage of the unique site conditions and relatively wider canyon than most arch dams are built in. And so, when you look at the tallest dams on Earth, one structural form dominates. By my estimation, around 40 percent of the tallest 200 dams in the world incorporate an arch into their design. There aren’t that many places where it makes sense, but when you compare what it takes to hold a reservoir back in a narrow canyon valley, I think the case for arches is pretty clear.
[Note that this article is a transcript of the video embedded above.] In the early 1900s, Seattle was a growing city hemmed in by geography. To the west was Puget Sound, a vital link to the Pacific Ocean. To the east, Lake Washington stood between the city and the farmland and logging towns of the Cascades. As the population grew, pressure mounted for a reliable east–west transportation route. But Lake Washington wasn’t easy to cross. Carved by glaciers, the lake is deceptively deep, over 200 feet or 60 meters in some places. And under that deep water sits an even deeper problem: a hundred-foot layer of soft clay and mud. Building bridge piers all the way to solid ground would have required staggeringly sized supports. The cost and complexity made it infeasible to even consider. But in 1921, an engineer named Homer Hadley proposed something radical: a bridge that didn’t rest on the bottom at all. Instead, it would float on massive hollow concrete pontoons, riding on the surface like a ship. It took nearly two decades for his idea to gain traction, but with the New Deal and Public Works Administration, new possibilities for transportation routes across the country began to open up. Federal funds flowed, and construction finally began on what would become the Lacey V. Murrow Bridge. When it opened in 1940, it was the first floating concrete highway of its kind, a marvel of engineering and a symbol of ingenuity under constraint. But floating bridges, by their nature, carry some unique vulnerabilities. And fifty years later, this span would be swallowed by the very lake it crossed. In the decades between and since, the Seattle area has kind of become the floating concrete highway capital of the world. That’s not an official designation, at least not yet, but there aren’t that many of these structures around the globe. And four of the five longest ones on Earth are clustered in one small area of Washington state. 
You have Hood Canal, Evergreen Point, Lacey V Murrow, and its neighbor, the Homer M. Hadley Memorial Bridge, named for the engineer who floated the idea in the first place. Washington has had some high-profile failures, but also some remarkable successes, including a test for light rail transit over a floating bridge just last month in June 2025. It's a niche branch of engineering, full of creative solutions and unexpected stories. So I want to take you on a little tour of the hidden engineering behind them. I’m Grady, and this is Practical Engineering. Floating bridges are basically as old as recorded history. It’s not a complicated idea: place pontoons across a body of water, then span them with a deck. For thousands of years, this straightforward solution has provided a fast and efficient way to cross rivers and lakes, particularly in cases where permanent bridges were impractical or when the need for a crossing was urgent. In fact, floating bridges have been most widely used in military applications, going all the way back to Xerxes crossing the Dardanelles in 480 BCE. They can be made portable, quick to erect, flexible to a wide variety of situations, and they generally don’t require a lot of heavy equipment. There are countless designs that have been used worldwide in various military engagements. But most floating bridges, both ancient and modern, weren’t meant to last. They’re quick to put up, but also quick to take out, either on purpose or by Mother Nature. They provide the means to get in, get across, and get out. So they aren’t usually designed for extreme conditions. Transitioning from temporary military crossings to permanent infrastructure was a massive leap, and it brought with it a host of engineering challenges. An obvious one is navigation. A bridge that floats on the surface of the water is, by default, a barrier to boats. So, permanent floating bridges need to make room for maritime traffic. 
Designers have solved this in several ways, and Washington State offers a few good case studies. The Evergreen Point Floating Bridge includes elevated approach spans on either end, allowing ships to pass beneath before the road descends to water level. The original Lacey V. Murrow Bridge took a different approach. Near its center, a retractable span could be pulled into a pocket formed by adjacent pontoons, opening a navigable channel. But, not only did the movable span create interruptions to vehicle traffic on this busy highway, it also created awkward roadway curves that caused frequent accidents. The mechanism was eventually removed after the East Channel Bridge was replaced to increase its vertical clearance, providing boats with an alternative route between the two sides of Lake Washington. Further west, the Hood Canal Bridge incorporates truss spans for smaller craft. And it has hydraulic lift sections for larger ships. The US Naval Base Kitsap is not far away, so sometimes the bridge even has to open for Navy submarines. These movable spans can raise vertically above the pontoons, while adjacent bridge segments slide back underneath. The system is flexible: one side can be opened for tall but narrow vessels, or both for wider ships. But floating bridges don’t just have to make room for boats. In a sense, they are boats. Many historical spans literally floated on boats lashed together. And that comes with its own complications. Unlike fixed structures, floating bridges are constantly interacting with water: waves, currents, and sometimes even tides and ice. They’re easiest to implement on calm lakes or rivers with minimal flooding, but water is water, and it’s a totally different type of engineering when you’re not counting on firm ground to keep things in place. We don’t just stretch floating bridges across the banks and hope for the best. 
They’re actually moored in place, usually by long cables and anchors, to keep the structure from being overstressed and to prevent movements that would make the roadway uncomfortable or dangerous. Some anchors use massive concrete slabs placed on the lakebed. Others are tied to piles driven deep into the ground. In particularly deep water or soft soil, anchors are lowered to the bottom with water hoses that jet soil away, allowing the anchor to sink deep into the mud. These anchoring systems do double duty, providing both structural integrity and day-to-day safety for drivers, but even with them, floating bridges have some unique challenges. They naturally sit low to the water, which means that in high winds, waves can crash directly onto the roadway, obscuring visibility and creating serious risks for road users. Motion from waves and wind can also cause the bridge to flex and shift beneath vehicles, which is especially unnerving for drivers unused to the sensation. In Washington State, all the major floating bridges have been closed at various times due to weather. The DOT enforces wind thresholds for each bridge; if the wind exceeds the threshold, the bridge is closed to traffic. Even if the bridge is structurally sound, these closures reflect the reality that in extreme weather, the bridge itself becomes part of the storm.
Actually, I was doing a little recreational math to find a way to make this intuitive, and I stumbled upon a fun little fact. If you want to build a neutrally buoyant, hollow concrete cube, there’s a neat rule of thumb you can use. Just take the wall thickness in inches, and that’s your outer dimension in feet. Want 12-inch-thick concrete walls? You’ll need a roughly 12-foot cube. This is only fun because of the imperial system, obviously. It’s less exciting to say that the two dimensions have a roughly linear relationship with a factor of 12. And I guess it’s not really that useful except that it helps to visualize just how feasible it is to make concrete float. Of course, real pontoons have to do more than just barely float themselves. They have to carry the weight of a deck and whatever crosses it with an acceptable margin of safety. That means they’re built much larger than a neutrally buoyant box. But mass isn’t the only issue. Concrete is a reliable material and if you’ve watched the channel for a while, you know that there are a few things you can count on concrete to do, and one of them is to crack. Usually not a big deal for a lot of structures, but that’s a pretty big problem if you’re trying to keep water out of a pontoon. Designers put enormous effort into preventing leaks. Modern pontoons are subdivided into sealed chambers. Watertight doors are installed between the chambers so they can still be accessed and inspected. Leak detection systems provide early warnings if anything goes wrong. And piping is pre-installed with pumps on standby, so if a leak develops, the chambers can be pumped dry before disaster strikes. The concrete recipe itself gets extra attention. Specialized mixes reduce shrinkage, improve water resistance, and resist abrasion. Even temperature control during curing matters. 
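That cube rule of thumb is easy to sanity-check numerically. The sketch below solves the exact shell-volume equation for a closed hollow cube, assuming textbook densities of roughly 150 lb/ft³ for concrete and 62.4 lb/ft³ for fresh water (both assumed values, and ignoring rebar, joints, and any payload):

```python
# Quick check of the "inches of wall -> feet of cube" rule of thumb for a
# neutrally buoyant hollow concrete cube. Assumed densities: ~150 lb/ft³
# for plain concrete and ~62.4 lb/ft³ for fresh water.

RHO_CONCRETE = 150.0  # lb/ft³
RHO_WATER = 62.4      # lb/ft³

def neutral_cube_side(wall_in: float) -> float:
    """Outer side length (ft) of a closed hollow cube with wall thickness
    wall_in (inches) whose concrete shell weighs exactly the water it
    displaces when fully submerged."""
    t = wall_in / 12.0  # wall thickness in feet
    # Neutral buoyancy: rho_c * (a^3 - (a - 2t)^3) = rho_w * a^3.
    # Dividing by a^3 and letting x = t/a gives (1 - 2x)^3 = 1 - rho_w/rho_c.
    x = (1.0 - (1.0 - RHO_WATER / RHO_CONCRETE) ** (1.0 / 3.0)) / 2.0
    return t / x

for wall in (6, 12, 18):
    print(f"{wall:2d}-inch walls -> {neutral_cube_side(wall):5.1f}-ft cube")
```

With those densities the exact factor comes out to about 12.2, which is why the inches-to-feet shorthand works so neatly: 12-inch walls call for roughly a 12-foot cube.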
For the replacement of the Evergreen Point Bridge, contractors embedded heating pipes in the base slabs of the pontoons, allowing them to match the temperature of the walls as they were cast. This enabled the entire structure to cool down at a uniform rate, reducing thermal stresses that could lead to cracking. There were also errors during construction, though. A flaw in the post-tensioning system led to millions of dollars in change orders halfway through construction and delayed the project significantly while they worked out a repair. But there’s a good reason why they were so careful to get the designs right on that project. Of the four floating bridges in Washington state, two of them have sunk. In February 1979, a severe storm caused the western half of the Hood Canal Bridge to lose its buoyancy. Investigations revealed that open hatches allowed rain and waves to blow in, slowly filling the pontoons until that half of the bridge sank. The DOT had to establish a temporary ferry service across the canal for nearly four years while the western span was rebuilt. Then, in 1990, it happened again. This time, the failure occurred during rehabilitation work on the Lacey V. Murrow Bridge while it was closed. Contractors were using hydrodemolition, high-pressure water jets, to remove old concrete from the road deck. Because the water was considered contaminated, it had to be stored rather than released into Lake Washington. Engineers calculated that the pontoon chambers could hold the runoff safely. To accommodate that, they removed the watertight doors that normally separated the internal compartments. But, when a storm hit over Thanksgiving weekend, water flooded into the open chambers. The bridge partially sank, severing cables on the adjacent Hadley Bridge and delaying the project by more than a year - a potent reminder that even small design or operational oversights can have major consequences for this type of structure.
And we still have a lot to learn. Recently, Sound Transit began testing light rail trains on the Homer Hadley Bridge, introducing a whole new set of engineering puzzles. One is electricity. With power running through the rails, there was concern about stray currents damaging the bridge. To prevent this, the track is mounted on insulated blocks, with drip caps to prevent water from creating a conductive path. And then there’s the bridge movement. Unlike typical bridges, a floating bridge can roll, pitch, and yaw with weather, lake level, and traffic loads. The joints between the fixed shoreline and the bridge have to be able to accommodate movement. It’s usually not an issue for cars, trucks, bikes, or pedestrians, but trains require very precise track alignment. Engineers had to develop an innovative “track bridge” system. It uses specialized bearings to distribute every kind of movement over a longer distance, keeping tracks aligned even as the floating structure shifts beneath it. Testing in June went well, but there’s more to be done before you can ride the Link light rail across a floating highway. If floating bridges are the present, floating tunnels might be the future. I talked about immersed tube tunnels in a previous video. They’re used around the world, made by lowering precast sections to the seafloor and connecting them underwater. But what if, instead of resting on the bottom, those tunnels floated in the water column? It should be possible to suspend a tunnel with negative buoyancy using surface pontoons or even tether one with positive buoyancy to the bottom using anchors. In deep water, this could dramatically shorten tunnel lengths, reduce excavation costs, and minimize environmental impacts. Norway has actually proposed such a tunnel across a fjord on its western coast, a project that, if realized, would be the first of its kind. Like floating bridges before it, this tunnel will face a long list of unknowns. 
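The tethered-tunnel concept comes down to a simple force balance: if the tube displaces more water than it weighs, the surplus buoyancy puts the anchor tethers in tension. Here is a rough per-meter sketch with entirely hypothetical numbers (a 12-meter-diameter tube at 80 tonnes per meter, seawater at 1025 kg/m³), not figures from the Norwegian proposal:

```python
# Rough per-meter force balance for a submerged floating tube tunnel.
# All numbers are illustrative assumptions, not from any real design.

import math

RHO_SEAWATER = 1025.0   # kg/m³
G = 9.81                # m/s²

def net_uplift_per_meter(outer_diameter_m: float, mass_kg_per_m: float) -> float:
    """Buoyant force minus weight, per meter of tunnel (N/m).
    Positive means the tunnel floats up and its tethers are in tension."""
    displaced_kg = RHO_SEAWATER * math.pi * (outer_diameter_m / 2.0) ** 2
    return (displaced_kg - mass_kg_per_m) * G

# Hypothetical 12 m outer-diameter tube weighing 80 tonnes per meter:
uplift = net_uplift_per_meter(12.0, 80_000.0)
print(f"{uplift / 1000.0:.0f} kN of uplift per meter of tunnel")
```

Even these made-up numbers show the scale of the problem: hundreds of kilonewtons of uplift on every meter of tunnel is what the anchors and tethers would have to resist, continuously, for the life of the structure.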
But that’s the essence of engineering: meeting each challenge with solutions tailored to a specific place and need. There aren’t many locations where floating infrastructure makes sense. The conditions have to be just right - calm waters, minimal ice, manageable tides. But where the conditions do allow, floating bridges - and perhaps one day their floating-tunnel descendants - open up new possibilities for connection, mobility, and engineering.
[Note that this article is a transcript of the video embedded above.] Wichita Falls, Texas, went through the worst drought in its history in 2011 and 2012. For two years in a row, the area saw its average annual rainfall roughly cut in half, decimating the levels in the three reservoirs used for the city’s water supply. Looking ahead, the city realized that if the hot, dry weather continued, they would be completely out of water by 2015. Three years sounds like a long runway, but when it comes to major public infrastructure projects, it might as well be overnight. Between permitting, funding, design, and construction, three years barely gets you to the starting line. So the city started looking for other options. And they realized there was one source of water nearby that was just being wasted - millions of gallons per day just being flushed down the Wichita River. I’m sure you can guess where I’m going with this. It was the effluent from their sewage treatment plant. The city asked the state regulators if they could try something that had never been done before at such a scale: take the discharge pipe from the wastewater treatment plant and run it directly into the purification plant that produces most of the city’s drinking water. And the state said no. So they did some more research and testing and asked again. By then, the situation had become an emergency. This time, the state said yes. And what happened next would completely change the way cities think about water. I’m Grady and this is Practical Engineering. You know what they say, wastewater happens. It wasn’t that long ago that raw sewage was simply routed into rivers, streams, or the ocean to be carried away. Thankfully, environmental regulations put a stop to that, or at least significantly curbed the amount of wastewater being set loose without treatment. Wastewater plants across the world do a pretty good job of removing pollutants these days. 
In fact, I have a series of videos that go through some of the major processes if you want to dive deeper after this. In most places, the permits that allow these plants to discharge set strict limits on contaminants like organics, suspended solids, nutrients, and bacteria. And in most cases, they’re individualized. The permit limits are based on where the effluent will go, how that water body is used, and how well it can tolerate added nutrients or pollutants. And here’s where you start to see the issue with reusing that water: “clean enough” is a sliding scale. Depending on how water is going to be used or what or who it’s going to interact with, our standards for cleanliness vary. If you have a dog, you probably know this. They should drink clean water, but a few sips of a mud puddle in a dirty street, and they’re usually just fine. For you, that might be a trip to the hospital. Natural systems can tolerate a pretty wide range of water quality, but when it comes to drinking water for humans, it should be VERY clean. So the easiest way to recycle treated wastewater is to use it in ways that don’t involve people. That idea’s been around for a while. A lot of wastewater treatment plants apply effluent to land as a disposal method, avoiding the need for discharge to a natural water body. Water soaks into the ground, kind of like a giant septic system. But that comes with some challenges. It only works if you’ve got a lot of land with no public access, and a way to keep the spray from drifting into neighboring properties. Easy at a small scale, but for larger plants, it just isn’t practical engineering. Plus, the only benefits a utility gets from the effluent are some groundwater recharge and maybe a few hay harvests per season. So, why not send the effluent to someone else who can actually put it to beneficial use? If only it were that simple. 
As soon as a utility starts supplying water to someone else, things get complicated because you lose a lot of control over how the effluent is used. Once it’s out of your hands, so to speak, it’s a lot harder to make sure it doesn’t end up somewhere it shouldn’t, like someone’s mouth. So, naturally, the permitting requirements become stricter. Treatment processes get more complicated and expensive. You need regular monitoring, sampling, and laboratory testing. In many places in the world, reclaimed water runs in purple pipes so that someone doesn’t inadvertently connect to the lines thinking they’re potable water. In many cases, you need an agreement in place with the end user, making sure they’re putting up signs, fences, and other means of keeping people from drinking the water. And then you need to plan for emergencies - what to do if a pipe breaks, if the effluent quality falls below the standards, or if a cross-connection is made accidentally. It’s a lot of work - time, effort, and cost - to do it safely and follow the rules. And those costs have to be weighed against the savings that reusing water creates. In places that get a lot of rain or snow, it’s usually not worth it. But in many US states, particularly those in the southwest, this is a major strategy to reduce the demand on fresh water supplies. Think about all the things we use water for where its cleanliness isn’t that important. Irrigation is a big one - crops, pastures, parks, highway landscaping, cemeteries - but that’s not all. Power plants use huge amounts of water for cooling. Street sweeping, dust control. In nearly the entire developed world, we use drinking-quality water to flush toilets! You can see where there might be cases where it makes good sense to reclaim wastewater, and despite all the extra challenges, its use is fairly widespread. One of the first plants was built in 1926 at Grand Canyon Village, which supplied reclaimed water to a power plant and to steam locomotives.
Today, these systems can be massive, with miles and miles of purple pipes run entirely separate from the freshwater piping. I’ve talked about this a bit on the channel before. I used to live near a pair of water towers in San Antonio that were at two different heights above ground. That just didn’t make any sense until I realized they weren’t connected; one of them was for the reclaimed water system that didn’t need as much pressure in the lines. Places like Phoenix, Austin, San Antonio, Orange County, Irvine, and Tampa all have major water reclamation programs. And it’s not just a US thing. Abu Dhabi, Beijing, and Tel Aviv all have infrastructure to make beneficial use of treated municipal wastewater, just to name a few. Because of the extra treatment and requirements, many places put reclaimed water in categories based on how it gets used. The higher the risk of human contact, the tighter the pollutant limits get. For example, if a utility is just selling effluent to farmers, ranchers, or for use in construction, exposure to the public is minimal. Disinfecting the effluent with UV or chlorine may be enough to meet requirements. And often that’s something that can be added pretty simply to an existing plant. But many reclaimed water users are things like golf courses, schoolyards, sports fields, and industrial cooling towers, where people are more likely to be exposed. In those cases, you often need a sewage plant specifically designed for the purpose or at least major upgrades to include what the pros call tertiary treatment processes - ways to target pollutants we usually don’t worry about and improve the removal rates of the ones we do. These can include filters to remove suspended solids, chemicals that bind to nutrients, and stronger disinfection to more effectively kill pathogens. This creates a conundrum, though. 
In many cases, we treat wastewater effluent to higher standards than we normally would in order to reclaim it, but only for nonpotable uses, with strict regulations about human contact. But if it’s not being reclaimed, the quality standards are lower, and we send it downstream. If you know how rivers work, you probably see the inconsistency here. Because in many places, just down the river is the next city, whose water purification plant intakes, in effect, reclaim that treated sewage from the people upstream. This isn’t theoretical - it’s just the reality of how humans interact with the water cycle. We’ve struggled with the problems it causes for ages. In 1906, Missouri sued Illinois in the Supreme Court when Chicago reversed its river, redirecting its water (and all the city’s sewage) toward the Mississippi River. If you live in Houston, I hate to break it to you, but a big portion of your drinking water comes from the flushes and showers in Dallas. There have been times when wastewater effluent makes up half of the flow in the Trinity River. But the question is: if they can do it, why can’t we? If our wastewater effluent is already being reused by the city downstream to purify into drinking water, why can’t we just keep the effluent for ourselves and do the same thing? And the answer again is complicated. It starts with what’s called an environmental buffer. Natural systems offer time to detect failures, dilute contaminants, and even clean the water a bit - sunlight disinfects, bacteria consume organic matter. That’s the big difference when one city, in effect, reclaims water from another upstream. There’s nature in between. So a lot of water reclamation systems, called indirect potable reuse, do the same thing: you discharge the effluent into a river, lake, or aquifer, then pull it out again later for purification into drinking water. By then, it’s been diluted and treated somewhat by the natural systems.
Direct potable reuse projects skip the buffer and pipe straight from one treatment plant to the next. There’s no margin for error provided by the environmental buffer. So, you have to engineer those same protections into the system: real-time monitoring, alarms, automatic shutdowns, and redundant treatment processes. Then there’s the issue of contaminants of emerging concern: pharmaceuticals, PFAS [P-FAS], personal care products - things that pass through people or households and end up in wastewater in tiny amounts. Individually, they’re in parts per billion or trillion. But when you close the loop and reuse water over and over, those trace compounds can accumulate. Many of these aren’t regulated because they’ve never reached concentrations high enough to cause concern, or there just isn’t enough knowledge about their effects yet. That’s slowly changing, and it presents a big challenge for reuse projects. They can be dealt with at the source by regulating consumer products, encouraging proper disposal of pharmaceuticals (instead of flushing them), and imposing pretreatment requirements for industries. It can also happen at the treatment plant with advanced technologies like reverse osmosis, activated carbon, advanced oxidation, and bio-reactors that break down micro-contaminants. Either way, it adds cost and complexity to a reuse program. But really, the biggest problem with wastewater reuse isn’t technical - it’s psychological. The so-called “yuck factor” is real. People don’t want to drink sewage. Indirect reuse projects have a big benefit here. With some nature in between, it’s not just treated wastewater; it’s a natural source of water with treated wastewater in it. It’s kind of a story we tell ourselves, but we lose the benefit of that with direct reuse: Knowing your water came from a toilet—even if it’s been purified beyond drinking water standards—makes people uneasy. 
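The accumulation concern is easy to see with a toy mass balance - a sketch, not a regulatory model, with every number below invented for illustration. Each pass through the city adds a little of some trace compound, treatment removes most of it, and a fraction of the supply is recycled wastewater:

```python
# Toy mass balance for a trace compound in a partial-reuse loop.
# All concentrations and fractions are illustrative assumptions.

def steady_state_ppb(fresh_ppb, added_ppb, recycle_frac, removal_frac):
    """Tap-water concentration once the loop reaches equilibrium.

    Each pass: tap water picks up `added_ppb` during use, treatment
    removes `removal_frac` of what's in the wastewater, and the supply
    is blended as (1 - recycle_frac) fresh + recycle_frac recycled.
    """
    carryover = recycle_frac * (1.0 - removal_frac)
    return ((1.0 - recycle_frac) * fresh_ppb + carryover * added_ppb) / (1.0 - carryover)

# 30% reuse, 90% removal, 1 ppb added per use, 0.05 ppb in the fresh source.
# Iterating the loop converges to the same value as the closed form:
c = 0.0
for _ in range(50):
    c = 0.7 * 0.05 + 0.3 * 0.1 * (c + 1.0)
print(round(c, 3))                                       # 0.067
print(round(steady_state_ppb(0.05, 1.0, 0.3, 0.9), 3))   # 0.067
```

The point of the sketch: the denominator shrinks as the recycle fraction grows or the removal rate drops, so compounds that barely register in a once-through system can climb toward meaningful concentrations in a tightly closed loop.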
You might not think about it, but turning the tap on, putting that water in a glass, and taking a drink is an enormous act of trust. Most of us don’t understand water treatment and how it happens at a city scale. So that trust that it’s safe to drink largely comes from seeing other people do it and past experience of doing it over and over and not getting sick. The issue is that, when you add one bit of knowledge to that relative void of understanding - this water came directly from sewage - it throws that trust off balance. It forces you to rely not on past experience but on the people and processes in place, most of which you don’t understand deeply, and generally none of which you can actually see. It’s not as simple as just revulsion. It shakes up your entire belief system. And there’s no engineering fix for that. Especially for direct potable reuse, public trust is critical. So on top of the infrastructure, these programs also involve major public awareness campaigns. Utilities have to put themselves out there, gather feedback, respond to questions, be empathetic to a community’s values, and try to help people understand how we ensure water quality, no matter what the source is. But also, like I said, a lot of that trust comes from past experience. Not everyone can be an environmental engineer or licensed treatment plant operator. And let’s be honest - utilities can’t reach everyone. How many public meetings about water treatment have you ever attended? So, in many places, that trust is just going to have to be built by doing it right, doing it well, and doing it for a long time. But, someone has to be first. In the U.S., at least on the city scale, that drinking water guinea pig was Wichita Falls. They launched a massive outreach campaign, invited experts for tours, and worked to build public support. But at the end of the day, they didn’t really have a choice. The drought really was that severe.
They spent nearly four years under intense water restrictions. Usage dropped to a third of normal demand, but it still wasn’t enough. So, in collaboration with state regulators, they designed an emergency direct potable reuse system. They literally helped write the rules as they went, since no one had ever done it before. After two months of testing and verification, they turned on the system in July 2014. It made national headlines. The project ran for exactly one year. Then, in 2015, a massive flood ended the drought and filled the reservoirs in just three weeks. The emergency system was always meant to be temporary. Water essentially went through three treatment plants: the wastewater plant, a reverse osmosis plant, and then the regular water purification plant. That’s a lot of treatment, which is a lot of expense, but they needed to have the failsafe and redundancy to get the state on board with the project. The pipe connecting the two plants was above ground and later repurposed for the city’s indirect potable reuse system, which is still in use today. In the end, they reclaimed nearly two billion gallons of wastewater as drinking water. And they did it with 100% compliance with the standards. But more importantly, they showed that it could be done, essentially unlocking a new branch on the skill tree of engineering that other cities can emulate and build on.
More in science
How Canada's largest city developed a 30 kilometer network of pedestrian tunnels
Bison have made a remarkable comeback in Yellowstone National Park, going from fewer than two dozen animals at the turn of the last century to roughly 5,000 today. Their return, a study covered by E360 finds, has had a profound impact on grasslands in the region.
Belting your favorite song over prerecorded music into a microphone in front of friends and strangers at karaoke is a popular way for people around the world to destress after work or celebrate a friend’s birthday. The idea for the karaoke machine didn’t come from a singer or a large entertainment company but from Nichiden Kogyo, a small electronics assembly company in Tokyo. The company’s founder, Shigeichi Negishi, was singing to himself at work one day in 1967 when an employee jokingly told him he was out of tune. Figuring that singing along to music would help him stay on pitch, Negishi began thinking about how to make that possible. He had the idea to turn one of the 8-track tape decks his company manufactured into what is now known as the karaoke machine. Later that year, he built what would become the first such machine, which he called the Music Box. The 30-centimeter cube housed an 8-track player for four tapes of instrumental recordings and included a microphone to sing into. He sold his machine in 1967 to a Japanese trading company, which then sold it to restaurants, bars, and hotel banquet halls, where it was used as entertainment. The term karaoke was coined in the 1970s to describe the act of singing along to prerecorded music. It is a combination of two Japanese words: kara, meaning empty, and okesutora, meaning orchestra. Within a few years, dedicated establishments known as karaoke bars began to open across Japan. Today the country has more than 8,000, according to Statista. The karaoke machine has been commemorated as an IEEE Milestone. The dedication ceremony was held in June in the area that houses karaoke booths connected to the Shinagawa Prince Hotel in Tokyo. Negishi’s family attended the event along with IEEE leaders. Negishi died last year at the age of 100.
He was grateful that people enjoy karaoke around the world, his son, Akihiro Negishi, said at the ceremony, “though he didn’t imagine it to spread globally when he created it.”

Accidentally inventing one of the world’s favorite pastimes

Shigeichi Negishi grew up in Tokyo, where his mother ran a tobacco store and his father oversaw regional elections as a government official. After earning a bachelor’s degree in economics from Hosei University in Tokyo, he was drafted into the Imperial Japanese Army during World War II. He became a prisoner of war and spent two years in Singapore before being released in 1947. He returned to Tokyo and sold cameras for electrical parts manufacturer Olympus Corp. In 1956 he started Nichiden Kogyo, which manufactured and assembled portable radios for the home and car, according to the Engineering and Technology History Wiki entry about the karaoke machine. Negishi would start each morning singing along to the “Pop Songs Without Lyrics” radio show, according to a Forbes article. He typically didn’t sing in the office, but one fateful day he did. Negishi was inspired to engineer one of the 8-track tape decks his company manufactured into what is now known as the karaoke machine. An 8-track tape deck can play and record audio using magnetic tape cartridges. Nichiden Kogyo’s Music Box was a 30-centimeter cube with slots to insert four 8-track tapes on the top panel, with control buttons to play, stop, or skip to the next song. Inside each 13-centimeter-long rectangular 8-track cartridge is a loop of almost 1-centimeter-wide magnetic tape that is coiled around a circular reel, as explained in an EverPresent blog post on the technology. The player pulls the tape across an audio head, which reads the magnetic patterns and translates them into sound.
Each tape had a metal sensing strip that notified a solenoid coil located in the player when a song had ended or if a person pressed the button to switch to the next song, according to an Autodesk Instructables blog post. The coil created a magnetic field when electricity passed through it, which rotated the spindle on which the audio head was mounted to move to the next track on the tape. Each tape could hold about eight songs. Negishi added a microphone amplifier to the player’s top panel, as well as a mixing circuit. The user could adjust the volume of the music and the microphone. He also recorded 20 of his favorite songs onto the tapes and printed out the lyrics on cardstock. He tested the machine by singing a popular ballad, “Mujo no Yume” (“The Heartless Dream”). “It works! That’s all I was thinking,” Negishi told reporter Matt Alt years later, when asked what his thoughts were the first time he tested the Music Box. Alt wrote Pure Invention: How Japan Made the Modern World. The fees to file a patent were too expensive, according to the ETHW entry, so in 1967 Negishi sold the rights to the machine to Mitsuyoshi Hamasu, a salesman at Kokusai Shohin. The Tokyo-based trading company began selling and leasing the machines by the end of the year. In 1969 engineers at Kokusai Shohin added a coin acceptor to the machine. The company renamed the Music Box the Sparko Box. In six years, about 8,000 units were sold, Hamasu said in an interview about the rise of karaoke. Karaoke became so popular that in the 1980s, venues and bars specializing in soundproofed rooms known as karaoke boxes emerged. Groups could rent the rooms by the hour. Negishi’s family owns the first Music Box he made. It still works.
The Milestone plaque recognizing the karaoke machine is on display in front of the former headquarters of Nichiden Kogyo, which Negishi turned into a tobacco shop after he retired. The shop is now owned by his daughter. The plaque reads: “The first karaoke machine was created in 1967 by mixing live vocals with prerecorded accompaniment for public entertainment, leading to its worldwide popularity. Created by Shigeichi Negishi of Nichiden Kogyo, and originally called Music Box (later Sparko Box), it included a mixer, microphone, and 8-track tape player, with a coin payment system to charge the singer. An early operational machine has been displayed at the original company site in Tokyo.” Administered by the IEEE History Center and supported by donors, the Milestone program recognizes outstanding technical developments around the world. The IEEE Tokyo Section sponsored the nomination.
An updated evolutionary model shows that living systems evolve in a split-and-hit-the-gas dynamic, where new lineages appear in sudden bursts rather than during a long marathon of gradual changes. The post The Sudden Surges That Forge Evolutionary Trees first appeared on Quanta Magazine
Back in the dawn of the 21st century, the American Chemical Society founded a new journal, Nano Letters, to feature letter-length papers about nanoscience and nanotechnology. This was coincident with the launch of the National Nanotechnology Initiative, and it was back before several other publishers put out their own nano-focused journals. For a couple of years now I've been an associate editor at NL, and it was a lot of fun to work with my fellow editors on putting together this roadmap, intended to give a snapshot of what we think the next quarter century might hold. I think some of my readers will get a kick out of it.