[Note that this article is a transcript of the video embedded above.] Foresthill Bridge soars across the valley of the North Fork of the American River just outside Auburn, California. At more than 700 feet or 200 meters above the canyon floor, it’s the fourth-tallest bridge in the United States. When it opened in 1973, crowds cheered for the impressive new structure. But if you take a closer look, it doesn’t really make any sense. This isn’t an interstate highway or even a major thoroughfare. The road sees only a few thousand vehicles a day, connecting Auburn, an exurb of Sacramento with a population just shy of 14,000, to scattered rural communities and recreation areas in the western foothills of the Sierra Nevada. And while the American River does occasionally flood, it doesn’t flood 700 feet. Before this, the crossing was basically a low-water bridge. A structure of this magnitude just looks out of place. But it wasn’t just a boondoggle, at least not at the outset. It was built that way for a reason, and the story behind it is not only pretty wild, but it also sits at the hinge point of a major chapter in American infrastructure. I’m Grady, and this is Practical Engineering. California’s Central Valley is one of the world’s great agricultural regions: over 400 miles long, more than 50 miles wide, this remarkably fertile area is nearly half the size of England. The city of Sacramento sits near its center, right where the Sacramento and American Rivers meet. To manage and distribute water across this enormous landscape, the federal government launched the Central Valley Project in 1933, a sweeping effort by the U.S. Bureau of Reclamation to store water in the wetter northern part of the valley and distribute it to the drier south. In the process, the system would also generate hydropower and reduce flood risk for growing urban centers. I’m glossing over a lot here.
The history of California is steeped in water issues, and even just the Central Valley Project is nearly a century of details. But, critically, Folsom Dam was one of the first big components of the plan. Built in 1955 on the American River, the concrete gravity dam provided significant flood protection to the City of Sacramento. However, it was constructed relatively early in our understanding of basin-scale hydrology, when great uncertainty still surrounded the frequency and magnitude of flooding over long periods of time. It became clear pretty quickly that Folsom Dam didn’t quite offer as much flood protection as was originally promised. Plus, because Folsom had to keep its flood pool empty to handle potential inflows, its ability to store water for irrigation or municipal supply purposes was somewhat limited. The answer to these problems, at least according to the federal government, was Auburn Dam, authorized by Congress in 1968. The new structure would sit upstream of Folsom and control the variable flows of the North and Middle Forks of the American River. It would be the tallest dam in California and one of the tallest in the country. And work began in earnest in the early 1970s. One of the first steps in the process was rerouting the American River. Crews built a large cofferdam and carved a diversion tunnel through the canyon wall. With the water redirected, they could begin drying out the bend in the river where the huge new dam would eventually sit. Once the site was dried out, crews began exploring the underlying geology more thoroughly. They drilled boreholes, excavated tunnels and shafts, and surveyed the rock that would serve as the dam’s foundation. The site’s geology turned out to be more complex than expected. Some zones of rock were more compressible than others, which could lead to dangerous stress concentrations in the dam.
And, there were a lot of joints and fissures in the rock mass, making it more challenging to predict how they would behave under extreme loads, in addition to creating paths for water. So the next phase of the project was a major foundation treatment program starting in 1974. This mainly involved pressure grouting fractures to reinforce weak zones against the enormous weight of the structure and to make the geology more watertight, preventing seepage from flowing under the dam. With major construction works underway, anticipation for the reservoir was growing. Around the future rim, land values soared, and developers rushed to stake claims. Lakefront homes were planned. Entire communities emerged, built on the promise of a shining new shoreline. Then, in August 1975, a magnitude 5.9 earthquake struck near Oroville Dam, only about 50 miles or 80 kilometers away from the site. The quake only caused minor damage to structures in the area, but it rattled confidence in the Auburn project. The geology of the western Sierra Nevada had long been considered stable. But the Oroville earthquake introduced a troubling possibility: that the loading and filling of large reservoirs could trigger seismic events in the area. This phenomenon, known as reservoir-induced seismicity, is still not well understood even to this day. The pressure of water infiltrating bedrock and the weight of a reservoir can change the balance of forces along faults, potentially triggering movement. You know, when Oroville is full, that’s roughly 10 trillion pounds of force or 4 trillion kilograms of mass. It’s a staggering amount. You can imagine how that might affect the underlying geology. The Auburn Dam, as a thin concrete arch, in contrast to the concrete gravity dam at Folsom or the earthfill embankment at Oroville, would be especially vulnerable to earthquakes. Thin-arch dams rely on the canyon walls to resist the thrust of the structure.
In fact, I’ve made a video all about the topic you can check out after this! If one side shifts even a little during a quake, the results could be catastrophic. In April 1976, a report by the Association of Engineering Geologists concluded that an earthquake like the one at Oroville could cause the proposed Auburn Dam to catastrophically fail. It was back to the drawing board for the project, even as the foundation grouting program continued. And then the project was shaken again. That same year, the newly completed Teton Dam in Idaho collapsed during its first filling, killing 11 people and causing billions in damage. It had been built by the same agency, the Bureau of Reclamation. Concern continued to mount about the safety of Auburn Dam, which would have catastrophic consequences for the thousands of Californians downstream if it were to fail. It was all enough to bring Auburn’s momentum to a halt. While dam construction paused, one aspect of the project had already been finished: Foresthill Bridge. With a cofferdam on the river and the diversion tunnel only sized for smaller floods, there was a risk of overtopping the existing bridge, cutting off access between Auburn and the Sierra foothills. So, the Bureau of Reclamation decided to get a head start on a project that seemed inevitable anyway: a new bridge, permanent and high enough to span the reservoir once it filled. If they were going to build a new bridge, they figured they might as well build it right the first time. The result was a striking steel cantilever bridge with two slender concrete piers soaring skyward from the canyon floor. [Actually, there was another bridge planned over the Middle Fork of the American River - the Ruck-a-Chucky Bridge. It was a wild idea: a curved cable-stayed bridge where all the cables are anchored in the hillsides rather than tall towers. But while that project was shelved, Foresthill made it all the way through design and construction.]
At the time of its opening in 1973, it was the second-highest bridge in the United States. But as time went on, it became increasingly clear they had jumped the gun. By 1980, engineers floated two new dam designs that could withstand potential earthquakes. Both would be shifted slightly downstream from the original site. But by then, the tide of public and government support for the dam had turned. Construction costs had ballooned, and Auburn Dam was looking less feasible every day. As originally proposed, the structure would be even larger than Hoover Dam, but store less than 10% of Lake Mead’s volume. Meanwhile, upgrades to Folsom Dam and improved levees around Sacramento offered far cheaper ways to reduce the flood risk that was the major impetus for the dam in the first place. New hydrologic data also suggested that earlier flow estimates had been overly optimistic, reducing its value for conservation. The benefits of Auburn Dam were shrinking as the costs grew. It was turning into an incredibly expensive solution in search of a problem. At the same time, environmental and advocacy groups were gaining momentum. The project would flood canyons used for whitewater rafting and kayaking. It would drown ecosystems, inundate archaeological sites, and destroy long segments of the wild and scenic forks of the American River. It became clearer and clearer that the ends simply couldn’t justify the means. And yet, the idea never fully went away. In 1986, a massive flood hit the area. Water backed up at the diversion tunnel at Auburn, overtopped the cofferdam, and caused it to fail. Downstream levees were breached, and much of Sacramento flooded. For a moment, the momentum behind Auburn Dam and its promise of flood protection returned. But, it later became clear that the flood wasn’t entirely a natural disaster. The Bureau hadn’t followed the operating guidelines at Folsom Dam, worsening conditions downstream.
And by then, grassroots opposition, cost concerns, and shifting priorities had all but put the Auburn Dam project to bed. Various proposals resurfaced over the years, including the idea of a “dry dam” that would only hold water during floods, but none gained much traction. With its many iterations and proposals, the project became known as the dam that wouldn’t die. But in 2008, the state of California revoked the Bureau’s water rights permit for the project, maybe not sealing its fate completely, but at least burying it several feet deeper. This story really gets to the heart of the challenge with large-scale public works projects. No matter how you configure them, there are big losers and big winners. There’s no doubt that a dam across the American River upstream of Folsom could provide significant benefits to the public: flood control, water supply, hydropower, recreational opportunities, or some combination of them all. But those benefits have to be weighed against real costs: environmental damage, staggering capital investment, long-term maintenance, the inherent risk of catastrophic failure, and the social toll of displacement and disruption. The mid-20th century was the heyday of American dam building, an era driven by ambition and optimism, but also by uncertainty. We didn’t have enough historical data to fully understand river systems. We couldn’t yet grasp the long-term consequences of altering them. And we couldn’t see into the future to know what the true impacts of these structures would be or what the cost of keeping them in good shape might amount to. Since then, we have a lot more experience with huge multi-purpose reservoirs. And it seems, in general, that the more we learn, the more the answer to whether they’re worth it seems to be: maybe not. And that maybe turns into a probably when you consider that all the best sites are already taken. 
New Melones Dam, completed by the Bureau of Reclamation in 1979, not too far from Auburn, faced a lot of similar controversy and pushback. Although the project was eventually completed, the fight was bitter, and its legacy so far is mixed. The project is widely considered to be the last great American dam. At least, great in size, if not in public sentiment. No other reservoir of that scale has been built in the U.S. since. And with the Auburn Dam project mostly dead, it seems doubtful there ever will be. The American River continued flowing through the diversion tunnel until 2007, when a new pump station and restoration project returned the river to its original channel. Kayakers can now navigate downstream, and even have some new features at the pump station to choose from: the artificial rapids on the left or the screen channel on the right. After more than three decades, the river was back in its place, tying a bow on a dam that was never built. And yet, just a few miles upstream, the Foresthill Bridge still stands, dramatic, overbuilt, and strangely out of sync with its surroundings. And we’re still kind of stuck taking care of this bridge, whose scale is so out of proportion with its purpose. In the 2010s, the bridge underwent a major seismic retrofit to improve its safety and make future inspections easier. Most recently, it was part of a nationwide program inspecting bridges built with T-1 steel, an alloy that, in some cases, has shown concerning cracking at welds. The I-40 bridge crack in Memphis, which I covered in an earlier video, triggered the effort. And there have been quite a few defects found in bridges since then, so here’s hoping that Foresthill doesn’t make the list. It’s a cool structure in its own right. But it stands for more than just an engineering achievement. Auburn Dam left a lot of scars, both on the physical landscape and the political one. But it also left this bridge that became more than just an out-of-place oddity. 
In a sense, it’s become a monument to the end of an era in US major public works projects, and, hopefully, a tribute to the caution and care that will shape the next one.
Flaming Gorge Dam rises from the Green River in northern Utah like a concrete wedge driven into the canyon, anchored against the sheer rock walls that flank it. It’s quintessential, in a way. It’s what we picture when we think about dams: a hulking, but also somehow graceful, wall of concrete stretching across a narrow rocky valley. But to dam engineers, there’s nothing quintessential about it. So-called arch dams are actually pretty rare. For reference, the US has about 92,000 dams listed in the national inventory. I couldn’t find an exact number, but based on a little bit of research, I estimate that we have maybe around 50 arch dams - it’s less than a tenth of a percent. The only reason we think of arch dams as archetypal is because they’re so huge. I counted 11 in the US that have their own visitor center. There just aren’t that many works of infrastructure that double as tourist destinations, and the reason for it is, I think, kind of interesting. Because an arch dam isn’t just an engineering solution to holding back water, and it’s not just a solution to holding back a lot of water. It’s all about height, and I built a little demo to show you what I mean. I’m Grady, and this is Practical Engineering. Engineers love categories, and dams are no exception. You can group them in a lot of ways, but mostly, we care about how they handle the incredible force of water they hold back. Embankment dams do it with earth or rock, relying on friction between the individual particles that make up the structure. Gravity dams do it with weight. Let me show you an example. I have my tried and trusted acrylic flume with a small plastic dam. Once this is all set up, I can start filling up the reservoir. This little dam is a little narrower than the flume. It doesn’t touch the sides, so it leaks a bit. The reason for that will be clear in a moment. And hopefully you can see what’s about to happen.
This gravity dam doesn’t have much gravity in it, so it doesn’t take much water at all before you get a failure. I’m counting failure as the first sign of movement, by the way. That’s when the stabilizing forces are overcome by the destabilizing ones. And the little dam by itself could hold until my reservoir was about a quarter of the way to the top. Gravity dams get their stability against sliding from… you guessed it… friction. Bet you thought I was going to say gravity. And actually, it kind of is gravity, since frictional resistance is a function of just two variables: the normal force (in other words, the weight of the structure) and a coefficient that depends on the two materials touching. Engineers analyze the stability of gravity dams in cross-section, essentially taking a small slice of the structure. You want every slice to be able to support itself. That’s why I didn’t want the demo touching the sides of the flume; it would add resistance that doesn’t actually exist in a cross-section. The destabilizing force is hydrostatic pressure from the reservoir, which increases with depth. And the stabilizing force is friction. There are some complexities to this that we’ll get into, but very generally, as long as you have more friction than pressure, you’re good; you have a stable structure. So let’s add some normal force to the demo and see what happens. [Beat] You can see my little reservoir gets a little higher before the dam fails, about halfway to the top. And we can try it again with more weight. But the result gets a little more interesting… the dam didn’t actually slide this time, but it still failed. Turns out gravity dams have two major failure modes: sliding and overturning. Resistance to sliding comes from friction, which really doesn’t depend on how the weight of the dam is distributed. That’s not true for overturning failures. Let’s look back at our cross-section. For a unit width of dam, the hydrostatic pressure from the reservoir looks like this. 
Pressure increases with depth. And the area under this line is the total force pushing the dam downstream. We can simplify that distribution and treat it like it’s a single force, and it turns out when you do that, the force acts a third of the way up the total depth of water. Most dams want to rotate about the downstream toe, so you have a destabilizing force offset from the point of rotation. In other words, you have a torque, also called a moment. The dam has to create an opposite moment around that point to remain stable. Moment or torque is calculated as the force multiplied by its perpendicular distance from the point of rotation. So, the further the center of mass is from the downstream toe, the more stable the structure is, and the demo shows it too. Here’s where we left the weights the last time, and let’s see it happen again. The reservoir makes it about two-thirds of the way up the walls before the dam overturns. Let’s make a simple shift. Just move the weights further upstream and try again. It’s not a big difference. The reservoir reaches about three-quarters the way up before we see a sliding failure, but shifting the weights did increase the stability. And this is why a lot of gravity dams have a fairly consistent shape, with most of the weight concentrated on the upstream side, and usually a sloped or stepped downstream face. Interestingly, you can use the force of water against itself in a way. Watch what happens when I turn my little model around. Now the hydrostatic pressure applies both a destabilizing and stabilizing force, so you get more resistance for a given depth. A lot of deployable temporary storm barriers and cofferdam systems take advantage of this kind of configuration. You can imagine if I extended the base even further, I could create a structure that was self-stable just from its geometry alone. The weight of the water on the footing would overcome the lateral pressure. But there’s a catch to this. 
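The sliding and overturning checks described above can be sketched numerically. This is a minimal illustration, not a real design calculation: the depth, slice weight, friction coefficient, and lever arm below are all made-up numbers for a one-foot-wide slice, and real analyses include uplift, cohesion, and many load cases.

```python
# Illustrative stability check for a 1-ft-wide slice of a small gravity dam.
# All numbers are hypothetical.

GAMMA_W = 62.4  # unit weight of water, lb/ft^3

def driving_force(depth_ft):
    """Resultant hydrostatic force: the area under the triangular pressure
    diagram, 1/2 * gamma * h^2, acting h/3 above the base."""
    return 0.5 * GAMMA_W * depth_ft**2

def sliding_ok(depth_ft, weight_lb, friction_coeff):
    """Sliding check: frictional resistance (mu times the normal force,
    i.e. the dam's weight) must exceed the hydrostatic driving force."""
    return friction_coeff * weight_lb > driving_force(depth_ft)

def overturning_ok(depth_ft, weight_lb, lever_arm_ft):
    """Overturning check: the weight's moment about the downstream toe must
    exceed the water's moment (resultant force times h/3)."""
    return weight_lb * lever_arm_ft > driving_force(depth_ft) * depth_ft / 3

h, w = 10.0, 6000.0
print("stable against sliding:", sliding_ok(h, w, friction_coeff=0.7))

# Same weight, but moved upstream (a longer lever arm about the toe) resists
# overturning better -- the effect shown in the demo:
for arm in (1.5, 3.0):
    print(f"lever arm {arm} ft: stable against overturning? "
          f"{overturning_ok(h, w, arm)}")
```

Note that moving the weights changes the overturning result but not the sliding result, since friction depends only on the total normal force, not where it sits.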
This is fully stable now, but watch what happens when I give the dam just a bit of a tilt. All of a sudden, it’s no longer stable. This might seem kind of intuitive, but I think it’s important to explain what’s actually going on. Hydrostatic pressure from the reservoir doesn’t only act on the face of a dam. With smooth plastic on smooth plastic, you get a pretty nice seal, but as soon as even a tiny gap opens, water gets underneath. Now there’s upward pressure on the bottom of the dam as well. If you’re depending on the downward force of a dam from its weight for stability, it’s easy to see why an upward force is a bad thing. And it’s especially dramatic in the example with the upstream footing. In that case, the downward pressure of the reservoir is acting as a stabilizing force, but if water can get underneath that footing, it basically cancels out. The pressure on the bottom is the same as the pressure on the top. But this isn’t only an issue in that case. The ground isn’t waterproof. In fact, I’ve done a video all about the topic. Soil and rock work more like a sponge than a solid material, and water can flow through them. That’s how we get aquifers and wells and springs and such. But it’s a problem for gravity dams, because water can seep below the structure and apply pressure to the bottom, essentially counteracting its weight. We call it uplift. Looking back at the cross-section, we can estimate this. Of course, you have the triangular pressure distribution along the upstream face. But at this point you have the full hydrostatic pressure also pushing upward. And at the downstream toe, you have no pressure (it’s exposed to the atmosphere). So, now you have a pressure distribution below the dam that looks like this. Of course, this part can get a lot more complicated since most dams don’t sit flush with the ground, and many are equipped with drains and cutoff walls, so definitely go check that other video out if you want to learn more.
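To put a number on that uplift picture, here is a small sketch assuming the simple triangular distribution described above: full reservoir head at the upstream heel tapering linearly to zero at the downstream toe, with no drains or cutoff walls. The depth, base width, and slice weight are made-up values for a one-foot-wide slice.

```python
# Hedged sketch of the simple uplift estimate: triangular pressure under the
# base, full hydrostatic head at the heel, zero at the toe. Illustrative
# numbers only; drains and cutoffs would reduce the real uplift.

GAMMA_W = 62.4  # lb/ft^3

def uplift_force(depth_ft, base_width_ft):
    """Area of the triangular uplift pressure diagram under the dam,
    per foot of dam width."""
    heel_pressure = GAMMA_W * depth_ft  # lb/ft^2 at the upstream heel
    return 0.5 * heel_pressure * base_width_ft

h, base, weight = 10.0, 8.0, 6000.0  # hypothetical slice
u = uplift_force(h, base)
print(f"uplift: {u:.0f} lb per ft of width")
print(f"fraction of the slice's weight canceled: {u / weight:.0%}")
```

Even in this toy case, uplift cancels a large fraction of the weight the dam was counting on for friction, which is why drainage galleries and grout curtains get so much attention in real designs.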
But let me show you the issue this causes with some recreational math on our cross-sectional slice of the dam. The taller the dam, the greater the uplift force. That happens linearly. In other words, the force is proportional to the depth of the reservoir. But look at the lateral force. Again, remember it’s the area under this triangle. Maybe you remember that formula: one-half times base times height. Well, the height is the depth of the water. And the base is also a function of the depth. More specifically, it’s the unit weight of water times depth. Multiply it together, and you see the challenge: the force increases as a function of the depth squared. So for every unit of additional height you want out of a gravity dam, you need significantly more weight to resist the forces, which means more material and thus a lot more cost. Hopefully all this exposition is starting to reveal a solution to this rapid divergence of stability and loads as a reservoir increases in height. Dams don’t actually float in space like my demonstration and graphics show. You know, by necessity, they extend across the entire valley and usually key into the abutments on either side. Naturally, that connection at the sides is going to offer some resistance to the forces dams need to withstand. And if you can count on that resistance, you can significantly lower the mass, and thus the cost, of the structure. But, again, this gets complicated. Let’s go back to the demo. Now I’m going to replace my gravity dam with something much simpler. Just a sheet of aluminum flashing, and, to simulate that resistance provided by socketing the structure into the earth, I’ve taped it to the bottom and sides… with some difficulty, actually. When I fill up the reservoir with water, it holds just fine. There’s a little leaking past my subpar tape job, but this is a fully stable structure. And I think the comparison here is pretty stark. 
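That depth-squared growth is easy to confirm numerically. This quick sketch just evaluates the force-per-unit-width formula from the paragraph above, F = ½ × (unit weight of water) × depth², at a few depths:

```python
# The lateral hydrostatic force per unit width of dam is the area under the
# triangular pressure diagram: F = 1/2 * gamma * h^2. It grows with the
# square of the reservoir depth.

GAMMA_W = 62.4  # unit weight of water, lb/ft^3

def lateral_force(depth_ft):
    return 0.5 * GAMMA_W * depth_ft**2  # lb per ft of dam width

for h in (10, 20, 40):
    print(f"h = {h:3d} ft -> F = {lateral_force(h):>10,.0f} lb/ft")

# Doubling the depth quadruples the force:
assert abs(lateral_force(20) / lateral_force(10) - 4.0) < 1e-12
```

So each doubling of reservoir depth quadruples the load a gravity section has to resist, while the weight (and cost) needed to resist it grows even faster once uplift and overturning are included.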
When you can develop resistance from the sides you can get away with a lot less dam. But it’s harder than you might think to do that. For one, the natural soil or rock at a dam site might not be all that strong. The banks of rivers aren’t generally known for their stability, so the prospect of transferring enormous amounts of force into them rarely makes a lot of engineering sense. But the other challenge is in the dam itself. Take a look back at this demo. See how my dam is bending under the force of the water. It’s holding there, but, you know, we don’t actually build dams out of aluminum flashing. Resisting loads in this way basically treats the dam like a beam, like a sideways bridge girder. Except, unlike girder bridges that usually only span up to a few hundred feet, dams are often much longer. Even the stiffest modern materials, like prestressed concrete boxes, would just deflect too much under load to transfer all the hydrostatic pressure across a valley into the abutments. Plus we usually don’t like to rely on steel too much in dams because of issues with corrosion and longevity. So where a typical beam experiences both tensile and compressive stress on opposite sides, we really need to transfer all that load, creating only compressive stress in the material. I’m sure you see where I’m going with this. How have we been building bridges for ages from materials like masonry where tensile stress isn’t an option? It’s arches! The arch is a special shape in engineering because you can transfer loads by putting the material in compression only, allowing for simpler, cheaper, and longer-lasting materials like masonry and concrete. You basically co-opt the geology for support, reducing the need for a massive structure. For completeness’s sake, let me show you how it works in the demo. I’ve formed a little arch from my thin sheet of aluminum. Now when I fill up the reservoir, there’s no deflection like the previous example.
And again, side by side, it’s easy to see the benefits here. You get a lot more efficiency out of your materials than you do with an earthen embankment dam or a gravity structure. Of course, there are some drawbacks here. For one, arches create horizontal forces at the supports called thrusts that have to be resisted. Sites that use this design really require strong, competent rock in the abutments to withstand the enormous loads. And just like with bridges, the span matters. The wider the valley, the bigger the arch needs to be, so these dams generally only make sense in deep gorges and steep, narrow canyons. The engineering is a lot more complicated, too. You can’t use a simple 2D cross-section to demonstrate stability. The structural behavior is inherently three-dimensional, which is tougher to characterize, especially when you consider unusual conditions like earthquakes and temperature effects. And since they’re lighter, arch dams don’t resist uplift forces very well, making foundation drainage systems more critical. All this means that it’s really only a solution that makes economic sense in a narrow range of circumstances, one of the most important being height. For smaller dams, the additional complexity and expense of designing and building an arch aren’t justified by the structural efficiency. Gravity and embankment dams are much more adaptable to a wider range of site conditions. And there are other types of dams, too, that blend these ideas. Multiple-arch dams use a series of smaller arches supported by buttresses, dividing the span into more manageable components. Even what is perhaps the most famous arch dam in the world - Hoover Dam - isn’t a pure arch structure. Technically, it’s a gravity-arch dam, meaning it resists part of the water load through mass while also distributing the forces into the canyon through arch action. 
The proportions are carefully balanced to take advantage of the unique site conditions and a canyon that is wider than those most arch dams are built in. And so, when you look at the tallest dams on Earth, one structural form dominates. By my estimation, around 40 percent of the tallest 200 dams in the world incorporate an arch into their design. There aren’t that many places where it makes sense, but when you compare what it takes to hold a reservoir back in a narrow canyon valley, I think the case for arches is pretty clear.
In the early 1900s, Seattle was a growing city hemmed in by geography. To the west was Puget Sound, a vital link to the Pacific Ocean. To the east, Lake Washington stood between the city and the farmland and logging towns of the Cascades. As the population grew, pressure mounted for a reliable east–west transportation route. But Lake Washington wasn’t easy to cross. Carved by glaciers, the lake is deceptively deep, over 200 feet or 60 meters in some places. And under that deep water sits an even deeper problem: a hundred-foot layer of soft clay and mud. Building bridge piers all the way to solid ground would have required staggeringly large supports. The cost and complexity made it infeasible to even consider. But in 1921, an engineer named Homer Hadley proposed something radical: a bridge that didn’t rest on the bottom at all. Instead, it would float on massive hollow concrete pontoons, riding on the surface like a ship. It took nearly two decades for his idea to gain traction, but with the New Deal and Public Works Administration, new possibilities for transportation routes across the country began to open up. Federal funds flowed, and construction finally began on what would become the Lacey V. Murrow Bridge. When it opened in 1940, it was the first floating concrete highway of its kind, a marvel of engineering and a symbol of ingenuity under constraint. But floating bridges, by their nature, carry some unique vulnerabilities. And fifty years later, this span would be swallowed by the very lake it crossed. Between then and now, the Seattle area has kind of become the floating concrete highway capital of the world. That’s not an official designation, at least not yet, but there aren’t that many of these structures around the globe. And four of the five longest ones on Earth are clustered in one small area of Washington state.
You have Hood Canal, Evergreen Point, Lacey V Murrow, and its neighbor, the Homer M. Hadley Memorial Bridge, named for the engineer who floated the idea in the first place. Washington has had some high-profile failures, but also some remarkable successes, including a test for light rail transit over a floating bridge just last month in June 2025. It's a niche branch of engineering, full of creative solutions and unexpected stories. So I want to take you on a little tour of the hidden engineering behind them. I’m Grady, and this is Practical Engineering. Floating bridges are basically as old as recorded history. It’s not a complicated idea: place pontoons across a body of water, then span them with a deck. For thousands of years, this straightforward solution has provided a fast and efficient way to cross rivers and lakes, particularly in cases where permanent bridges were impractical or when the need for a crossing was urgent. In fact, floating bridges have been most widely used in military applications, going all the way back to Xerxes crossing the Dardanelles in 480 BCE. They can be made portable, quick to erect, flexible to a wide variety of situations, and they generally don’t require a lot of heavy equipment. There are countless designs that have been used worldwide in various military engagements. But most floating bridges, both ancient and modern, weren’t meant to last. They’re quick to put up, but also quick to take out, either on purpose or by Mother Nature. They provide the means to get in, get across, and get out. So they aren’t usually designed for extreme conditions. Transitioning from temporary military crossings to permanent infrastructure was a massive leap, and it brought with it a host of engineering challenges. An obvious one is navigation. A bridge that floats on the surface of the water is, by default, a barrier to boats. So, permanent floating bridges need to make room for maritime traffic. 
Designers have solved this in several ways, and Washington State offers a few good case studies. The Evergreen Point Floating Bridge includes elevated approach spans on either end, allowing ships to pass beneath before the road descends to water level. The original Lacey V. Murrow Bridge took a different approach. Near its center, a retractable span could be pulled into a pocket formed by adjacent pontoons, opening a navigable channel. But, not only did the movable span create interruptions to vehicle traffic on this busy highway, it also created awkward roadway curves that caused frequent accidents. The mechanism was eventually removed after the East Channel Bridge was replaced to increase its vertical clearance, providing boats with an alternative route between the two sides of Lake Washington. Further west, the Hood Canal Bridge incorporates truss spans for smaller craft. And it has hydraulic lift sections for larger ships. The US Naval Base Kitsap is not far away, so sometimes the bridge even has to open for Navy submarines. These movable spans can raise vertically above the pontoons, while adjacent bridge segments slide back underneath. The system is flexible: one side can be opened for tall but narrow vessels, or both for wider ships. But floating bridges don’t just have to make room for boats. In a sense, they are boats. Many historical spans literally floated on boats lashed together. And that comes with its own complications. Unlike fixed structures, floating bridges are constantly interacting with water: waves, currents, and sometimes even tides and ice. They’re easiest to implement on calm lakes or rivers with minimal flooding, but water is water, and it’s a totally different type of engineering when you’re not counting on firm ground to keep things in place. We don’t just stretch floating bridges across the banks and hope for the best. 
They’re actually moored in place, usually by long cables and anchors, to keep the bridge’s materials from being overstressed and to prevent movements that would make the roadway uncomfortable or dangerous. Some anchors use massive concrete slabs placed on the lakebed. Others are tied to piles driven deep into the ground. In particularly deep water or soft soil, anchors are lowered to the bottom with water hoses that jet soil away, allowing the anchor to sink deep into the mud. These anchoring systems do double duty, providing both structural integrity and day-to-day safety for drivers, but even with them, floating bridges have some unique challenges. They naturally sit low to the water, which means that in high winds, waves can crash directly onto the roadway, obscuring visibility and creating serious risks to road users. Motion from waves and wind can also cause the bridge to flex and shift beneath vehicles, especially unnerving for drivers unused to the sensation. In Washington State, all the major floating bridges have been closed at various times due to weather. The DOT enforces wind thresholds for each bridge; if the wind exceeds the threshold, the bridge is closed to traffic. Even if the bridge is structurally sound, these closures reflect the reality that in extreme weather, the bridge itself becomes part of the storm. But we still haven’t addressed the floating elephant in the pool here: the concrete pontoons themselves. Floating bridges have traditionally been made of wood or inflatable rubber, which makes sense if you’re trying to stay light and portable. But permanent infrastructure demands something more durable. It might seem counterintuitive to build a buoyant structure out of concrete, but it’s not as crazy as it sounds. In fact, civil engineering students compete every year in concrete canoe races hosted by the American Society of Civil Engineers. 
Actually, I was doing a little recreational math to find a way to make this intuitive, and I stumbled upon a fun little fact. If you want to build a neutrally buoyant, hollow concrete cube, there’s a neat rule of thumb you can use. Just take the wall thickness in inches, and that’s your outer dimension in feet. Want 12-inch-thick concrete walls? You’ll need a roughly 12-foot cube. This is only fun because of the imperial system, obviously. It’s less exciting to say that the two dimensions have a roughly linear relationship with a factor of 12. And I guess it’s not really that useful except that it helps to visualize just how feasible it is to make concrete float. Of course, real pontoons have to do more than just barely float themselves. They have to carry the weight of a deck and whatever crosses it with an acceptable margin of safety. That means they’re built much larger than a neutrally buoyant box. But mass isn’t the only issue. Concrete is a reliable material and if you’ve watched the channel for a while, you know that there are a few things you can count on concrete to do, and one of them is to crack. Usually not a big deal for a lot of structures, but that’s a pretty big problem if you’re trying to keep water out of a pontoon. Designers put enormous effort into preventing leaks. Modern pontoons are subdivided into sealed chambers. Watertight doors are installed between the chambers so they can still be accessed and inspected. Leak detection systems provide early warnings if anything goes wrong. And piping is pre-installed with pumps on standby, so if a leak develops, the chambers can be pumped dry before disaster strikes. The concrete recipe itself gets extra attention. Specialized mixes reduce shrinkage, improve water resistance, and resist abrasion. Even temperature control during curing matters. 
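As an aside, the neutral-buoyancy rule of thumb from a moment ago is easy to check numerically. Here's a minimal sketch, assuming a typical specific gravity of about 2.4 for normal-weight concrete; the exact number varies by mix, which is part of why the rule is only approximate:

```python
# Check the rule of thumb: for a hollow concrete cube that just barely
# floats (neutral buoyancy when fully submerged), the wall thickness in
# inches roughly equals the outer dimension in feet.
# Assumption: concrete specific gravity ~2.4 (typical normal-weight mix).

SG_CONCRETE = 2.4

def neutral_wall_thickness_in(outer_ft: float) -> float:
    """Wall thickness (inches) for neutral buoyancy of a hollow cube."""
    # Neutral buoyancy: shell weight equals the weight of displaced water:
    #   SG * (a^3 - (a - 2t)^3) = a^3   =>   (a - 2t) = a * (1 - 1/SG)^(1/3)
    inner_ft = outer_ft * (1 - 1 / SG_CONCRETE) ** (1 / 3)
    t_ft = (outer_ft - inner_ft) / 2
    return t_ft * 12  # convert feet to inches

for a in (6, 12, 24):
    print(f"{a}-ft cube -> {neutral_wall_thickness_in(a):.1f}-inch walls")
```

Running this gives wall thicknesses within a couple percent of the "inches equal feet" rule, since (1 − (1 − 1/2.4)^(1/3))/2 works out to almost exactly 1/12.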
For the replacement of the Evergreen Point Bridge, contractors embedded heating pipes in the base slabs of the pontoons, allowing them to match the temperature of the walls as they were cast. This enabled the entire structure to cool down at a uniform rate, reducing thermal stresses that could lead to cracking. There were also errors during construction, though. A flaw in the post-tensioning system led to millions of dollars in change orders halfway through construction and delayed the project significantly while they worked out a repair. But there’s a good reason why they were so careful to get the designs right on that project. Of the four floating bridges in Washington state, two of them have sunk. In February 1979, a severe storm caused the western half of the Hood Canal Bridge to lose its buoyancy. Investigations revealed that open hatches allowed rain and waves to blow in, slowly filling the pontoons and ultimately leading to the western half of the bridge sinking. The DOT had to establish a temporary ferry service across the canal for nearly four years while the western span was rebuilt. Then, in 1990, it happened again. This time, the failure occurred during rehabilitation work on the Lacey V. Murrow Bridge while it was closed. Contractors were using hydrodemolition, high-pressure water jets, to remove old concrete from the road deck. Because the water was considered contaminated, it had to be stored rather than released into Lake Washington. Engineers calculated that the pontoon chambers could hold the runoff safely. To accommodate that, they removed the watertight doors that normally separated the internal compartments. But, when a storm hit over Thanksgiving weekend, water flooded into the open chambers. The bridge partially sank, severing cables on the adjacent Hadley Bridge and delaying the project by more than a year - a potent reminder that even small design or operational oversights can have major consequences on this type of structure. 
And we still have a lot to learn. Recently, Sound Transit began testing light rail trains on the Homer Hadley Bridge, introducing a whole new set of engineering puzzles. One is electricity. With power running through the rails, there was concern about stray currents damaging the bridge. To prevent this, the track is mounted on insulated blocks, with drip caps to prevent water from creating a conductive path. And then there’s the bridge movement. Unlike typical bridges, a floating bridge can roll, pitch, and yaw with weather, lake level, and traffic loads. The joints between the fixed shoreline and the bridge have to be able to accommodate movement. It’s usually not an issue for cars, trucks, bikes, or pedestrians, but trains require very precise track alignment. Engineers had to develop an innovative “track bridge” system. It uses specialized bearings to distribute every kind of movement over a longer distance, keeping tracks aligned even as the floating structure shifts beneath it. Testing in June went well, but there’s more to be done before you can ride the Link light rail across a floating highway. If floating bridges are the present, floating tunnels might be the future. I talked about immersed tube tunnels in a previous video. They’re used around the world, made by lowering precast sections to the seafloor and connecting them underwater. But what if, instead of resting on the bottom, those tunnels floated in the water column? It should be possible to suspend a tunnel with negative buoyancy using surface pontoons or even tether one with positive buoyancy to the bottom using anchors. In deep water, this could dramatically shorten tunnel lengths, reduce excavation costs, and minimize environmental impacts. Norway has actually proposed such a tunnel across a fjord on its western coast, a project that, if realized, would be the first of its kind. Like floating bridges before it, this tunnel will face a long list of unknowns. 
But that’s the essence of engineering: meeting each challenge with solutions tailored to a specific place and need. There aren’t many locations where floating infrastructure makes sense. The conditions have to be just right - calm waters, minimal ice, manageable tides. But where the conditions do allow, floating bridges and their hopefully future descendants open up new possibilities for connection, mobility, and engineering.
[Note that this article is a transcript of the video embedded above.] There’s a new trend in high-rise building design. Maybe you’ve seen this in your city. The best lots are all taken, so developers are stretching the limits to make use of space that isn’t always ideal for skyscrapers. They’re not necessarily taller than buildings of the past, but they are a lot more slender. “Pencil tower” is the term generally used to describe buildings that have a slenderness ratio of more than around 10 to 1, height to width. A lot of popular discussion around skyscrapers is about how tall we can build them. Eventually, you can get so tall that there are no materials strong enough to support the weight. But, pencil towers are the perfect case study in why strength isn’t the only design criterion used in structural engineering. Of course, we don’t want our buildings to fall down, but there’s other stuff we don’t want them to do, too, including flex and sway in the wind. In engineering, this concept is called the serviceability limit state, and it’s an entirely separate consideration from strength. Even if moderate loads don’t cause a structure to fail, the movement they cause can lead to windows breaking, tiles cracking, accelerated fatigue of the structure, and, of course, people on the top floors losing their lunch from disorientation and discomfort. So, limiting wind-induced motions is a major part of high-rise design and, in fact, can be such a driving factor in the engineering of the building that strength is a secondary consideration. Making a building stiffer is the obvious solution. But adding stiffness requires larger columns and beams, and those subtract valuable space within the building itself. Another option is to augment a building’s aerodynamic performance, reducing the loads that winds impose. But that too can compromise the expensive floorspace within. So many engineers are relying on another creative way to limit the vibrations of tall buildings. 
And of course, I built a model in the garage to show you how this works. I’m Grady, and this is Practical Engineering. One of the very first topics I ever covered on this channel was tuned mass dampers. These are mechanisms that use a large, solid mass to counteract motion in all kinds of structures, dissipating the energy through friction or hydraulics, like the shock absorbers in vehicles. Probably the most famous of these is in the Taipei 101 building. At the top of the tower is a massive steel pendulum, and instead of hiding it away in a mechanical floor, they opened it to visitors, even giving the damper its own mascot. But, mass dampers have a major limitation because of those mechanical parts. The complex springs, dampers, and bearings need regular maintenance, and they are custom-built. That gets pretty expensive. So, what if we could simplify the device? This is my garage-built high-rise. It’s not going to hold many conference room meetings, but it does do a good job swaying from side to side, just like an actual skyscraper. And I built a little tank to go on top here. The technical name for this tank is a tuned liquid column damper, and I can show you how it works. Let’s try it with no water first. Using my digitally calibrated finger, I push the tower over by a prescribed distance, and you can see this would not be a very fun ride. There is some natural damping, but the oscillation goes on for quite a while before the motion stops. Now, let’s put some water in the tank. With the power of movie magic, I can put these side by side so you can really get a sense of the difference. By the way, nearly all of the parts for this demonstration were provided by my friends at SendCutSend. I don’t have a milling machine or laser cutter, so this is a really nice option for getting customized parts made from basically any material - aluminum, steel, acrylic - that are ready to assemble. 
Instead of complex mechanical devices, liquid column dampers dissipate energy through the movement of water. The liquid in the tank is both the mass and the damper. This works like a pendulum where the fluid oscillates between two columns. Normally, there’s an orifice between the two columns that creates the damping through friction loss as water flows from one side to the other. To make this demo a little simpler, I just put lids on the columns with small holes. I actually bought a fancy air valve to make this adjustable, but it didn’t allow quite enough airflow. So instead, I simplified with a piece of tape. Very technical. Energy transferred to the water through the building is dissipated by the friction of the air as it moves in and out of the columns. And you can even hear this as it happens. Any supplemental damping system starts with a design criterion. This varies around the world, but in the US, this is probability-based. We generally require that peak accelerations with a 1-in-10 chance of being exceeded in a given year be limited to 15-18 milli-gs in residential buildings and 20-25 milli-gs in offices. For reference, the lateral acceleration for highway curve design is usually capped at 100 milli-gs, so the design criterion for buildings is between a fourth and a sixth of that. I think that makes intuitive sense. You don’t want to feel like you’re navigating a highway curve while you sit at your desk at work. It’s helpful to think of these systems in a simplified way. This is the most basic representation: a spring, a damper, and mass on a cart. We know the mass of the building. We can estimate its stiffness. And the building itself has some intrinsic damping, but usually not much. If we add the damping system onto the cart, it’s basically just the same thing at a smaller scale, and the design process is really just choosing the mass and damping systems for the remaining pieces of this puzzle to achieve the design goal. 
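That cart picture can be played out numerically. Below is a toy sketch of a two-degree-of-freedom model: a lightly damped main mass (the building) with a small tuned absorber riding on it, stepped forward with simple semi-implicit integration. Every number here is invented for illustration - a 1.3 Hz structure, 2 percent intrinsic damping, a damper mass of 2 percent of the building's - and it is not a real design calculation:

```python
import math

# Toy two-degree-of-freedom model of a building with a tuned damper.
# All values are invented for illustration, not real design numbers.

def simulate(with_damper: bool, t_end: float = 30.0, dt: float = 0.001) -> float:
    M = 1.0                                  # building mass (normalized)
    K = M * (2 * math.pi * 1.3) ** 2         # stiffness for a 1.3 Hz structure
    C = 0.02 * 2 * math.sqrt(K * M)          # ~2% intrinsic damping ratio
    m = 0.02 * M                             # damper mass: 2% of building mass
    k = m * K / M                            # damper tuned to the same frequency
    c = 0.10 * 2 * math.sqrt(k * m)          # damper's own energy dissipation
    x, v = 0.1, 0.0                          # building starts displaced, at rest
    y, w = 0.0, 0.0                          # damper starts at rest
    peak_late = 0.0
    for i in range(int(t_end / dt)):
        # Force the damper exerts back on the building (zero if absent).
        coupling = (k * (y - x) + c * (w - v)) if with_damper else 0.0
        ax = (-K * x - C * v + coupling) / M
        ay = (-k * (y - x) - c * (w - v)) / m if with_damper else 0.0
        v += ax * dt; x += v * dt            # semi-implicit Euler update
        w += ay * dt; y += w * dt
        if i * dt > 20.0:                    # ringing left after 20 seconds
            peak_late = max(peak_late, abs(x))
    return peak_late

print("late peak, no damper:  ", simulate(False))
print("late peak, with damper:", simulate(True))
```

The free vibration dies out far faster with the absorber attached, which is the whole point: the design process is choosing m, k, and c so the residual motion stays under the acceleration criterion.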
The mass of liquid dampers is usually somewhere between half a percent and two percent of the building’s total weight. The damping is related to the water’s ability to dissipate energy. And the spring needs to be tuned to the building. All buildings vibrate at a natural frequency related to their height and stiffness. Think of it like a big tuning fork full of offices or condos. I can estimate my model’s natural frequency by timing the number of oscillations in a given time interval. It’s about 1.3 hertz or cycles per second. In an ideal tuned damper, the oscillation of the damping system matches that of the building. So tuning the frequency of the damper is an important piece of the puzzle. For a tuned liquid column damper, the tuning mostly comes from the length of the liquid flow path. A longer path results in a lower frequency. The compression of the air above the column in my demo affects this too, and some types of dampers actually take advantage of that phenomenon. I got the best tuning when the liquid level was about halfway up the columns. The orifice has less of an effect on frequency and is used mostly to balance the amount of damping versus the volume of liquid that flows through each cycle. In my model, with one of the holes completely closed off, you can see the water doesn’t move, and you get minimal damping. With the tape mostly covering the hole, you get the most frictional loss, but not all the fluid flows from one side to the other each cycle. When I covered about half of one hole, I got the full fluid flow and the best damping performance. The benefit of a tuned column damper is that it doesn’t take up a lot of space. And because the fluid movement is confined, they’re fairly predictable in behavior. So, these are used in quite a few skyscrapers, including the Random House Tower in Manhattan, One Wall Center in Vancouver (which actually has many walls), and Comcast Center in Philadelphia. But, tuned liquid column dampers have a few downsides. 
One is that they really only work for flexible structures, like my demo. Just like in a pendulum, the longer the flow path in a column damper, the lower the frequency of the oscillation. For stiffer buildings with higher natural frequencies, tuning requires a very short liquid column, which limits the mass and damping capability to a point where you don’t get much benefit. The other thing is that this is still kind of a complex device with intricate shapes and a custom orifice between the two columns. So, we can get even simpler. This is my model tuned sloshing damper, and it’s about as simple as a damper can get. I put a weight inside the empty tank to make a fair comparison, and we can put it side by side with water in the tank to see how it works. As you can see, sloshing dampers dissipate energy by… sloshing. Again, the water is both the mass and the damper. If you tune it just right, the sloshing happens perfectly out of phase of the motion of the building, reducing the magnitude of the movement and acceleration. And you can see why this might be a little cheaper to build - it’s basically just a swimming pool - four concrete walls, a floor, and some water. There’s just not that much to it. But the simplicity of construction hides the complexity of design. Like a column damper, the frequency of a sloshing damper can be tuned, first by the length of the tank. Just as lengthening the vibrating portion of a guitar string lowers its note, lengthening the tank lowers its sloshing frequency. That makes sense - it takes longer for the wave to get from one side to the other. But you can also adjust the depth. Waves move slower in shallower water and faster in deeper water. Watch what happens when I overfill the tank. The initial wave starts on the left as the building goes right. It reaches the right side just as the building starts moving left. That’s what we want; it’s counteracting the motion. 
But then it makes it back to the left before the building starts moving right. It’s actually kind of amplifying the motion, like pushing a kid on a swing. Pretty soon after that, the wave and the building start moving in phase, so there’s pretty much no damping at all. Compare it to the more properly tuned example where most of the wave motion is counteracting the building motion as it sways back and forth. You can see in my demo that a lot of the energy dissipation comes from the breaking waves as they crash against the sides of the tank. That is a pretty complicated phenomenon to predict, and it’s highly dependent on how big the waves are. And even with the level pretty well tuned to the frequency of the building, you can see there’s a lot of complexity in the motion with multiple modes of waves, and not all of them acting against the motion of the building. So, instead of relying on breaking waves, most sloshing dampers use flow obstructions like screens, columns, or baffles. I got a few different options cut out of acrylic so we can try this out. These baffles add drag, increasing the energy dissipation with the water, usually without changing the sloshing frequency. Here’s a side-by-side comparison of the performance without a baffle and with one. You can see that the improvement is pretty dramatic. The motion is more controlled and the behavior is more linear, making this much simpler to predict during the design phase. It’s kind of the best of both worlds since you get damping from the sloshing and the drag of the water passing through the screen. Almost all the motion is stopped in this demo after only three oscillations. I was pretty impressed with this. Here’s all three of the baffle runs side by side. Actually, the one with the smallest holes worked the best in my demo, but deciding the configuration of these baffles is a big challenge in the engineering of these systems because you can’t really just test out a bunch of options at full scale. 
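The tuning relationships described above can be written down compactly. For an idealized U-shaped column damper, the liquid oscillates like a pendulum with frequency (1/2π)·√(2g/L), where L is the total liquid path length; for a rectangular sloshing tank, linear wave theory gives (1/2π)·√((πg/L)·tanh(πh/L)) for the first mode, where L is tank length and h is water depth. Here's a sketch with arbitrary dimensions, just to show the trends; real designs still get validated with scale models or simulation because of the nonlinear effects described above:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def tlcd_frequency_hz(liquid_path_m: float) -> float:
    """Natural frequency of an idealized tuned liquid column damper.

    The liquid in a U-tube oscillates like a pendulum with angular
    frequency sqrt(2g/L), where L is the total liquid path length.
    """
    return math.sqrt(2 * G / liquid_path_m) / (2 * math.pi)

def path_length_for(freq_hz: float) -> float:
    """Invert the formula: liquid path length needed for a target frequency."""
    return 2 * G / (2 * math.pi * freq_hz) ** 2

def sloshing_frequency_hz(tank_length_m: float, depth_m: float) -> float:
    """Fundamental sloshing frequency of water in a rectangular tank.

    Linear wave theory gives omega^2 = (pi*g/L) * tanh(pi*h/L) for the
    first sloshing mode, where L is tank length and h is water depth.
    """
    L, h = tank_length_m, depth_m
    omega = math.sqrt((math.pi * G / L) * math.tanh(math.pi * h / L))
    return omega / (2 * math.pi)

# A longer liquid path means a lower column-damper frequency:
for L in (1.0, 4.0, 16.0):
    print(f"column: L = {L:5.1f} m -> f = {tlcd_frequency_hz(L):.2f} Hz")

# A longer tank (same depth) and shallower water (same tank) both
# lower the sloshing frequency:
for L in (2.0, 4.0, 8.0):
    print(f"tank: L = {L} m, h = 0.5 m -> f = {sloshing_frequency_hz(L, 0.5):.3f} Hz")
for h in (0.25, 0.5, 1.0):
    print(f"tank: L = 4 m, h = {h} m -> f = {sloshing_frequency_hz(4.0, h):.3f} Hz")
```

Inverting the column formula also shows why stiff buildings are a poor fit: hitting a high natural frequency demands a very short liquid path, and with it very little water mass.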
Devices like this are in service in quite a few high-rise buildings, including Princess Tower in Dubai, and the Museum Tower in Dallas. With no moving parts and very little maintenance except occasionally topping it off to keep the water at the correct level, you can see how it would be easy to choose a sloshing damper for a new high-rise project. But there are some disadvantages. One is volumetric efficiency. You can see that not all the water in the tank is mobilized, especially for smaller movements, which means not all the water is contributing to the damping. The other is non-linearity. The amount of damping changes depending on the magnitude of the movement since drag is related to velocity squared. And even the frequency of the damper isn’t constant; it can change with the wave amplitude as well because of the breaking waves. So you might get good performance at the design level, but not so much for slower winds. Dampers aren’t just used in buildings. Bridges also take advantage of these clever devices, especially on the decks of pedestrian bridges and the towers of long-span bridges. This also happens at a grand scale between the Earth and moon. Tidal bulges in the oceans created by the moon’s tug on Earth dissipate energy through friction and turbulence, which is a big part of why our planet’s rotation is slowing over time. Days used to be a lot shorter when the Earth was young, but we have a planet-scale liquid damper constantly dissipating our rotational energy. But whether it’s bridges or buildings, these dampers usually don’t work perfectly right at the start. Vibrations are complicated. They’re very hard to predict, even with modern tools like simulation software and scale physical models. So, all dampers have to go through a commissioning process. Usually this involves installing accelerometers once construction is nearing completion to measure the structure’s actual natural frequency. 
The tuning of tuned dampers doesn’t just happen during the design phase; you want some adjustability after construction to make sure they match the structure’s natural frequency exactly so you get the most damping possible. For liquid dampers, that means adjusting the levels in the tanks. And in many cases, buildings might use multiple dampers tuned to slightly different frequencies to improve the performance over a range of conditions. Even in these two basic categories, there is a huge amount of variability and a lot of ongoing research to minimize the tradeoffs these systems come with. The truth is that, relatively speaking, there aren’t that many of these systems in use around the world. Each one is highly customized, and even putting them into categories can get a little tricky. There are even actively controlled liquid dampers. My tuning for the column damper works best for a single magnitude of motion, but you can see that once the swaying gets smaller, the damper isn’t doing a lot to curb it. You can imagine if I constantly adjusted the size of the orifice, I could get better performance over a broader range of unwanted motion. You can do this electronically by having sensors feed into a control system that adjusts a valve position in real time. Active systems, and the flexibility to tune a damper in general, also help deal with changes over time. If a building’s use changes, if new skyscrapers nearby change the wind conditions, or if it gets retrofits that change its natural frequency, the damping system can easily accommodate those changes. In the end, a lot of engineering decisions come down to economics. In most cases, damping is less about safety and more about comfort, which is often harder to pin down. Engineers and building owners face a balancing act between the cost of supplemental damping and the value of the space those systems take up. Tuned mass dampers are kind of household names when it comes to damping. 
A few buildings like Shanghai Center and Taipei 101 have made them famous. They’re usually the most space-efficient (since steel and concrete are more dense than water). But they’re often more costly to install and maintain. Liquid dampers are the unsung heroes. They take up more space, but they’re simple and cost-effective, especially if the fire codes already require you to have a big tank of water at the top of your building anyway. Maybe someday, an architect will build one out of glass or acrylic, add some blue dye and mica powder, and put it on display as a public showcase. Until then, we’ll just have to know it’s there by feel.
Millions of people worldwide have reason to be thankful that Swedish engineer Rune Elmqvist decided not to practice medicine. Although qualified as a doctor, he chose to invent medical equipment instead. In 1949, while working at Elema-Schonander (later Siemens-Elema) in Stockholm, he applied for a patent for the Mingograph, the first inkjet printer. Its movable nozzle deposited an electrostatically controlled jet of ink droplets on a spool of paper.

Rune Elmqvist qualified to be a physician, but he devoted his career to developing medical equipment, like this galvanometer. (Photo: Håkan Elmqvist/Wikipedia)

Elmqvist demonstrated the Mingograph at the First International Congress of Cardiology in Paris in 1950. It could record physiological signals from a patient’s electrocardiogram or electroencephalogram in real time, aiding doctors in diagnosing heart and brain conditions. Eight years later, he worked with cardiac surgeon Åke Senning to develop the first fully implantable pacemaker. So whether you’re running documents through an inkjet printer or living your best life due to a pacemaker, give a nod of appreciation to the inventive Dr. Elmqvist.

The world’s first inkjet printer

Rune Elmqvist was an inquisitive person. While still a student, he invented a specialized potentiometer to measure pH and a portable multichannel electrocardiograph. In 1940, he became head of development at the Swedish medical electronics company Elema-Schonander. Before the Mingograph, electrocardiograph machines relied on a writing stylus to trace the waveform on a moving roll of paper. But friction between the stylus and the paper prevented small changes in the electrical signal from being accurately recorded. Elmqvist’s initial design was a modified oscillograph. Traditionally, an oscillograph used a mirror to reflect a beam of light (converted from the electrical signal) onto photographic film or paper. 
Elmqvist swapped out the mirror for a small, movable glass nozzle that continuously sprayed a thin stream of liquid onto a spool of paper. The electrical signal electrostatically controlled the jet.

The Mingograph was originally used to record electrocardiograms of heart patients. It soon found use in many other fields. (Photo: Siemens Healthineers Historical Institute)

By eliminating the friction of a stylus, the Mingograph (which the company marketed as the Mingograf) was able to record more detailed changes of the heartbeat. The machine had three paper-feed speeds: 10, 25, and 50 millimeters per second. The speed could be preset or changed while in operation. An analog input jack on the Mingograph could be used to take measurements from other instruments. Researchers in disciplines far afield from medicine took advantage of this input to record pressure or sound. Phoneticians used it to examine the acoustic aspects of speech, and zoologists used it to record birdsongs. Throughout the second half of the 20th century, scientists cited the Mingograph in their research papers as an instrument for their experiments. Today, the Mingograph isn’t that widely known, but the underlying technology, inkjet printing, is ubiquitous. Inkjets dominate the home printer market, and specialized printers print DNA microarrays in labs for genomics research, create electrical traces for printed circuit boards, and much more, as Phillip W. Barth and Leslie A. Field describe in their 2024 IEEE Spectrum article “Inkjets Are for More Than Just Printing.”

The world’s first implantable pacemaker

Despite the influence of the Mingograph on the evolution of printing, it is arguably not Elmqvist’s most important innovation. The Mingograph helped doctors diagnose heart conditions, but it couldn’t save a patient’s life by itself. One of Elmqvist’s other inventions could and did: the first fully implantable, rechargeable pacemaker. 
The first implantable pacemaker [left] from 1958 had batteries that needed to be recharged once a week. The 1983 pacemaker [right] was programmable, and its batteries lasted several years. (Photo: Siemens Healthineers Historical Institute)

Like many stories in the history of technology, this one was pushed into fruition at the urging of a woman, in this case Else-Marie Larsson. Else-Marie’s 43-year-old husband, Arne, suffered from scarring of his heart tissue due to a viral infection. His heart beat so slowly that he constantly lost consciousness, a condition known as Stokes-Adams syndrome. Else-Marie refused to accept his death sentence and searched for an alternative. After reading a newspaper article about an experimental implantable pacemaker being developed by Elmqvist and Senning at the Karolinska Hospital in Stockholm, she decided that her husband would be the perfect candidate to test it out, even though it had been tried only on animals up until that point. External pacemakers—that is, devices outside the body that regulated the heartbeat by applying electricity—already existed, but they were heavy, bulky, and uncomfortable. One early model plugged directly into a wall socket, so the user risked electric shock. By comparison, Elmqvist’s pacemaker was small enough to be implanted in the body and posed no shock risk. Fully encased in an epoxy resin, the disk-shaped device had a diameter of 55 mm and a thickness of 16 mm—the dimensions of the Kiwi Shoe Polish tin in which Elmqvist molded the first prototypes. It used silicon transistors to pace a pulse with an amplitude of 2 volts and duration of 1.5 milliseconds, at a rate of 70 to 80 beats per minute (the average adult heart rate). The pacemaker ran on two rechargeable 60-milliampere-hour nickel-cadmium batteries arranged in series. A silicon diode connected the batteries to a coil antenna. A 150-kilohertz radio loop antenna outside the body charged the batteries inductively through the skin. 
The charge lasted about a week, but it took 12 hours to recharge. Imagine having to stay put that long.

In 1958, over 30 years before this photo, Arne Larsson [right] received the first implantable pacemaker, developed by Rune Elmqvist [left] at Siemens-Elema. Åke Senning [center] performed the surgery. (Photo: Sjöberg Bildbyrå/ullstein bild/Getty Images)

Else-Marie’s persuasion and persistence pushed Elmqvist and Senning to move from animal tests to human trials, with Arne as their first case study. During a secret operation on 8 October 1958, Senning placed the pacemaker in Arne’s abdominal wall with two leads implanted in the myocardium, a layer of muscle in the wall of the heart. The device lasted only a few hours. But its replacement, which happened to be the only spare at the time, worked perfectly for six weeks and then off and on for several more years.

Arne Larsson lived another 43 years after his first pacemaker was implanted. Shown here are five of the pacemakers he received. (Photo: Sjöberg Bildbyrå/ullstein bild/Getty Images)

Arne Larsson clearly was happy with the improvement the pacemaker made to his quality of life, because he endured 25 more operations over his lifetime to replace each failing pacemaker with a new, improved iteration. He managed to outlive both Elmqvist and Senning, finally dying at the age of 86 on 28 December 2001. Thanks to the technological intervention of his numerous pacemakers, his heart never gave out. His cause of death was skin cancer. Today, more than a million people worldwide have pacemakers implanted each year, and an implanted device can last up to 15 years before needing to be replaced. (Some pacemakers in the 1980s used nuclear batteries, which could last even longer, but the radioactive material was problematic. See “The Unlikely Revival of Nuclear Batteries.”) Additionally, some pacemakers also incorporate a defibrillator to shock the heart back to a normal rhythm when it gets too far out of sync. 
This lifesaving device certainly has come a long way from its humble start in a shoe polish tin.

Rune Elmqvist’s legacy

Whenever I start researching the object of the month for Past Forward, I never know where the story will take me or how it might hit home. My dad lived with congestive heart failure for more than two decades and absolutely loved his pacemaker. He had a great relationship with his technician, Francois, and they worked together to fine-tune the device and maximize its benefits. And just like Arne Larsson, my dad died from an unrelated cause. An engineer to the core, he would have delighted in learning about the history of this fantastic invention. And he probably would have been tickled by the fact that the same person also invented the inkjet printer. My dad was not a fan of inkjets, but I’m sure he would have greatly admired Rune Elmqvist, who saw problems that needed solving and came up with elegantly engineered solutions.

Part of a continuing series looking at historical artifacts that embrace the boundless potential of technology. An abridged version of this article appears in the September 2025 print issue.

References

There is frustratingly little documented information about the Mingograph’s origin story or functionality other than its patent. I pieced together how it worked by reading the methodology sections of various scientific papers, such as Alf Nachemson’s 1960 article in Acta Orthopaedica Scandinavica, “Lumbar Intradiscal Pressure: Experimental Studies on Post-mortem Material”; Ingemar Hjorth’s 1970 article in the Journal of Theoretical Biology, “A Comment on Graphic Displays of Bird Sounds and Analyses With a New Device, the Melograph Mona”; and Paroo Nihalani’s 1975 article in Phonetica, “Velopharyngeal Opening in the Formation of Voiced Stops in Sindhi.” Such sources reveal how this early inkjet printer moved from cardiology into other fields.
Descriptions of Elmqvist’s pacemaker were much easier to find; Mark Nicholls’s 2007 profile “Pioneers of Cardiology: Rune Elmqvist, M.D.,” in Circulation: Journal of the American Heart Association, was the main source. Siemens also pays tribute to the pacemaker on its website; see, for example, “A Lifesaver in a Plastic Cup.”
An updated evolutionary model shows that living systems evolve in a split-and-hit-the-gas dynamic, where new lineages appear in sudden bursts rather than through a long marathon of gradual changes. The post “The Sudden Surges That Forge Evolutionary Trees” first appeared on Quanta Magazine.
Back at the dawn of the 21st century, the American Chemical Society founded a new journal, Nano Letters, to feature letter-length papers on nanoscience and nanotechnology. This coincided with the launch of the National Nanotechnology Initiative, before several other publishers put out their own nano-focused journals. For a couple of years now I’ve been an associate editor at NL, and it was a lot of fun to work with my fellow editors on putting together this roadmap, intended to give a snapshot of what we think the next quarter century might hold. I think some of my readers will get a kick out of it.