[Note that this article is a transcript of the video embedded above.] The original plan to get I-95 over the Baltimore Harbor was a double-deck bridge from Fort McHenry to Lazaretto Point. The problem with the plan was this: the bridge would have to be extremely high so that large ships could pass underneath, dwarfing and overshadowing one of the US’s most important historical landmarks. Fort McHenry famously repelled a massive barrage and attack from the British Navy in the War of 1812, and inspired what would later become the national anthem. An ugly bridge would detract from its character, and a beautiful one would compete with it. So they took the high road by building a low road and decided to go underneath the harbor instead. Rather than bore a tunnel through the soil and rock below like the Channel Tunnel, the entire thing was prefabricated in sections and installed from the water surface above - a construction technique called immersed tube tunneling. This seems kind of simple...


More from Blog - Practical Engineering

When Abandoned Mines Collapse

[Note that this article is a transcript of the video embedded above.] In December of 2024, a huge sinkhole opened up on I-80 near Wharton, New Jersey, creating massive traffic delays as crews worked to figure out what happened and get it fixed. Since then, it happened again in February 2025 and then again in March. Each time, the highway had to be shut down, creating a nightmare for commuters who had to find alternate routes. And it’s a nightmare for the DOT, too, trying to make sure this highway is safe to drive on despite it literally collapsing into the earth. From what we know so far, this is not a natural phenomenon, but one that’s human-made. It looks like all these issues were set in motion more than a century ago when the area had numerous underground iron mines. This is a really complex issue that causes problems around the world, and I built a little model mine in my garage to show you why it’s such a big deal. I’m Grady and this is Practical Engineering. We’ve been extracting material and minerals from the earth since way before anyone was writing things down. It’s probably safe to say that things started at the surface. You notice something shiny or differently colored on the side of a hill or cliff and you take it out. Over time, we built up knowledge about what materials were valuable, where they existed, and how to efficiently extract them from the earth. But, of course, there’s only so much earth at the surface. Eventually, you have to start digging. Maybe you follow a vein of gold, silver, copper, coal or sulfur down below the surface. And things start to get more complicated because now you’re in a hole. And holes are kind of dangerous. They’re dark, they fill with water, they can collapse, and they collect dangerous gases. So, in many cases, even today, it makes sense to remove the overburden - the soil and rock above the mineral or material you’re after. Mining on the surface has a lot of advantages when it comes to cost and safety. But there are situations where surface mining isn’t practical. Removing overburden is expensive, and it gets more expensive the deeper you go. It also has environmental impacts like habitat destruction and pollution of air and water. So, as technology, safety, and our understanding of soil and rock mechanics grew, so did our ability to go straight to the source and extract minerals underground. One of the major materials that drove the move to underground mining was coal. It’s usually found in horizontal formations called seams, that formed when vast volumes of paleozoic plants were buried and then crushed and heated over geologic time. At the start of the Industrial Revolution, coal quickly became a primary source of energy for steam engines, steel refining, and electricity generation. Those coal seams vary in thickness, and they vary in depth below the surface too, so many early coal mines were underground. In the early days of underground mining, there was not a lot of foresight. Some might argue that’s still true, but it was a lot more so a couple hundred years ago. Coal mining companies weren’t creating detailed maps of their mines, and even if they did, there was no central archive to send them to. And they just weren’t that concerned about the long-term stability of the mines once the resources had been extracted. All that mattered was getting coal out of the ground. Mining companies came and went, dissolved or were acquired, and over time, a lot of information about where mines existed and their condition was just lost. 
And even though many mines were in rural areas, far away from major population centers, some weren’t, and some of those rural areas became major population centers without any knowledge about what had happened underneath them decades ago. An issue that confounds the problem of mine subsidence is that in a lot of places, property ownership is split into two pieces: surface rights and mineral rights. And those rights can be owned by different people. So if you’re a homeowner, you may own the surface rights to your land, while a company owns the right to drill or mine under your property. That doesn’t give them the right to damage your property, but it does make things more complicated since you don’t always have a say in what’s happening beneath the surface. There are myriad ways to build and operate underground mines, but especially for soft rock mining, like coal, the predominant method for decades was called “room and pillar”. This is exactly what it sounds like. You excavate the ore, bringing material to the surface. But you leave columns to support the roof. The size, shape, and spacing of columns are dictated by the strength of the material. This is really important because a mine like this has major fixed costs: exploration, planning, access, ventilation, and haulage. It’s important to extract as much as possible, and every column you leave supporting the roof is valuable material you can’t recover. So, there’s often not a lot of margin in these pillars. They’re as small as the company thought they could get away with before they were finished mining. I built a little room and pillar mine in my garage. I’ll be the first to admit that this little model is not a rigorous reproduction of an actual geologic formation. My coal seam is just made of cardboard, and the bright colors are just for fun. But, I’m hoping this can help illustrate the challenges associated with this type of mine. I’ve got a little rainfall simulator set up, because water plays a big role in these processes. This first rainfall isn’t necessarily representative of real life, since it’s really just compacting the loose sand. But it does give a nice image of how subsidence works in general. You can see the surface of the ground sinking as the sand compacts into place. But you can also see that as the water reaches the mine, things start to deform. In a real mine, this is true, too. Stresses in the surrounding soil and rock redistribute over time from long-term movements, relaxation of stresses that were already built up in the materials before extraction, and from water. I ran this model for an entire day, turning the rainfall on and off to simulate a somewhat natural progression of time in the subsurface. By the end of the day, the mine hadn’t collapsed, but it was looking a great deal less stable than when it started. And that’s one big thing you can learn from this model - in a lot of cases, these issues aren’t linearly progressive. They can happen in fits and starts, like this small leak in the roof of the mine. You get a little bit of erosion of soil, but eventually, enough sand built up that it kind of healed itself, and, for a while, you can’t see any evidence of any of it at the surface. The geology essentially absorbed the sinkhole by redistributing materials and stresses so there’s no obvious sign at the surface that anything wayward is happening below. In the US, there were very few regulations on mining until the late 19th century, and even those focused primarily on the safety of the workers. 
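To put rough numbers on the pillar-sizing tradeoff described above - every pillar left behind is material you can't sell, but every pillar removed cuts the safety margin - here is a minimal back-of-the-envelope sketch using the tributary area method and one common empirical pillar strength formula (Obert-Duvall). All of the depths, unit weights, strengths, and dimensions are made-up illustrative values, not data from any real mine.

```python
# Minimal sketch: tributary-area check for a room-and-pillar layout.
# All numbers are illustrative assumptions, not taken from any real mine.

def pillar_stress(depth_m, unit_weight_kn_m3, pillar_w, room_w):
    """Average vertical stress on a square pillar by the tributary area method (kPa)."""
    overburden = unit_weight_kn_m3 * depth_m            # kPa of rock sitting above the seam
    tributary_ratio = (pillar_w + room_w) ** 2 / pillar_w ** 2
    return overburden * tributary_ratio

def pillar_strength(ucs_kpa, pillar_w, pillar_h):
    """Obert-Duvall style empirical pillar strength (kPa)."""
    return ucs_kpa * (0.778 + 0.222 * pillar_w / pillar_h)

depth = 100.0    # m below the surface (assumed)
gamma = 25.0     # kN/m^3, overburden unit weight (assumed)
ucs = 20_000.0   # kPa, strength of the seam material (assumed)
room = 6.0       # m wide rooms (assumed)
height = 2.0     # m seam thickness (assumed)

for w in (6.0, 9.0, 12.0):                               # candidate pillar widths
    stress = pillar_stress(depth, gamma, w, room)
    strength = pillar_strength(ucs, w, height)
    extraction = 1 - w**2 / (w + room) ** 2              # fraction of the seam recovered
    print(f"pillar {w:4.1f} m: FS = {strength/stress:4.2f}, extraction = {extraction:.0%}")
```

Running it shows the squeeze: the smallest pillars recover the most material and leave the thinnest margin, and that margin only erodes as water and time weaken the pillars.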
There just wasn’t that much concern about long-term stability. So as soon as material was extracted, mines were abandoned. The already iffy columns were just left alone, and no one wasted resources on additional supports or shoring. They just walked away. One thing that happens when mines are abandoned is that they flood. Without the need to work inside, the companies stop pumping out the water. I can simulate this on my model by just plugging up the drain. In a real soft rock mine, there can be minerals like gypsum and limestone that are soluble in water. Repeated cycles of drying and wetting can slowly dissolve them away. Water can also soften certain materials and soils, reducing their mechanical strength to withstand heavy loads, just like my cardboard model. And then, of course, water simply causes erosion. It can literally carry soil particles with it, again, causing voids and redistribution of stresses in the subsurface. This is footage from an old video I did demonstrating how sinkholes can form. The ways that mine subsidence propagates to the surface can vary a lot, based on the geology and depth of the mine. For collapses near the surface, you often see well-defined sinkholes where the soil directly above the mine simply falls into the void. And this is usually a sudden phenomenon. I flooded and drained my little mine a few times to demonstrate this. Accidentally flooded my little town a few times in the process, but that’s okay. You can see in my model, after flooding the mine and draining it down, there was a partial failure in the roof and a pile of sand toward the back caved in. And on the surface, you see just a small sinkhole. In 2024, a huge hole opened right in the center of a sports complex in Alton, Illinois. It was quickly determined that part of an active underground aggregate mine below the park had collapsed, leading to the sinkhole. It’s pretty characteristic of these issues. You don’t know where they’re going to happen, and you don’t know how the surface soils are going to react to what’s happening underneath. Subsidence can also look like a generalized and broader sinking and settling over a large area. You can see in my model that most of the surface still looks pretty flat, despite the fact that it started here and is now down here as the mine supports have softened and deformed. This can also be the case when mines are deeper in the ground. Even if the collapse is sudden, the subsidence is less dramatic because the geology can shift and move to redistribute the stresses. And the subsidence happens more slowly as the overburden settles into a new configuration. In all cases, the subsidence can extend laterally from the mine, so impacted areas aren’t always directly above. The deeper the mine, the wider the subsidence can be. I ran my little mine demo for quite a few cycles of wet and dry just to see how bad things would get. And I admit I used a little percussion at the end to speed things along. Let’s say this is a simulation of an earthquake on an abandoned mine. [Beat] You can see that by the end of it, this thing has basically collapsed. And take a look at the surface now. You have some defined sinkholes for sure. And you also have just generalized subsidence - sloped and wavy areas that were once level. And you can imagine the problems this can cause. Structures can easily be damaged by differential settlement. Pipes break. Foundations shift and crack. 
Even water can drain differently than before, causing ponding and even changing the course of rivers and streams for large areas. And even if there are no structures, subsidence can ruin high-value farm land, mess up roads, disrupt habitat, and more. In many cases, the company that caused all the damage is long gone. Essentially they set a ticking time bomb deep below the ground with no one knowing if or when it would go off. There’s no one to hold accountable for it, and there’s very little recourse for property owners. Typical property insurance specifically excludes damage from mine subsidence. So, in some places where this is a real threat, government-subsidized insurance programs have been put in place. Eight states in the US, those where coal mining was most extensive, have insurance pools set up. In a few of those states, it is a requirement in order to own property. The federal government in the US also collects a fee from coal mines that goes into a fund that helps cover reclamation costs of mines abandoned before 1977 when the law went into effect. That federal mining act also required modern mines to use methods to prevent subsidence, or control its effects, because this isn’t just a problem with historic abandoned mines. Some modern underground soft rock mining doesn’t use the room and pillar method but instead a process called longwall mining. Like everything in mining, there are multiple ways to do it. But here’s the basic method: Hydraulic jacks support the roof of the mine in a long line. A machine called a shearer travels along the face of the seam with cutting drums. The cut coal falls onto a conveyor and is transported to the surface. The roof supports move forward into the newly created cavity, intentionally allowing the roof behind them to collapse. It’s an incredibly efficient form of mining, and you get to take the whole seam, rather than leaving pillars behind to support the roof. But, obviously, in this method, subsidence at the surface is practically inevitable. Minimizing the harm that subsidence creates starts just by predicting its extent and magnitude. And, just looking at my model, I think you can guess that this isn’t a very easy problem to solve. Engineers use a mix of empirical information, like data from similar past mining operations, geotechnical data, simplified relationships, and in some cases detailed numerical modeling that accounts for geologic and water movement over time. But you don’t just have to predict it. You also have to measure it to see if your predictions were right. So mining companies use instruments like inclinometers and extensometers above underground mines to track how they affect the surface. I have a whole video about that kind of instrumentation if you want to learn more after this. The last part of that is reclamation - to repair or mitigate the damage that’s been done. And this can vary so much depending on where the mine is, what’s above it, and how much subsidence occurs. It can be as simple as filling and grading land that has subsided all the way to extensive structural retrofits to buildings above a mine before extraction even starts. Sinkholes are often repaired by backfilling with layers of different-sized materials, from large at the bottom to small at top. That creates a filter to keep soil from continuing to erode downward into the void. Larger voids can be filled with grout or even polyurethane foam to stabilize the ground above, reducing the chance for a future collapse. 
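For a sense of what that prediction step involves, here's the crudest kind of empirical estimate engineers might start from: a subsidence factor for how much of the extracted thickness shows up as sag at the surface, and an angle of draw for how far beyond the mined panel the effects can reach. The specific values below are assumptions for illustration only; real studies use site-specific data and far more sophisticated models.

```python
# Rough empirical sketch of subsidence extent over a mined-out panel (illustrative numbers only).
import math

depth = 200.0            # m, depth of the seam (assumed)
thickness = 2.5          # m, extracted seam thickness (assumed)
subsidence_factor = 0.7  # fraction of thickness that shows up as surface sag (assumed)
angle_of_draw = 25.0     # degrees beyond the panel edge that subsidence reaches (assumed)

max_subsidence = subsidence_factor * thickness
lateral_reach = depth * math.tan(math.radians(angle_of_draw))

print(f"max surface sag   ~ {max_subsidence:.1f} m")
print(f"influence extends ~ {lateral_reach:.0f} m beyond the edge of the panel")
```

Even with made-up inputs, the pattern matches the model mine: the deeper the workings, the wider the zone at the surface that can be affected, and much of it isn't directly above the mine at all.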
I know coal - and mining in general - can be a sensitive topic. Most of us don’t have a lot of exposure to everything that goes into obtaining the raw resources that make modern life possible. And the things we do see and hear are usually bad things like negative environmental impacts or subsidence. But I really think the story of subsidence isn’t just one of “mining is bad” but really “mining used to be bad, and now it’s a lot better, but there are still challenges to overcome.” I guess that’s the story of so many things in engineering - addressing the difficulties we used to just ignore. And this video isn’t meant to fearmonger. This is a real issue that causes real damages today, but it’s also an issue that a lot of people put a great deal of thought, effort, and ultimately resources into so that we can strike a balance between protection against damage to property and the environment and obtaining the resources that we all depend on.

When Kitty Litter Caused a Nuclear Catastrophe

[Note that this article is a transcript of the video embedded above.] Late in the night of Valentine’s Day 2014, air monitors at an underground nuclear waste repository outside Carlsbad, New Mexico, detected the release of radioactive elements, including americium and plutonium, into the environment. Ventilation fans automatically switched on to exhaust contaminated air up through a shaft, through filters, and out to the environment above ground. When filters were checked the following morning, technicians found that they contained transuranic materials, highly radioactive particles that are not naturally found on Earth. In other words, a container of nuclear waste in the repository had been breached. The site was shut down and employees sent home, but it would be more than a year before the bizarre cause of the incident was released. I’m Grady, and this is Practical Engineering. The dangers of the development of nuclear weapons aren’t limited to mushroom clouds and doomsday scenarios. The process of creating the exotic, transuranic materials necessary to build thermonuclear weapons creates a lot of waste, which itself is uniquely hazardous. Clothes, tools, and materials used in the process may stay dangerously radioactive for thousands of years. So, a huge part of working with nuclear materials is planning how to manage waste. I try not to make predictions about the future, but I think it’s safe to say that the world will probably be a bit different in 10,000 years. More likely, it will be unimaginably different. So, ethical disposal of nuclear waste means not only protecting ourselves but also protecting whoever is here long after we are ancient memories or even forgotten altogether. It’s an engineering challenge pretty much unlike any other, and it demands some creative solutions. The Waste Isolation Pilot Plant, or WIPP, was built in the 1980s in the desert outside Carlsbad, New Mexico, a site selected for a very specific reason: salt. One of the most critical jobs for long-term permanent storage is to keep radioactive waste from entering groundwater and dispersing into the environment. So, WIPP was built inside an enormous and geologically stable formation of salt, roughly 2000 feet or 600 meters below the surface. The presence of ancient salt is an indication that groundwater doesn’t reach this area since the water would dissolve it. And the salt has another beneficial behavior: it’s mobile. Over time, the walls and ceilings of mined-out salt tend to act in a plastic manner, slowly creeping inwards to fill the void. This is ideal in the long term because it will ultimately entomb the waste at WIPP in a permanent manner. It does make things more complicated in the meantime, though, since they have to constantly work to keep the underground open during operation. This process, called “ground control,” involves techniques like drilling and installing roof bolts in epoxy to hold up the ceilings. I have an older video on that process if you want to learn more after this. The challenge in this case is that, eventually, we want the roof bolts to fail, allowing a gentle collapse of salt to fill the void because it does an important job. The salt, and just being deep underground in general, acts to shield the environment from radiation. 
In fact, a deep salt mine is such a well-shielded area that there’s an experimental laboratory located in WIPP across on the other side of the underground from the waste panels where various universities do cutting-edge physics experiments precisely because of the low radiation levels. The thousands of feet of material above the lab shield it from cosmic and solar radiation, and the salt has much lower levels of inherent radioactivity than other kinds of rock. Imagine that: a low-radiation lab inside a nuclear waste dump. Four shafts extend from the surface into the underground repository for moving people, waste, and air into and out of the facility. Room-and-pillar mining is used to excavate horizontal drifts or panels where waste is stored. Investigators were eventually able to re-enter the repository and search for the cause of the breach. They found the source in Panel 7, Room 7, the area of active disposal at the time. Pressure and heat had burst a drum, starting a fire, damaging nearby containers, and ultimately releasing radioactive materials into the air. On activation of the radiation alarm, the underground ventilation system automatically switched to filtration mode, sending air through massive HEPA filters. Interestingly, although they’re a pretty common consumer good now, High Efficiency Particulate Air, or HEPA, filters actually got their start during the Manhattan Project specifically to filter radionuclides from the air. The ventilation system at WIPP performed well, although there was some leakage past the filters, allowing a small percentage of radioactive material to bypass the filters and release directly into the atmosphere at the surface. 21 workers tested positive for low-level exposure to radioactive contamination but, thankfully, were unharmed. Both WIPP and independent testing organizations confirmed that detected levels were very low, the particles did not spread far, and were extremely unlikely to result in radiation-related health effects to workers or the public. Thankfully, the safety features at the facility worked, but it would take investigators much longer to understand what went wrong in the first place, and that involved tracing that waste barrel back to its source. It all started at the Los Alamos National Laboratory, one of the labs created as part of the 1940s Manhattan Project that first developed atomic bombs in the desert of New Mexico. The 1970s brought a renewed interest in cleaning up various Department of Energy sites. Los Alamos was tasked with recovering plutonium from residue materials left over from previous wartime and research efforts. That process involved using nitric acid to separate plutonium from uranium. Once plutonium is extracted, you’re left with nitrate solutions that get neutralized or evaporated, creating a solid waste stream that contains residual radioactive isotopes. In 1985, a volume of this waste was placed in a lead-lined 55-gallon drum along with an absorbent to soak up any moisture and put into temporary storage at Los Alamos, where it sat for years. But in the summer of 2011, the Las Conchas wildfire threatened the Los Alamos facility, coming within just a few miles of the storage area. This actual fire lit a metaphorical fire under various officials, and wheels were set into motion to get the transuranic waste safely into a long-term storage facility. In other words, ship it down the road to WIPP. 
Transporting transuranic wastes on the road from one facility to another is quite an ordeal, even when they’re only going through the New Mexican desert. There are rules preventing the transportation of ignitable, corrosive, or reactive waste, and special casks are required to minimize the risk of radiological release in the unlikely event of a crash. WIPP also had rules about how waste can be packaged in order to be placed for long-term disposal called the Waste Acceptance Criteria, which included limits on free liquids. Los Alamos concluded that barrel didn’t meet the requirements and needed to be repackaged before shipping to WIPP. But, there were concerns about which absorbent to use. Los Alamos used various absorbent materials within waste barrels over the years to minimize the amount of moisture and free liquid inside. Any time you’re mixing nuclear waste with another material, you have to be sure there won’t be any unexpected reactions. The procedure for repackaging nitrate salts required that a superabsorbent polymer be used, similar to the beads I’ve used in some of my demos, but concerns about reactivity led to meetings and investigations about whether it was the right material for the job. Ultimately, Los Alamos and their contractors concluded that the materials were incompatible and decided to make a switch. In May 2012, Los Alamos published a white paper titled “Amount of Zeolite Required to Meet the Constraints Established by the EMRTC Report RF 10-13: Application of LANL Evaporator Nitrate Salts.” In other words, “How much kitty litter should be added to radioactive waste?” The answer was about 1.2 to 1, inorganic zeolite clay to nitrate salt waste, by volume. That guidance was then translated into the actual procedures that technicians would use to repackage the waste in gloveboxes at Los Alamos. But something got lost in translation. As far as investigators could determine, here’s what happened: In a meeting in May 2012, the manager responsible for glovebox operations took personal notes about this switch in materials. Those notes were sent in an email and eventually incorporated into the written procedures: “Ensure an organic absorbent is added to the waste material at a minimum of 1.5 absorbent to 1 part waste ratio.” Did you hear that? The white paper’s requirement to use an inorganic absorbent became “...an organic absorbent” in the procedures. We’ll never know where the confusion came from, but it could have been as simple as mishearing the word in the meeting. Nonetheless, that’s what the procedure became. Contractors at Los Alamos procured a large quantity of Swheat Scoop, an organic, wheat-based cat litter, and started using it to repackage the nitrate salt wastes. Our barrel first packaged in 1985 was repackaged in December 2013 with the new kitty litter. It was tested and certified in January 2014, shipped to WIPP later that month, and placed underground. And then it blew up. The unthinkable had happened; the wrong kind of kitty litter had caused a nuclear disaster. While the nitrates are relatively unreactive with inorganic, mineral-based zeolite kitty litter that should have been used, the organic, carbon-based wheat material could undergo oxidation reactions with nitrate wastes. I think it’s also interesting to note here that the issue is a reaction that was totally unrelated to the presence of transuranic waste. It was a chemical reaction - not a nuclear reaction - that caused the problem. 
Ultimately, the direct cause of the incident was determined to be “an exothermic reaction of incompatible materials in LANL waste drum 68660 that led to thermal runaway, which resulted in over-pressurization of the drum, breach of the drum, and release of a portion of the drum’s contents (combustible gases, waste, and wheat-based absorbent) into the WIPP underground.” Of course, the root cause is deeper than that and has to do with systemic issues at Los Alamos and how they handled the repackaging of the material. The investigation report identified 12 contributing causes that, while they did not individually cause the accident, increased its likelihood or severity. These are written in a way that is pretty difficult for a non-DOE expert to parse. Take a stab at digesting contributing cause number 5: “Failure of Los Alamos Field Office (NA-LA) and the National Transuranic (TRU) Program/Carlsbad Field Office (CBFO) to ensure that the CCP [that is, the Central Characterization Program] and LANS [that is, the contractor, Los Alamos National Security] complied with Resource Conservation and Recovery Act (RCRA) requirements in the WIPP Hazardous Waste Facility Permit (HWFP) and the LANL HWFP, as well as the WIPP Waste Acceptance Criteria (WAC).” Still, as bad as it all seems, it really could have been a lot worse. In a sense, WIPP performed precisely how you’d want it to in such an event, and it’s a really good thing the barrel was in the underground when it burst. Had the same happened at Los Alamos or on the way to WIPP, things could have been much worse. Thankfully, none of the other barrels packaged in the same way experienced a thermal runaway, and they were later collected and sealed in larger containers. Regardless, the consequences of the “cat-astrophe” were severe and very expensive. The cleanup involved shutting down the WIPP facility for several years and entirely replacing the ventilation system. WIPP itself didn’t formally reopen until January of 2017, nearly three full years after the incident, with the cleanup costing about half a billion dollars. Today, WIPP remains controversial, not least because of shifting timelines and public communication. Early estimates once projected closure by 2024. Now, that date is sometime between 2050 and 2085. And events like this only add fuel to the fire. Setting aside broader debates on nuclear weapons themselves, the wastes these weapons generate are dangerous now, and they will remain dangerous for generations. WIPP has even explored ideas on how to mark the site post-closure, making sure that future generations clearly understand the enduring danger. Radioactive hazards persist long after languages and societies may have changed beyond recognition, making it essential but challenging to communicate clearly about risks. Sometimes, it’s easy to forget - amidst all the technical complexity and bureaucratic red tape that surrounds anything nuclear - that it’s just people doing the work. It’s almost unbelievable that we entrust ourselves - squishy, sometimes hapless bags of water, meat, and bones - to navigate protocols of such profound complexity needed to safely take advantage of radioactive materials. I don’t tell this story because I think we should be paralyzed by the idea of using nuclear materials - there are enormous benefits to be had in many areas of science, engineering, and medicine. 
But there are enormous costs as well, many of which we might not be aware of if we don’t make it a habit to read obscure government investigation reports. This event is a reminder that the extent of our vigilance has to match the permanence of the hazards we create.

Why Are Beach Holes So Deadly?

[Note that this article is a transcript of the video embedded above.] Even though it’s a favorite vacation destination, the beach is surprisingly dangerous. Consider the lifeguard: There aren’t that many recreational activities in our lives that have explicit staff whose only job is to keep an eye on us, make sure we stay safe, and rescue us if we get into trouble. There are just a lot of hazards on the beach. Heavy waves, rip currents, heat stress, sunburn, jellyfish stings, sharks, and even algae can threaten the safety of beachgoers. But there’s a whole other hazard, this one usually self-inflicted, that rarely makes the list of warnings, even though it takes, on average, 2-3 lives per year just in the United States. If you know me, you know I would never discourage the act of playing with soil and sand. It’s basically what I was put on this earth to do. But I do have one exception. Because just about every year, the news reports that someone was buried when a hole they dug collapsed on top of them. There’s no central database of sandhole collapse incidents, but from the numbers we do have, about twice as many people die this way as from shark attacks in the US. It might seem like common sense not to dig a big, unsupported hole at the beach and then go inside it, but sand has some really interesting geotechnical properties that can provide a false sense of security. So, let’s use some engineering and garage demonstrations to explain why. I’m Grady and this is Practical Engineering. In some ways, geotechnical engineering might as well be called slope engineering, because it’s a huge part of what geotechnical engineers do. So many aspects of our built environment rely on the stability of sloped earth. Many dams are built from soil or rock fill using embankments. Roads, highways, and bridges rely on embankments to ascend or descend smoothly. Excavations for foundations, tunnels, and other structures have to be stable for the people working inside. Mines carefully monitor slopes to make sure their workers are safe. Even protecting against natural hazards like landslides requires a strong understanding of geotechnical engineering. Because of all that, the science of slope stability is really deeply understood. There’s a well-developed professional consensus around the science of soil, how it behaves, and how to design around its limitations as a construction material. And I think a peek into that world will really help us understand this hazard of digging holes on the beach. Like many parts of engineering, analyzing the stability of a slope has two basic parts: the strengths and the loads. The job of a geotechnical engineer is to compare the two. The load, in this case, is kind of obvious: it’s just the weight of the soil itself. We can complicate that a bit by adding loads at the top of a slope, called surcharges, and no doubt surcharge loads have contributed to at least a few of these dangerous collapses from people standing at the edge of a hole. But for now, let’s keep it simple with just the soil’s own weight. On a flat surface, soils are generally stable. But when you introduce a slope, the weight of the soil above can create a shear failure. These failures often happen along a circular arc, because an arc minimizes the resisting forces in the soil while maximizing the driving forces. We can manually solve for the shear forces at any point in a soil mass, but that would be a fairly tedious engineering exercise, so most slope stability analyses use software. 
One of the simplest methods is just to let the software draw hundreds of circular arcs that represent failure planes, compute the stresses along each plane based on the weight of the soil, and then figure out if the strength of the soil is enough to withstand the stress. But what does it really mean for a soil to have strength? If you can imagine a sample of soil floating in space, and you apply a shear stress, those particles are going to slide apart from each other in the direction of the stress. The amount of force required to do it is usually expressed as an angle, and I can show you why. You may have done this simple experiment in high school physics where you drag a block along a flat surface and measure the force required to overcome the friction. If you add weight, you increase the force between the surfaces, called the normal force, which creates additional friction. The same is true with soils. The harder you press the particles of soil together, the better they are at resisting a shear force. In a simplified force diagram, we can draw a normal force and the friction, or shear strength, that results. And the angle that hypotenuse makes with the normal force is what we call the friction angle. Under certain conditions, it’s equal to the angle of repose, the steepest angle at which a soil will naturally stand. If I let sand pour out of this funnel onto the table, you can see, even as the pile gets higher, the angle of the slope of the sides never really changes. And this illustrates the complexity of slope stability really nicely. Gravity is what holds the particles together, creating friction, but it’s also what pulls them apart. And the angle of repose is kind of a line between gravity’s stabilizing and destabilizing effects on the soil. But things get more complicated when you add water to the mix. Soil particles, like all things that take up space, have buoyancy. Just like lifting a weight under water is easier, soil particles seem to weigh less when they’re saturated, so they have less friction between them. I can demonstrate this pretty easily by just moving my angle of repose setup to a water tank. It’s a subtle difference, but the angle of repose has gone down underwater. It’s just because the particles’ effective weight goes down, so the shear strength of the soil mass goes down too. And this doesn’t just happen under lakes and oceans. Soil holds water - I’ve covered a lot of topics on groundwater if you want to learn more. There’s this concept of the “water table,” below which the soils are saturated, and they behave in the same way as my little demonstration. The water between the particles, called “pore water,” exerts pressure, pushing them away from one another and reducing the friction between them. Shear strength usually goes down for saturated soils. But, if you’ve played with sand, you might be thinking: “This doesn’t really track with my intuitions.” When you build a sand castle, you know, the dry sand falls apart, and the wet sand holds together. So let’s dive a little deeper. Friction actually isn’t the only factor that contributes to shear strength in a soil. For example, I can try to shear this clay, and there’s some resistance there, even though there is no confining force pushing the particles together. In finer-grained soils like clay, the particles themselves have molecular-level attractions that make them, basically, sticky. Geotechnical engineers call this cohesion. And it’s where sand gets a little sneaky. 
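To put some numbers on the friction angle idea, here is a deliberately simplified "infinite slope" check - far cruder than the circular-arc searches the software performs, but it captures the punchline: a dry, cohesionless sand slope stands as long as it's flatter than the friction angle, and seepage through saturated sand can roughly halve that margin. The friction angle, slope angle, and unit weights below are illustrative assumptions.

```python
# Simplified "infinite slope" stability check for cohesionless sand.
# First the dry case, then the textbook case with the water table at the
# surface and seepage flowing parallel to the slope. Numbers are assumptions.
import math

phi = 33.0          # friction angle of the sand, degrees (assumed)
slope = 30.0        # slope angle of the hole's side, degrees (assumed)
gamma_sat = 20.0    # saturated unit weight, kN/m^3 (assumed)
gamma_w = 9.81      # unit weight of water, kN/m^3

fs_dry = math.tan(math.radians(phi)) / math.tan(math.radians(slope))

# With seepage parallel to the slope, only the buoyant weight generates friction,
# but the full saturated weight still drives the sliding.
gamma_buoyant = gamma_sat - gamma_w
fs_seepage = (gamma_buoyant / gamma_sat) * fs_dry

print(f"dry sand     FS = {fs_dry:.2f}")      # ~1.12: barely hanging on
print(f"with seepage FS = {fs_seepage:.2f}")  # ~0.57: the slope can't stand
```

Notice that the dry factor of safety hits exactly 1.0 when the slope angle equals the friction angle - which is why the pile in the funnel demo always settles at the angle of repose.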
Water pressure in the pores between particles can push them away from each other, but it can also do the opposite. In this demo, I have some dry sand in a container with a riser pipe to show the water table connected to the side. And I’ve dyed my water black to make it easier to see. When I pour the water into the riser, what do you think is going to happen? Will the water table in the soil be higher, lower, or exactly the same as the level in the riser? Let’s try it out. Pretty much right away, you can see what happens. The sand essentially sucks the water out of the riser, lifting it higher than the level outside the sand. If I let this settle out for a while, you can see that there’s a pretty big difference in levels, and this is largely due to capillary action. Just like a paper towel, water wicks up into the sand against the force of gravity. This capillary action actually creates negative pressure within the soil (compared to the ambient air pressure). In other words, it pulls the particles against each other, increasing the strength of the soil. It basically gives the sand cohesion, additional shear strength that doesn’t require any confining pressure. And again, if you’ve played with sand, you know there’s a sweet spot when it comes to water. Too dry, and it won’t hold together. Too wet, same thing. But if there’s just enough water, you get this strengthening effect. However, unlike clay that has real cohesion, that suction pressure can be temporary. And it’s not the only factor that makes sand tricky. The shear strength of sand also depends on how well-packed those particles are. Beach sand is usually well-consolidated because of the constant crashing waves. Let’s zoom in on that a bit. If the particles are packed together, they essentially lock together. You can see that to shear them apart doesn’t just look like a sliding motion, but also a slight expansion in volume. Engineers call this dilatancy, and you don’t need a microscope to see it. In fact, you’ve probably noticed this walking around on the beach, especially when the water table is close to the surface. Even a small amount of movement causes the sand to expand, and it’s easy to see like this because it expands above the surface of the water. The practical result of this dilatant property is that sand gets stronger as it moves, but only up to a point. Once the sand expands enough that the particles are no longer interlocked together, there’s a lot less friction between them. If you plot movement, called strain, against shear strength, you get a peak and then a sudden loss of strength. Hopefully you’re starting to see how all this material science adds up to a real problem. The shear strength of a soil, basically its ability to avoid collapse, is not an inherent property: It depends on a lot of factors; It can change pretty quickly; And this behavior is not really intuitive. Most of us don’t have a ton of experience with excavations. That’s part of the reason it’s so fun to go on the beach and dig a hole in the first place. We just don’t get to excavate that much in our everyday lives. So, at least for a lot of us, it’s just a natural instinct to do some recreational digging. You excavate a small hole. It’s fun. It’s interesting. The wet sand is holding up around the edges, so you dig deeper. Some people give up after the novelty wears off. Some get their friends or their kids involved to keep going. Eventually, the hole gets big enough that you have to get inside it to keep digging. 
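One way to get a feel for how much work that suction is doing is to treat it as a small apparent cohesion c and plug it into the classic Rankine estimate for how tall an unsupported vertical cut can stand, H_crit = (4c/γ)·tan(45° + φ/2), where γ is the unit weight and φ the friction angle. The cohesion values below are purely illustrative assumptions, but the trend is the point: when a wave or a dropping tide takes the suction away, the height a vertical face can stand drops to essentially nothing.

```python
# Sketch: how a little suction-induced "apparent cohesion" lets moist sand stand
# in a vertical face at all, and what happens when the water content changes.
# Uses the classic Rankine estimate H_crit = (4*c/gamma) * tan(45 + phi/2).
# All numbers are illustrative assumptions.
import math

phi = 33.0      # friction angle, degrees (assumed)
gamma = 18.0    # unit weight of the moist sand, kN/m^3 (assumed)

def critical_height(cohesion_kpa):
    """Theoretical maximum height of an unsupported vertical face in a c-phi soil (m)."""
    return (4.0 * cohesion_kpa / gamma) * math.tan(math.radians(45.0 + phi / 2.0))

# Apparent cohesion while the sand is damp, then as it gets too wet or too dry.
for c in (3.0, 1.0, 0.0):
    print(f"apparent cohesion {c:.0f} kPa -> vertical face stands to ~{critical_height(c):.1f} m")
```

With these made-up values, a damp face stands a bit over a meter - roughly waist deep - and a soaked or bone-dry one stands essentially zero, which is exactly the false sense of security the hole gives you.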
With the suction pressure from the water and the shear strengthening through dilatancy, the walls have been holding the entire time, so there’s no reason to assume that they won’t just keep holding. But inside the surrounding sand, things are changing. Sand is permeable to water, meaning water moves through it pretty freely. It doesn’t take a big change to upset that delicate balance of wetness that gives sand its stability. The tide could be going out, lowering the water table and thus drying out the soil at the surface. Alternatively, a wave or the tide could add water to the surface sand, reducing the suction pressure. At the same time, tiny movements within the slopes are strengthening the sand as it tries to dilate in volume. But each little movement pushes toward that peak strength, after which it suddenly goes away. We call this a brittle failure because there’s little deformation to warn you that there’s going to be a collapse. It happens suddenly, and if you happen to be inside a deep hole when it does, you might be just fine, like our little friend here, but if a bigger section of the wall collapses, your chance of surviving is slim. Soil is heavy. Sand has about two-and-a-half times the density of water. It just doesn’t take that much of it to trap a person. This is not just something that happens to people on vacations, by the way. Collapsing trenches and excavations are one of the most common causes of fatal construction incidents. In fact, if you live in a country with workplace health and safety laws, it’s pretty much guaranteed that within those laws are rules about working in trenches and excavations. In the US, OSHA has a detailed set of guidelines on how to stay safe when working at the bottom of a hole, including how steep slopes can be depending on the types of soil, and the devices used to shore up an excavation to keep it from collapsing while people are inside. And for certain circumstances where the risks get high enough or the excavation doesn’t fit neatly into these simplified categories, they require that a professional engineer be involved. So does all this mean that anyone who’s not an engineer just shouldn’t dig holes at the beach? If you know me, you know I would never agree with that. I don’t want to come off too earnest here, but we learn through interaction. Soil and rock mechanics are incredibly important to every part of the built environment, and I think everyone should have a chance to play with sand, to get muddy and dirty, to engage and connect and commune with the stuff on which everything gets built. So, by all means, dig holes at the beach. Just don’t dig them so deep. The typical recommendation I see is to avoid going in a hole deeper than your knees. That’s pretty conservative. If you have kids with you, it’s really not much at all. If you want to follow OSHA guidelines, you can go a little bigger: up to 20 feet (or 6 meters) in depth, as long as you slope the sides of your hole by one-and-a-half to one, or about 34 degrees above horizontal. You know, ultimately you have to decide what’s safe for you and your family. My point is that this doesn’t have to be a hazard if you use a little engineering prudence. And I hope understanding some of the sneaky behaviors of beach sand can help you delight in the primitive joy of digging a big hole without putting your life at risk in the process.
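If you want to sanity-check the numbers in that guidance, the geometry is simple enough to work out directly. Here is a quick sketch; the hole dimensions are just an example, not a recommendation.

```python
# Quick geometry check on the sloping numbers mentioned above (illustrative only).
import math

slope_h, slope_v = 1.5, 1.0   # a 1.5-horizontal-to-1-vertical slope
angle = math.degrees(math.atan(slope_v / slope_h))
print(f"1.5H:1V slope = {angle:.1f} degrees above horizontal")   # ~33.7, i.e. "about 34"

depth = 1.5          # m, a hole a bit past knee depth (assumed)
bottom_width = 1.0   # m of flat floor to stand on (assumed)
top_width = bottom_width + 2 * (slope_h / slope_v) * depth
print(f"a {depth} m deep hole with sloped sides needs a ~{top_width:.1f} m wide opening")
```

That's the practical catch: sloping the sides properly means even a modest hole takes up a surprisingly large footprint of beach, which is exactly why people are tempted to dig straight down instead.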

This Bridge’s Bizarre Design Nearly Caused It To Collapse

[Note that this article is a transcript of the video embedded above.] This is the Washington Bridge that carries I-195 over the Seekonk River in Providence, Rhode Island… or at least, it was the Washington Bridge. You can see that the westbound span is just about completely gone. In July of 2023, that part of the bridge, although marked as being in poor condition, received a passing inspection. Six months later, the bridge was abruptly closed to traffic because it was in imminent danger of collapse. Now, the whole thing has nearly been torn down as part of an emergency replacement project. Rhode Islanders who need to travel between Providence and East Providence have suffered through more than a year of traffic delays from the loss of this important link, and business owners have seen major downturns. If you live in the area, you’re probably tired of seeing it in the news. But it hasn’t had a lot of coverage outside the state. And I think it’s a really fascinating case study in the complexities of designing, building, and taking care of bridges, including some lessons that apply to designing just about anything. I’m Grady, and this is Practical Engineering. The original bridge over the Seekonk River was finished in 1930. Part of that old bridge now serves as a pedestrian crossing and bike link. It’s a nice bridge: concrete and stone multiple arch spans give it a graceful look over the river. In 1959, when I-195 expanded to include this road, it quickly filled with traffic. The old bridge just wasn’t big enough, at least according to the standards of the time. So, a new bridge to carry the westbound lanes was planned, with the federal government picking up most of the bill. Since the feds were paying, they wanted a simple, inexpensive steel girder bridge. But Rhode Island refused. The state didn’t want a plain, stark, utilitarian structure right next to their historic and elegant multi-arch bridge. It took years to come to an agreement, but eventually, they met in the middle with the Federal Bureau of Roads agreeing to include false concrete arch facades between each of the exterior piers, matching the style of the eastbound bridge. But by that time, the field of bridge engineering had shifted. The Interstate Highway system in the US started in 1956 with the idea of an interconnected freeway system with no at-grade intersections. Every road and rail crossing required grade separation, and that meant we started building a lot of bridges. We’re up to around 55,000 today, and that’s just on the interstates. With steel in short supply, a new kind of bridge girder was coming into vogue made from pre-stressed concrete. In simple reinforced concrete structures, the rebar is just cast inside. It takes some deflection of the concrete before the steel can take on any of the internal stress within the member. For beams, the amount of deflection needed to develop the strength of the steel often leads to cracks, which eventually lead to corrosion as water reaches the steel. But if you can load up the steel before the beam is put into service, in other words, “prestress” it, you can stiffen the beam, making it less likely to crack under load. I have a whole video going into more detail about prestressed concrete if you want to learn more after this. If you’ve already seen it, then you know there are two main ways to do it. In some structures, the reinforcing steel is tensioned before the concrete is cast. 
This “pre-tensioning” is usually done in facilities with specialized equipment that can apply and hold those extreme forces while the concrete cures. Alternatively, you can do it on-site by running steel tendons through hollow tubes in the concrete. Once it’s cured, jacks are used to stress the tendons, a process called post-tensioning. The engineers for the westbound lanes of the Washington Bridge took advantage of this relatively new construction method, using both post-tensioned and pre-tensioned beams. While most of the grade separation bridges on interstate highways were rigidly standardized, this was a bridge unlike practically any other in the United States. It had 18 spans of varying structural types. Except for the navigation span for boats that used steel girders, the rest of the bridge passing over the water used cantilever beams. Rather than having the end of the beam sit on the pier like most beam bridges do, called simply supported, the primary beams in the Washington Bridge were supported at their center, cantilevering out in both directions. The pre-tensioned drop-in concrete girders were suspended between the cantilever arms. Those cantilever beams were post-tensioned structural members. Five steel cables were run in hollow ducts from one end to the other, then tensioned to roughly 200,000 pounds (nearly a meganewton each), and locked off at anchorages on both ends. Then the ducts were filled with grout to bond the strands to the rest of the concrete member and protect them against corrosion. Most of the cantilever beams in the Washington Bridge were balanced, meaning they had roughly the same load on either side. But at the west abutment and navigation span, that wasn’t true. You can see that these beams support a drop-in girder on one end, but the steel girders over the navigation span are simply-supported on their piers. Since the cantilever beams weren’t balanced, designers needed an alternative way to keep them from rotating atop the pier. So steel rods called tie-downs were installed on each of the unbalanced cantilevers. In December 2023, the now 57-year-old westbound bridge was in the middle of a 64-million-dollar construction project to repair damaged concrete, widen the deck for another lane of traffic, and add a new off-ramp, with the goal of extending the bridge’s life by 25 years. One of the engineers involved in that project was on site and noticed something unusual under the navigation span. Some of the tie-down rods on the unbalanced cantilevers were completely broken. The finding was serious, so three days later, a more detailed inspection of the structure was carried out, discovering that half of the unbalanced cantilevers at piers 6 and 7 - the piers on either side of the navigation span - were not performing as designed. The Rhode Island Department of Transportation closed the bridge to traffic that day while the state could investigate the issue and come up with a solution. The closure snarled traffic on a crossing that was already regularly congested. Westbound traffic was eventually rerouted onto the eastbound bridge, with the lanes narrowed to fit more vehicles. The state put up an interactive dashboard where you can look at travel times by route and time of day and view live webcams to try and help travelers and commuters decide how and when to get across the Seekonk River. Still, the closure has had an enormous impact on the Providence area, impacting travel times and economic activity in the area for more than a year now. 
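To see why those tie-down rods mattered so much, it helps to write out the statics of an unbalanced cantilever sitting on its pier: the heavy side carries the drop-in girder, the light side doesn't, and the tie-down is what resists the net overturning moment. The loads and lever arms below are invented for illustration; they are not the actual Washington Bridge numbers.

```python
# Minimal statics sketch of an unbalanced cantilever beam on a pier (illustrative numbers).
# One arm carries the reaction from a suspended drop-in girder; the other carries much
# less, so a tie-down on the light side has to resist the net overturning moment.

drop_in_reaction_kn = 1500.0   # load from the suspended drop-in span (assumed)
heavy_arm_m = 12.0             # lever arm from the pier centerline to that load (assumed)
light_arm_load_kn = 300.0      # dead load acting on the short side (assumed)
light_arm_m = 8.0              # its lever arm (assumed)
tie_down_arm_m = 8.0           # distance from the pier to the tie-down anchor (assumed)

# Sum moments about the pier: the tie-down supplies whatever is left over.
net_moment = drop_in_reaction_kn * heavy_arm_m - light_arm_load_kn * light_arm_m
tie_down_force = net_moment / tie_down_arm_m

print(f"net overturning moment : {net_moment:,.0f} kN·m")
print(f"required tie-down force: {tie_down_force:,.0f} kN")
# If the rod fractures, nothing else in this load path holds the beam down.
```

The particular numbers aren't the point; the point is that the whole load path runs through a handful of rods, and on this bridge some of them sat exposed beneath a leaky deck joint.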
The state was fully expecting to implement some kind of emergency repair project, essentially a retrofit that would replace the broken tie-downs on the unbalanced cantilevers. The project was designed, and the contractor started installing work platforms below the bridge in January 2024. As they got access to the underside of the bridge, things started looking worse. Deteriorating concrete on the beams threatened to complicate the installation of the new tie-downs, so the state decided to do a more detailed investigation. They tested concrete in the beams, used ground penetrating radar and ultrasound to inspect the tendons inside, and even drilled into the beams to observe the actual condition of the post-tensioned cables. What they uncovered was a laundry list of serious issues. In addition to the failed tie-down rods, there were major problems with the beams themselves. The concrete was soft and damaged, in part because of freeze-thaw action. Like most concrete from the 1960s, there was no air entrainment in the concrete beams. This requirement in most modern concrete mixes, especially in northern climates, introduces tiny air bubbles that act like cushions to reduce damage when water freezes. Without air, concrete exposed to water and freezing conditions will spall, crack, and deteriorate over time. The post-tensioning system was also in bad condition. The anchorages at the end of the beams were corroded, and voids and soft grout were found within the cable ducts. When the inspectors drilled into the beams to reach one of the cables, they saw that the poor grout job had allowed water inside the duct, corroding the cable itself. Most of the damage was related to the condition and location of the joints in the bridge deck, which allowed water and salty snow melt to leak down onto the structure below. If you saw my video on the Fern Hollow Bridge collapse in Pittsburgh, it was a similar situation. When the engineers analyzed the strength of the bridge, considering its actual condition, the results weren’t good. With no traffic, the beams met the minimum requirements in the bridge code. When traffic loads were applied, it was a totally different story. The code does not allow any tension to occur in a post-tensioned member, but you can see in the graph that the top of the beam is in tension across a large portion of its length. Worse than that, the engineers found that the beams were in a condition where failure would happen before you could see significant cracking in the concrete. In other words, if the beam was in structural distress, it likely wouldn’t be caught during an inspection. There could be no warning before a potential failure. In short, this was not a bridge worth widening. It wasn’t even safe to drive on. A big question here is: Why didn’t any of this get caught in inspections? And that mostly has to do with access. Only some of these tie-downs were visible to inspectors. The rest were embedded in concrete diaphragms that ran laterally between the beams. But it’s not clear if any special attention was paid to them, given their structural importance in the bridge. Looking through all the past inspection reports, there’s very little mention of the tie-down rods at all, and only a few pictures of them. The state actually used this photo from the July 2023 inspection, 5 months prior to when it was observed to be broken, to show that this tie-down wasn’t broken then, suggesting that maybe a large truck had caused the damage in a single event. 
But you can clearly see that, if it were fractured at that time, that break would be obscured by the pier in the photo. Same thing with this one; the fracture is at the very top of the rod, so it’s impossible to see if it was there in July. There’s no easy way to know how long this had been an issue. At least for these outside tie-rods, you have bare steel, exposed and mostly uncoated, directly beneath a leaky joint in the road deck. This is easy to say in hindsight, but if I’m an inspector and I understand the configuration of this bridge, I’m making sure to put eyes on every one of these visible tie-downs, or at least state clearly and explicitly that the access wasn’t enough to fully document their condition. And it’s even worse for the post-tensioned anchorages in the beams. Those drop-in girders sat essentially flush with the ends of the beams, making it impossible to inspect their condition, let alone perform maintenance or repairs. Seismic retrofits installed in 1996 made access and visibility even tougher. And this is a perfect case study in the risks that hidden elements can pose. If you’ve ever done a renovation project on an older house, you know exactly how this goes. You start to change a light fixture, and next thing you know there’s a backhoe in your front yard. The bridge widening project uncovered the situation with the tie rods. The repairs to the tie rods revealed issues with the post-tension system in the beams. Investigation into that problem revealed further structural issues, and pretty quickly, you have a much bigger problem on your hands than you set out to fix in the first place. You’re trying to keep the public informed about what’s going on and predict how long the bridge is going to be closed at the same time that the situation is unraveling before your eyes. The engineers looked at a bunch of options to repair all these issues, but the complexity of implementing any fixes just made it infeasible. Just to get to the beams, you’d have to demo the entire road deck and remove the drop-in girders. Since things have shifted, there was no way to know how the load had redistributed, so even taking the deck would come with risks. Then, with the state of the concrete in the beams, it wasn’t a sure bet that they could even support any external strengthening. And even if you did get it repaired, you would still have all the same issues with access and visibility. The report put it in plain words: the options for repair were “limited, complex, and [did] not completely mitigate the identified risks with the structure.” So, eventually, the state decided to demolish the entire thing and start over. And that’s where it stands (or doesn’t stand) right now. Demolition is well underway, but that’s not the end of the mess. The state put out a request for proposals to design and build the replacement project in April 2024 with an aggressive schedule to finish construction by August 2026. Not a single contractor bid on the job, likely due to the difficult schedule and the inherent risks. The state planned to leave the substructure of the bridge (the piers and piles) intact, giving the replacement contractor the option to reuse it as a part of their design. It seems that no one could get comfortable with that idea, and I don’t blame them, considering how each milestone in this saga has only revealed new bad news about the condition of the bridge. In October, the state decided to just demo the substructure, too, adding it to the existing contract. 
They started a new solicitation process, this time with two stages, to try and find a contractor willing to take on this project. The two finalists were announced in December, and they expect to award a contract this summer of 2025. But, in the midst of just trying to figure out what to do with the bridge, the fight over who’s responsible for all this chaos started. In August of 2024, the state filed a lawsuit against 13 companies, including firms that did the bridge inspections, alleging that they should have identified these structural issues earlier. At one point the attorney general stopped the demolition work to preserve evidence for the lawsuit, extending the timeline for a month. Then in January, the US Department of Justice disclosed that they’re investigating the state of Rhode Island under the False Claims Act, which comes into play when federal funds are misused or fraudulently obtained. The dual legal battles—one against the engineering firms and another potentially implicating the state—turned what was already a logistical and financial nightmare into a high-stakes showdown, with millions of dollars and public trust hanging in the balance. Then in February, this video came out showing the demolition contractor dropping huge pieces of the cantilever beams onto the barges below, sparking a workplace safety investigation from OSHA. A fellow YouTube engineer, Casey Jones, has been covering a lot of the more detailed aspects of the situation if you want to keep up with the story, and I also have to shout out the local journalists who have done some fantastic work to keep the public apprised of the situation where maybe the State has faltered. This saga is far from over, and we’re probably going to learn a lot more in the coming months and years. Maybe the inspectors really did neglect their duties to identify major problems. Maybe the state has some issues with its inspection and review program. Probably there’s a little bit of both. But also, this bridge had some bizarre design decisions that made a lot of these problems inevitable. Putting critical structural elements, like tie-downs and post-tension anchorages, where they can’t be inspected or repaired is essentially like planting a time bomb. We’re fortunate it was caught before it blew up. And a lot of those design decisions were driven by a roughly five-million-dollar (adjusted for inflation) battle between Rhode Island and the federal government over the visual appearance of the bridge in 1965. Now, it will cost roughly 20 times that just to tear the bridge down, and who knows how much to rebuild. This situation is a mess! It’s an embarrassment for the state, a nightmare for the engineers and contractors who have worked on the bridge in the past, and a major problem for all the residents of Rhode Island who depend on this bridge. Every time I talk about failures, I get so much feedback about how bad US infrastructure is. And I don’t want to sugarcoat this situation, but I do want to put it in context. This is one of roughly 617,000 bridges in the US, and in some ways, it’s a success story: A serious problem was identified before it became a disaster, and the final outcome should be what was needed all along - replacing a bridge that had reached the end of its design life. It’s not a bizarre situation that an old bridge was old. It happens all the time, and although sometimes the roadwork is frustrating, we generally understand that structures don’t last forever and eventually need to be replaced. 
But just like engineers design structures to be ductile, to fail with grace and warning, we want and need projects like this to happen in an orderly fashion. We should be able to recognize when replacement is necessary, plan ahead for the project, do a good job informing the public, and execute the job on a timeline that doesn’t require panic, chaos, and emergency contracts. The Washington Bridge is a perfect case study in why that’s so important.


More in science

The Laser Revolution Part I: Megawatt beams to the skies

There’s a laser revolution coming: a time when megawatt-scale beams will radically transform how we produce electricity, conduct war and even upset the nuclear world order. All they have to do is reach a certain convergence of price and power. And by current projections, it will happen in the next two decades. It’s hard to imagine a world without lasers. They’ve been around since 1960, when a ruby rod managed to produce a few watts of deep red coherent light. The first designs were costly, heavy and incredibly inefficient. But today they are both affordable and powerful, with widespread applications from entertaining light shows to cutting steel to delivering this blog’s content down fiber optic cables. Laser cutters in the 1-10 kW range come standard in the automotive and aerospace industries. Soon, we'll have to consider a vastly expanded role for them, with serious consequences. In other words, a laser revolution. In this Part I, we'll describe how laser power and price are progressing and how techniques are being developed to overcome the obstacles to beaming them through the air, then try to work out what consequences they'll have: first militarily, on the threat of nuclear weapons and how air warfare is conducted. In Part II, we'll continue looking at the consequences for ground and sea warfare, before expanding on the civilian side and the exciting opportunities megawatt lasers will create, from space launch to power generation. Powerful Lasers What exactly is a ‘revolutionary’ laser? It can only be described in relation to current output and price levels. Lasers are getting more powerful and cheaper at the same time, following a progression that resembles Moore’s law. The sorts of lasers you personally have access to range from milliwatts to kilowatts. The smallest are so cheap and widespread that they can be bought from local stores. Lockheed Martin's HELSI program wants this box to produce a 500 kW laser. You can order a 1 watt laser pointer online for around $150, and a 10 watt laser module for around $350. Kilowatt-scale fiber lasers are advertised at under $1800. Regular businesses can access commercially available 10 kW-class lasers, such as a laser cutting machine that is listed for around $100,000, and 100 kW-class lasers aren’t far away. You can buy a 100 kW CW fibre laser from Raycus. Now. These are all relatively efficient designs with good beam quality and continuous output, operating in near-infrared to visible wavelengths. Here are rough costs of beam sources by wavelength, from GerritB: Infrared lasers would be around $100/watt, while visible wavelength lasers achieved through frequency doubling or tripling sit above $1000/watt. Lasers as ‘complete packages’, including a power source, cooling, optical train and a mirror to focus them over long distances, also exist at the 10 kW scale. Military designs like the Raytheon HELWS H4 fit on the back of a pickup truck and have undergone 25,000 hours of testing, managing up to 15 kW at full power from atop a British Army Wolfhound. Raytheon's palletized laser weapon in the back of a pickup truck. Rafael’s Iron Beam is a container-sized air defense system with 100 kW output, and its mobile version focuses 50 kW through a 25 cm beam director. DARPA’s HELLADS is developing a 150 kW laser with the goal of 200 W/kg power density and fitting inside 3 m^3, allowing it to be mounted on small vehicles and aircraft. Meanwhile, the US Navy’s HELCAP is testing 300 kW lasers aboard Arleigh Burke destroyers.
These are all effective and affordable for their users, which are militaries with big budgets. The US Army's HEL-TD on an Oshkosh HEMTT truck. A ‘revolutionary’ laser is the next step up: 1 to 10 MW output, with even better efficiency and beam quality, yet more affordable. Megawatt-scale lasers are already expected before 2030 based on development contracts and other reports. An AFRL publication predicted directed energy weapons in the 100 - 1000 MW range by 2060, so this is on the right track. We’re looking at current trends to determine when they will acquire revolutionary qualities. Here’s what they look like: Rapid decreases in $/W are expected in the next decades. This chart gives us exponential fits we could use. A flattening curve is more realistic, but it's still rapid progression. In fact, the progression of laser brilliance has been compared to Moore’s law for the number of transistors on a chip: From these trends, it looks like lasers will become roughly 100 times cheaper per watt by 2045. If you believe this timeline is too aggressive, then add 5, 10, 15 years to the estimate and you’ll find the conclusions of the rest of the post will remain the same. Regardless, it means 1 MW of raw infrared diode laser output will have a price on the order of $10,000, while visible wavelength lasers would be several tens of thousands of dollars. Increasing beam quality or shortening the wavelength will cost more, but costs remain within that order of magnitude. Achieving this might require combining 1000 fiber-laser modules of 1 kW each, a ten-fold improvement over the roughly 100 module coherent beam combining possible today. Experimental set-up for combining 100 beams. ‘Full package’ lasers as described above will likely match appropriate cooling equipment and correctly sized optics to the increased laser power, but they won’t see a 100x price decrease. The laser generator inside can be compared to the engine of a car: an essential component that contributes significantly to the cost of the full vehicle, but one whose falling price cannot, on its own, collapse the price tag of the whole package. AFRL's roadmap for laser weapons. Military equipment is very expensive. Existing devices that can track a rapidly moving target and point a laser at it, like a LITENING pod with its 10 cm aperture and many sensors, cost $3 million. The newer 15 cm Sniper Advanced Targeting Pod has been sold in contracts for $3.3 million each. Turkey’s equivalent ASELPOD goes for $1.5 million. The 'Sniper' ATP. A 2024 congressional report on shipboard solid-state lasers for the US Navy estimates that a 60 kW laser weapon costs $100 million, while a 250 kW weapon would reach $200 million. These are within the cost bracket of existing kinetic (gun, missile) based weapon systems, so their only advantage is that their ‘ammunition’ is electricity instead of expensive missiles (one SM-6 interceptor missile is over $4.8m). The report suggests that the cost of a ‘full package’ laser is not strongly tied to the beam power; by its estimates, a 4x more powerful weapon is less than 2x as expensive. Based only on this sort of data, it’s more likely that 1 - 10 MW lasers will remain very expensive even as their laser generating components get much cheaper, allowing them to increase their output. For example, today’s $100 million design that outputs 100 kW might still cost $100 million in 2030, but output 1 MW. Everyone gets a laser.
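To make that trend concrete, here is a minimal sketch of the arithmetic behind “roughly 100 times cheaper per watt by 2045”. Only the 100x-in-20-years trend and the ~$10,000-per-megawatt 2045 figure are taken from the material above; the smooth exponential decline and the back-implied present-day price are illustrative assumptions.

```python
# Sketch of the "100x cheaper per watt over 20 years" trend (assumed smooth).
YEARS = 20
TOTAL_DROP = 100.0                                  # 100x cheaper over 20 years
annual_factor = (1 / TOTAL_DROP) ** (1 / YEARS)     # ~0.79, i.e. ~21% cheaper per year

print(f"implied decline: ~{(1 - annual_factor) * 100:.0f}% per year")
for n in range(0, YEARS + 1, 5):
    print(f"  after {n:2d} years: prices down {1 / annual_factor**n:6.1f}x")

# The ~$10,000 quoted above for 1 MW of raw infrared output in 2045 then
# back-implies a present-day price of about $1/W for that class of source:
price_2045_per_w = 10_000 / 1e6
print(f"back-implied present-day price: ~${price_2045_per_w * TOTAL_DROP:.0f}/W of raw output")
```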
On the other hand, lasers are clearly a technology that is still developing rapidly, leaving an immature early phase where they’re very expensive and progressing in leaps and bounds to a settled status where only incremental improvements remain. We mustn't forget how much progress can be made in 20 years of rapid development. Aviation progress in the 60s, which resulted in the XB-70, could be the model for today's lasers. In 1945, the first P-80 Shooting Stars were produced for the USAAF. At 956 km/h, their engines had an output of 5.3 MW each. In 1965, the XB-70 Valkyrie broke Mach 3. Its six engines had a combined output of 662 MW at 3310 km/h, making each engine 21 times more powerful than the one on the P-80. Meanwhile, the commercial aviation industry had access to Boeing 727s with 3x17 MW engines or The Vickers VC10 with 4x24 MW engines. A PowerLight beaming demonstration, one of the few long-range laser developments with near-term civilian uses. A look at commercial equipment paints a more promising picture. Heat exchangers, coolant pumps, power handling equipment and large mirrors on precision mountings are not seeing the same dramatic price drops year after year as laser generators do, but they are making relatively rapid progress.  For example, when it comes to electrical power handling, the PNNL Grid Energy Storage Technology Cost and Performance Assessment from 2022 placed rectifier plus inverter costs at only $0.123/Watt. This figure is for a fixed installation, so a mobile version would cost more. Commercially available ‘deluxe’ 28 cm telescopes with a robotic mount and computer control come in under $8000. How much effort is needed to turn this into a laser mount? A half-meter telescope can cost a few tens of thousands of dollars. A meter-wide mirror in its mount is around $100,000 while a whole astronomy-grade observatory with tracking motors is more like $250,000 to $500,000. A "low-cost, 0.5 meter, robotic telescope" for DEMONEX. A laser might need special low-expansion glass like ZERODUR with a cooling system attached, which would raise the cost of a full mount with a meter-wide mirror by up to an order of magnitude. ZERODUR low thermal expansion glass. However, if efforts like Trex/ABT’s attempt to reduce the cost of telescope-grade mirrors to $100,000/m^2 by using diffusion-bonded (no adhesive) CVC silicon carbide instead of traditionally machined and polished glass are successful, then the costs wouldn’t rise so much. They would instead start to fit pre-existing scaling laws.  So, based on these commercial figures, a 1 megawatt laser generator paired with a robotic mount, large low-expansion mirror, sufficient cooling and power-handling modules adds up to around $1 million in the near future, or at worst $10 million. This excludes the power source, which depends on the laser’s intended use.  In summary, a pessimistic progression for 1 MW lasers would place them in the $100 million bracket by 2045, an optimistic one would have them under $1 million, while a realistic one would be somewhere around $10 million. At that price, we obtain something more than a mere improvement over current lasers - it’ll be revolutionary. Beaming Megawatts Bad weather conditions can render today's lasers difficult to use. Powerful lasers are an oft-visited topic at ToughSF. However, they are usually considered for use in space, where their beams travel through a vacuum. It allows us to basically ignore what happens to the beam as it travels between its focusing mirror and its destination. 
The diffraction equation (spot size = 1.27 x wavelength x target distance / mirror diameter) tells us almost everything we need to know, so maximizing beam range and effectiveness means simply looking for the beam with the shortest wavelength focused by the largest mirror possible. For lasers inside the atmosphere, there are other factors that cannot be ignored. There are at least nine types of beam-air interaction, including two-photon absorption, stimulated scattering, ionization, cascade breakdown and filamentation. Thankfully, most of these are only relevant to very intense lasers or wavelengths considered to be ‘vacuum-only frequencies’, such as X-rays. The megawatt-class lasers of the next decades are expected to have infrared or visible wavelength beams with continuous output, operating far below the intensities needed to tear apart air molecules, so instead we have to deal with thermal blooming, both types of attenuation and twinkling. Thermal blooming A powerful beam travelling through the atmosphere will heat up a channel of air along its path. Hot air has a lower density than cold air. Just like a mirage in a desert is the result of hot air bending light, a channel of hot air will act like a lens that de-focuses a laser travelling through it. The more intense the beam and the longer it heats the air, the stronger the de-focusing effect. The simplest solution to thermal blooming is to reduce the lasing time. A short burst of power doesn’t heat up the air so much. The continuous-wave lasers of the next decades might not have the capability to concentrate their power into pulses, or we may need them to keep beaming for extended periods, so this isn’t always an applicable solution. Another simple solution is to let the beam wander in circles, so it is always moving out of its own hot air channel into fresh air. This is great if the target is also moving, but not so great if the beam must remain focused on a single spot. How much will this affect a powerful laser? There are equations to estimate the level of distortion. We find that for a 1 MW visible or near-infrared laser, focused by a 1 meter diameter mirror onto a 1 cm spot 50 kilometers away, the effect of thermal blooming can be ignored. For lasers ten times more powerful, we must counter the blooming with linear adaptive optics. How adaptive optics work. It’s 100 MW lasers and beyond that need additional corrective actions, or hope for a slight wind to help clear their hot air channel. Twinkling Stars twinkle because their light is distorted as it travels down through the turbulent atmosphere. Lasers twinkle too when the medium they travel through moves randomly and deflects the beam. Astronomers found a solution to provide clear images to their telescopes. They use adaptive optics that detect the level of distortion in the light being received with a wavefront sensor, then bend their mirror accordingly to negate those distortions. Lasers can also use adaptive optics, to correct twinkling and many other types of distortion. Attenuation from atmospheric absorption This sort of attenuation is caused by the air absorbing light passing through it. Our terrestrial mix of oxygen, nitrogen, carbon dioxide and water vapour is extremely unfriendly to wavelengths shorter than UV. Water vapour makes many infrared wavelengths unsuitable as well. Taking these into account gives us ‘transmission windows’ that are ideal for a laser to exploit.
Here’s a chart of those transmission windows: You want to minimize laser divergence to increase its range and form a smaller spot with the beam at its target, so the ideal laser uses the shortest wavelength within these transmission windows. Agatha’s analysis suggests that 400 nm lasers (cyan) are the best for going through an atmosphere from top to bottom. Deep blue lasers seem to be optimal. However, a practical laser may choose to sacrifice some performance in ideal conditions to gain a better ability to handle water vapour. Weather conditions like cloud cover or fog can place a lot of water in the beam’s path. The more water the laser is expected to encounter, the more interest there is in a green laser (around 500 nm) rather than a blue one, as that is the wavelength that gets through water the best. Going through water imposes another constraint. Other practical considerations include the nature of the laser generator; a CO2 laser may only offer long infrared 9600 or 10600 nanometer wavelength beams. A modern diode-pumped solid state laser using a GaAlAs diode and Nd:YAG lasing crystal produces a 1064 nm beam, which is commonly frequency-doubled to 532 nm (this is where green laser pointers come from), slightly longer than the 500 nm optimum for penetrating water. Let’s try to estimate the effect of this type of attenuation. Chart from the Galactic Library, by Luke Campbell! In dry air, a 500 nm beam has an absorption length in the tens of millions of kilometers. It means the laser has to travel that distance through air to lose 63% of its power. Adding in 1% water by volume (corresponding to 60% relative humidity), this length decreases to a few thousand kilometers. Earth’s atmosphere is 100 kilometers deep vertically, close to 1000 km deep tangentially from the horizon. So, we can ignore this type of attenuation for green lasers. A deep red or near-infrared laser fares much worse, with an absorption length as short as 10 km. That means it will lose 86% of its power after travelling two absorption lengths, or 20 km. A laser with a short absorption length suffers the double trouble of more intense thermal blooming, as the air along its path is more easily heated up. Attenuation from aerosols If you can see a laser beam, then it means the beam is losing energy to scattered light. For air- and water-penetrating wavelengths, the attenuation caused by various small particles in the atmosphere, such as water droplets and dust, is much more relevant. The effect is very difficult to estimate because of the variety of conditions that can exist. A general rule of thumb to follow is that if your sensors can see a target, then a laser can reach it too. This is especially true if the main focusing optics for the laser also serve to collect light for the sensor. If your sensors cannot get a good image of the target, then a laser won’t reach it easily either. A useful estimate for how much aerosols affect visible wavelength lasers is the meteorological visibility scale: it can range from perfectly clear conditions where visibility exceeds 50 km, to dense fog where visibility is less than 50 meters. A visible wavelength laser would have the same effective range as this visibility scale. Empirical testing for how lasers traverse various weather conditions has been done.
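Both molecular absorption and aerosol scattering follow the same exponential-loss law, so before looking at that data, here is a quick sketch of the arithmetic. The absorption lengths and coefficients are the rough values quoted in this post; the 3,000 km humid-air length is an assumed stand-in for “a few thousand kilometers”.

```python
import math

def fraction_lost(alpha_per_km: float, path_km: float) -> float:
    """Beer-Lambert: fraction of beam power lost along a path through a uniform medium."""
    return 1.0 - math.exp(-alpha_per_km * path_km)

# Molecular absorption, expressed through the absorption length (alpha = 1/length):
print(f"green beam, humid air, 20 km path:    {fraction_lost(1 / 3000, 20):.1%} lost")
print(f"near-infrared worst case, 20 km path: {fraction_lost(1 / 10, 20):.1%} lost")

# Aerosol scattering, using the measured coefficients discussed just below:
print(f"clear ground-level air (0.01/km), 20 km: {fraction_lost(0.01, 20):.0%} lost")
print(f"humid coastal air (0.04/km), 20 km:      {fraction_lost(0.04, 20):.0%} lost")
```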
Balloon and searchlight data at 550 nm gives a wide range of attenuation coefficients: We see on the chart that at ground level, the aerosol attenuation coefficient is roughly 0.01/km, meaning that traversing 20 km saps away 1 - e^(- 20 x 0.01) = 0.181 or 18% of the laser energy. Transmitting 550 nm lasers across Chesapeake Bay in humid conditions, a distance of 5.5 to 16.25 km, leads to losses of 50 to 70% of the original beam power: A more modern study of laser communications finds an attenuation coefficient as high as 0.04/km at 500 nm near the ground, so across 20 km, this is 55% of the beam being lost to aerosols. Meanwhile, a LIDAR study gives data on transmission of different wavelengths through bad weather: Since aerosols scatter the laser light in all directions, it is difficult or impossible to counter the effects using adaptive optics from the laser source. So it is a major challenge for lasers to overcome. Are there ways to deal with aerosols? A proposal to clear fog over airports using hundreds of megawatts of infrared lasers. A brute-force solution is to vaporize all the water in the beam’s path. Turning water droplets into water vapour means there are no more particles that can affect the beam via Mie scattering (from particles close to the scale of the laser wavelength) or Rayleigh scattering (from particles smaller than the laser wavelength). However, boiling water costs a lot of energy. Luke Campbell has this to say: “I find that a cloud has 1 to 4 kg of water per square meter per kilometer thickness, but rarely exceeds 2.5 kg/m^2 per km thickness. Considering only the heat of vaporization, it will take about 5.5 MJ to evaporate a one square meter hole through a kilometer thick cloud. The most extreme cases we will have to deal with include nimbostratus clouds and cumulonimbus clouds. The former tend to be 2 to 3 km thick with extreme examples up to 4.5 km thick, the latter average 2 km in height but in extreme cases can reach 20 km high. This leads to 10 to 15 MJ to burn a one square meter hole through typical heavy rain clouds and thunderstorms, with extremes of 100 MJ to burn a 1 m^2 hole through the highest thunderstorms. Once a tunnel is formed through the cloud, you will need an additional input of power to keep that tunnel clear as wind blows additional cloud droplets into the tunnel. The power required will be the energy needed per square meter to form the cloud tunnel times the wind speed times the tunnel diameter. For a 2 km thunderstorm with 10 m/s winds, a 1 m^2 hole will thus require a power of ~100 MW to keep the tunnel open. 2 km thick heavy rain clouds with 3 m/s winds will require 30 MW to keep the tunnel open. As the radius of the beam increases, the initial energy to form the tunnel scales with the square of the beam diameter, while the power to keep the tunnel open scales linearly with beam diameter.” Based on the above, the upper limit of laser power needed to cut a channel through the worst weather conditions is 100 MW/m^2. If the total laser power available is 1 MW, then it can only vaporize a 0.01 m^2 hole through clouds in its path, which is a circle about 11 cm wide. Limiting the diameter of the beam restricts its range due to the diffraction limit. A beam that’s normally 1 meter in diameter, that’s restricted to 11 cm in diameter, would have a range 9 times shorter. Thick fog would place similar amounts of water in the beam’s path, but the wind speed would be lower (strong winds break up fog).
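Here is the same bookkeeping as a short sketch, using only the figures from Luke Campbell’s quote above (latent heat of vaporization and the ~2.5 kg/m^2 per km upper value for cloud water); it lands within rounding of the ~5.5 MJ, ~100 MW, 30 MW and 11 cm numbers.

```python
import math

H_VAP_J_PER_KG = 2.26e6        # latent heat of vaporization of water
WATER_KG_PER_M2_PER_KM = 2.5   # cloud water per m^2 per km of thickness (upper typical value)

def bore_energy_j_per_m2(cloud_km: float) -> float:
    """Energy to vaporize a 1 m^2 column through `cloud_km` of cloud (heat of vaporization only)."""
    return H_VAP_J_PER_KG * WATER_KG_PER_M2_PER_KM * cloud_km

def hold_open_power_w(cloud_km: float, wind_ms: float, tunnel_diam_m: float) -> float:
    """Power to keep the tunnel clear as wind blows new droplets in (rule of thumb from the quote)."""
    return bore_energy_j_per_m2(cloud_km) * wind_ms * tunnel_diam_m

print(f"1 km cloud, 1 m^2 hole:         {bore_energy_j_per_m2(1) / 1e6:.1f} MJ")
print(f"2 km thunderstorm, 10 m/s wind: {hold_open_power_w(2, 10, 1) / 1e6:.0f} MW")
print(f"2 km heavy rain cloud, 3 m/s:   {hold_open_power_w(2, 3, 1) / 1e6:.0f} MW")

# With a ~100 MW/m^2 requirement and only 1 MW of laser power available, the hole shrinks:
hole_area_m2 = 1e6 / 100e6
print(f"1 MW budget -> hole about {2 * math.sqrt(hole_area_m2 / math.pi) * 100:.0f} cm across")
```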
‘Regular’ weather consisting of white clouds a few hundred meters thick would still require over a megawatt to clear in a light breeze. Clouds by type and altitude. Lasers from the next 20 years won’t have the power output to spend 1-100 MW just to clear a channel through clouds. So, their effectiveness will depend on the weather. If a target flies into a large cloud, it cannot be reached by lasers. If thick fog descends on a laser-equipped site, it might be put out of action. But, there are other options. There have been claims of existing lasers being able to circumvent wind and fog despite only having kW-level outputs. There are methods to clear a path for lasers through clouds or fog in a much more efficient manner. Two laboratory- or field-tested techniques stand out: -Shattering the water droplets This technique attempts to reduce the size of the water droplets so that they are no longer close to the wavelength of the laser. The light scattering effect from aerosols becomes much weaker once the aerosol and the laser wavelength don’t match up. For example, reducing the droplets to a size 10x smaller decreases the scattering effect by 10,000x! One approach is to use an intense pulsed laser that only vaporizes a portion of the water droplet, turning it into a superheated mass that explodes and destroys the rest of the droplet. This costs much less energy than vaporizing the whole droplet. According to this study, a channel can be cleared through clouds and fog by splitting droplets with at least 7x less energy than fully vaporizing the droplets. Shockwave generation inside droplets by picosecond lasers. Another paper suggests that laser pulses of 0.1 to 6.5 J/cm^2 are enough to shatter droplets across all weather conditions, compared to 33 to 500 J/cm^2 for complete vaporization, meaning that shattering droplets can be 76x to 330x more efficient than the brute-force method. An energy cost of 0.8 to 2.5 J/cm^2 is suggested here. Finally, a figure of 1.2 J/cm^2 is said to be enough to clear a channel through clouds by shattering droplets, a channel that lasts for half a second, meaning an average power output of 24 kW/m^2 is sufficient. Taking the upper end of these results, we get roughly 50 kW/m^2 for shattering water droplets. This is only 5% of the power output of the 1 MW ‘main beam’, but it must be delivered in the form of short intense pulses. If the long wavelength described in the papers above (10.6 microns) is also a requirement, then it becomes necessary to deliver these pulses via a separate dedicated mid-infrared laser that is installed parallel to the ‘main beam’ laser. If an intense pulse of any wavelength is enough to produce these effects, then a Q-switch can be added to the ‘main beam’ to give it a pulsed mode of operation. -Dispersing the droplets with shockwaves A plasma filament generated in air by a femtosecond laser. This technique aims to simply move the water droplets out of the way. Ultra-intense laser pulses can generate self-focusing plasma filaments in air: essentially brightly visible lightning bolts that travel in a straight line for their entire length. These filaments mostly ignore beam divergence or other dispersion effects, and modern techniques are able to extend them into “megafilaments” of dozens of meters, potentially hundreds of meters. They’re also only a few micrometers in diameter, and explode after a few microseconds.
This means it is unrealistic for laser filaments to propagate across kilometers to their target, especially not when channeling ‘main beam’ power of over 1 MW. Instead, their explosive end can be used to generate a pressure wave that sweeps water droplets out of a channel of air surrounding the filament. Experimental data shows that a 1.3 picosecond laser with pulsed power of 76 GW was able to create plasma filaments in air that were 50 cm long. The shockwave and expanding hot air from the exploding filaments were able to accelerate surrounding water droplets to 60 mm/s, which was enough to clear out a channel through fog if the pulse rate exceeded 1 kHz. However, the authors only assume a cleared channel width of 100 micrometers. That is far too small to send a ‘main beam’ through. 633nm laser going through a cloud before and after droplet scattering. Another experiment used 0.05 picosecond pulses with a peak power of 100 GW. The Ti:Sapphire laser could generate red (800 nm) or blue (400 nm) wavelengths. The cleared channel was measured to be around 1 to 2 millimeters in diameter (FWHM 1.6 mm in the best case), and it lasted for more than 90 milliseconds. It means a laser pulse frequency as low as 10 Hz could be enough to keep the channel open. Still, it is far too small to be useful for a ‘main beam’. Multiple other sources confirm that channel diameter is in the millimeter range when using 0.1 J-scale pulses. Theoretically, if the cleared channel is a long thin cylinder with the plasma filament at its center, its diameter would scale with the square root of the pulse energy: 10,000x the pulse energy would heat up the plasma filament 10,000x more, causing it to expand 100x further. That means 1 kJ pulses could potentially clear out channels a meter wide. If the pulse frequency can also be as low as 10 Hz, then about 10 kW of average laser power is sufficient to clear meter-wide channels. But that is very optimistic, as the channel diameter scaling is likely to have a 3D component (explosions expand in all directions), and the femtosecond-scale duration of these pulses means the laser’s peak power has to be in the 1000 J / 10^-15 s = 10^18 Watt range. This is what a 10^16 Watt laser facility producing 1.5 kJ pulses looks like: You’d need 100 of those facilities. It’s not practical. There is hope for a practical solution in “Molecular Quantum Wakes”: Without generating plasma or laser filaments, an acoustic wave is formed to move water droplets out of a wide channel. The laser pulses act on the air itself to create a strong temperature gradient, which launches the acoustic wave. It seems that eight pulses with a total energy of 3.8 mJ are enough to clear a 0.5 mm radius channel that’s 10 cm long. That’s an energy cost of 4.8 kJ/m^2. If the 10 Hz pulse frequency requirement from the previous channel-clearing studies holds, and energy requirements scale up by area, then a pulsed laser with 38 kW average power is enough to clear a path for a 1 m wide ‘main beam’. As before, this can be delivered by a pulsed mode of operation using a Q-switch ‘adaptor’ to the powerful continuous laser. In the next sections, we try to work out the consequences of powerful yet affordable lasers becoming available in the next 20 years. Overthrowing the Nuclear Order Missile interception test, at night. We can start with the most dramatic and disruptive effect.
Consider a 1 MW laser producing a 532 nm wavelength beam, focused by a 1 meter wide mirror fitted with adaptive optics to counter thermal bloom and twinkling, operating at 50% efficiency once cooling and power handling losses are included. It is fed by 2 MW of electricity.  Accounting for beam jitter and atmospheric interference, it can focus its beam onto a 20 cm diameter spot at 200 km (about 1.5x the diffraction limit). This translates into a spot intensity of 32 MW/m^2 or 3.2 kW/cm^2. The laser damage calculator finds that this is enough to burn through 6 mm/s of aluminium alloy, 1 mm/s of stainless steel or 0.18 mm/s of graphite. Test of the UK's Dragonfire laser. At 50 km, the spot diameter tightens to 5 cm, raising the drilling rate to 8.2 cm/s of aluminium alloys or 0.95 cm/s of stainless steel. At 10 km, these increase again to an astounding 122 cm/s of aluminium alloys or 20 cm/s of stainless steel. The laser would actually prefer to not reduce the spot diameter below 1 cm at closer distances to avoid thermal blooming effects. It would remain a destructive weapon regardless, capable of boring holes all the way through flying targets instead of meekly trying to cut off fins or ignite onboard fuel.  Their ultimate test would be a nuclear attack. From the US, it can be delivered in three ways: a low-altitude cruise missile like the AGM-86B, a bomb from aircraft like the B-2 or F-15E, or the re-entering warhead of an ICBM like the Minuteman III.  An AGM-86B is likely to be detected by an air defence radar as soon as it rises over the horizon, perhaps from 20 km away. B-52H dropping an AGM-86B cruise missile. Travelling at 900 km/h, there is an interception window of 80 seconds. The 1 MW laser would start by cutting through 61 cm of aluminium alloy per second, and its penetration rate increases exponentially from there…. which means it only needs to dwell 3.3 - 16.4 milliseconds on each missile to get through their 2 - 10 mm of aluminium. In fact, if we use the 1-10 kJ/cm^2 “hardness” rating of missiles, we get similarly short dwell times of 3.1 - 31 milliseconds.   That delay is practically insignificant compared to the switching time between targets. If we assume it takes 1 second to switch between targets, and cut off the last kilometer from the engagement as the laser turret may not be able to slew fast enough to track its targets at the short distance, then we get 75 missiles shot down. Internal bay of the B-1 Lancer with rotating rack of cruise missiles. One single $10 million defender, with sufficient sensor infrastructure highlighting its targets, could take out the payload of three B-1 Lancers or nearly four fully-loaded B-52 bombers. Newer, stealthier AGM-158s for the B-52 This forces the use of massively more missiles per attack, or a replacement of the majority of existing cruise missile arsenals by costly stealthy designs like the AGM-158 family. Aircraft find themselves in a worse position. Radar arrays like the S-400’s 1N6E primary search radar might detect an older non-stealthy fighter like the F-15E from a distance of 200 km. In the time the pilot takes to notice their radar warning tone, pull on the stick and start diving to the ground, a 1 MW laser weapon would have drilled through several millimeters of aluminium. If the plane is exposed for three whole seconds at that distance, it would have already exceeded its 10,000 J/cm^2 hardness rating.  A stealthy aircraft like the F-35 fighter or the B-2 bomber might not be detected (or more importantly, tracked!) 
before they are able to deploy their weapons and turn away. F-35A dropping a B61-12 nuclear bomb from an internal bay. That would prevent them from being engaged by a laser at extreme range. Though, if they encounter a radar site at an unexpected angle, face an advanced infrared or electro-optical sensor, or increase their radar signature when deploying weapons, they'll be detected, starting a 0.03s (at 20 km) to 3s (at 200 km) clock on their expected lifetime (plus up to 1 second for the laser turret to swing around). And while their platform might be stealthy, nuclear bombs in the air won’t be. A disassembled B61 bomb reveals its steel case isn't very thick Large bombs can have steel casings 25 mm thick, yet it still only takes a 1 MW laser about a tenth of a second to drill through it from a distance of 10 km. Target switching time dominates again. Even if the B61 bombs are released by a supersonic throw, they’d take about 30 seconds to reach their target, meaning one 1 MW laser defender can take out 29 of them. Toss bombing is seeing use in the Ukraine war. So, the laser weapon forces air-launched nuclear attacks to be carried by expensive stealth platforms, and be fitted into stealth packages themselves. That excludes the existing arsenals of unguided bombs, including the USA’s 950 B61s or Russia’s few hundred non-strategic air-dropped warheads, and severely limits the number of potential launch platforms. There are only 19 B-2 Spirit bombers, for example, and about 300 F-35As, compared to 300 F-15s, 800+ F-16s and 900+ F-18s. An ICBM attack creates the hardest targets. Their MIRV warheads enter the atmosphere at near-orbital velocities and do not slow down much until they hit the ground. Falling stars of destruction. While drifting in space, they can deploy massive numbers of decoys to complicate interception, and might even pre-detonate some nukes at high altitude to mess with radar targeting. A large number of decoys makes it impractical to intercept a nuclear strike in space using missiles. Once they enter the atmosphere, however, at an altitude of 100 km, the decoys are separated from the dense warheads and the laser engagement can begin in earnest. At a 10 degree re-entry angle, the MIRVs traverse 567 km at 7.3 km/s before reaching the ground. At a 60 degree re-entry angle, they only traverse 115 km at 9.6 km/s. This is the range of re-entry trajectories. Re-entry warhead hardness is around 25 kJ/cm^2 to 100 kJ/cm^2. We'll use the higher rating. At 567 km, it takes the 1 MW green laser with a 1m diameter mirror over 115 seconds to accumulate 100 kJ/cm^2 of damage. At 115 km, this is reduced to 4.8 seconds. At around 53 km, the laser is eliminating one warhead per second, and further intercepts are almost entirely limited by the target switching delays. Spinning and covered in ablative shielding, MIRV warheads are already well protected from lasers. If we work iteratively in 0.1 second steps, and add 1 second of target switching delay each time the laser damage accumulates to 100 kJ/cm^2, then a 1 MW defender can intercept 13 warheads in the 10 degree re-entry scenario, down to 7 warheads in the 60 degree scenario. Within the final 50 kilometers, target switching time by far dominates over the warhead destruction time. These don’t seem like impressive numbers, but they must be put into perspective: this is accomplished by a defence system that costs as much as a single SM-3 Block IB that can intercept one warhead at best.
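The time-to-kill figures above (over 115 seconds at 567 km, a few seconds at 115 km, roughly one warhead per second around 53 km) are easy to check. Here is a minimal sketch using the diffraction equation from earlier with no extra jitter allowance (which is evidently what those figures assume), 1 MW at 532 nm through a 1 m mirror, and the 100 kJ/cm^2 hardness rating:

```python
import math

POWER_W   = 1e6       # 1 MW continuous beam
WAVELEN_M = 532e-9    # green, frequency-doubled Nd:YAG
MIRROR_M  = 1.0       # focusing mirror diameter
HARDNESS  = 100e3     # J/cm^2 to defeat a re-entry vehicle (the higher rating above)

def spot_diameter_m(range_m: float) -> float:
    """Diffraction-limited spot diameter from the equation earlier in the post."""
    return 1.27 * WAVELEN_M * range_m / MIRROR_M

def intensity_w_per_cm2(range_m: float) -> float:
    spot_area_cm2 = math.pi * (spot_diameter_m(range_m) * 100 / 2) ** 2
    return POWER_W / spot_area_cm2

def time_to_kill_s(range_m: float) -> float:
    return HARDNESS / intensity_w_per_cm2(range_m)

for km in (567, 115, 53):
    r = km * 1e3
    print(f"{km:3d} km: spot {spot_diameter_m(r) * 100:5.1f} cm, "
          f"{intensity_w_per_cm2(r):7.0f} W/cm^2, {time_to_kill_s(r):6.1f} s per warhead")
```

Turning these dwell times into the 7-13 intercepts per turret is then a matter of stepping through the descent and charging a second of switching time for every kill, as described above.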
Lasers can operate indefinitely, setting a minimum price of 7-13 nuclear warheads per turret to push an attack through. This defence cannot be depleted by repeated attacks, and the lightspeed beam has a strong advantage against maneuvers meant to throw off kinetic interceptors. Rafael's Iron Beam operates from a standard-sized self-sufficient container that can be placed anywhere. Theoretically, spending $1 billion on laser defences (with radar already available) would shield any site from nuclear attacks of 700-1300 warheads. That’s nearly all the active nuclear warheads Russia has ready for launch, even after they’re forced to arrive at one location within the same one-minute window. We also find that small increases in the cost of each turret (perhaps by doubling their mirror diameter to 2m and increasing their cost to $12m each) massively increase the number of warheads taken out, by 50% or more. Practically, raising the threshold for a nuclear attack to roughly 100 warheads, at the cost of $100 million, is enough to greatly trouble the largest nuclear powers as they can no longer divide their strike across dozens of targets; they’d have to concentrate their nukes on a few heavily defended locations and thereby become unable to guarantee ‘complete destruction’ of their opponent. The 'ready to launch' arsenal of nuclear nations. The nuclear capability of smaller nations, like France, the UK, India, Pakistan, Israel and North Korea, which only have a few dozen to a few hundred active warheads, could be countered by laser defences worth $100m or less. As a reference, a single Patriot battery has a domestic cost of $1000m and an export cost of $2500m while an S-400 battery is sold for $1125m. Even if laser anti-ballistic missile defences end up being as expensive as existing missile-based defences, we're dealing with an expendable vs an unlimited system. Both usually come with 32 missiles, which is worth 16-32 intercepts depending on whether warheads are single- or double-targeted. They then have to spend up to an hour reloading. Truck-mounted MEADS air defence radar, costing around $30m. The radar and control elements are about half the cost ($500m) of these air defense batteries, meaning an equivalent laser defence system with the same elements and total cost, but with missiles replaced by 1 MW beam turrets, would be able to take out 350-500 warheads, and be ready for the next engagement in seconds. Cheaper radar systems would multiply this number. And as we will find out later, protection against nuclear strikes is also excellent defence against conventional attack, and building up laser defences for one purpose grants the other. However, anti-ICBM laser defences like these would come with limitations. They only cover a single site, so the investment into 1 MW turrets would have to be multiplied for each location that needs protection. They are dependent on sensor systems to find and track their targets: half the cost of the Patriot missile battery is in its radar systems, and multiplying radar sites might not be economically feasible. Lasers would only serve the 'terminal defence' role. Laser weapons are tied to their power generators and become useless if they are cut off. A mobile application must drag along multiple megawatts of power generating capability for each turret. We discussed how techniques for clearing channels through clouds and fog could become available, but megawatt lasers would still retain a vulnerability to bad weather.
Nations could suddenly change from ‘immune to nuclear attack’ to ‘partially exposed’ over the course of hours because of a random thunderstorm or hurricane. It's possible that the level of sensor support needed to make use of laser defences prevents any significant cost saving... The warheads themselves could be fitted with armor to better resist laser beams. It could be an easy retrofit, like an additional cone of ablative material fitted onto the warheads, that serves mainly to extend the firing time needed to take them out at long ranges (100 km+). However, by the time the warheads enter ranges of 50 km or below, the time-to-destruction is measured in milliseconds and additional armor does not meaningfully reduce the total number of warheads destroyed. In fact, raising the amount of damage needed to destroy a warhead from 100 kJ/cm^2 to 300 kJ/cm^2 only reduces the number of warheads eliminated in the harshest 60 degree 9.6 km/s scenario from 7 per turret to 4. Raising it to 600 kJ/cm^2 reduces the number eliminated to 3. It’s an exponential race the attackers will lose to the defenders. Worse, the warheads become heavier, so each ICBM has to be loaded with fewer warheads, further diluting any nuclear strike capabilities. What does this all mean for the Nuclear Order that has kept nuclear-armed nations from engaging in all-out war for the past 80 years? The notorious Plan A simulation. It becomes weaker and less reliable. Most nations would be able to afford laser defences that raise the threshold of nuclear attack to several dozen warheads. Their existence requires entire arsenals to be refreshed, with older portions rendered obsolete decades before their planned end-of-life. Certain avenues of attack, like France’s airborne nuclear strike capability relying on ASMPs carried by Rafales and Mirage 2000Ns, would become totally infeasible. Because France has a nuclear arsenal that cannot entirely destroy its enemies, it must brandish it aggressively. Dispersed submarines who are only able to deliver 32, 48 or 60 warheads per strike would not be effective against defended sites; they’d have to group up and coordinate their strikes, rendering them less flexible and vulnerable to anti-submarine warfare efforts. ICBM arsenals that nuclear nations have spent decades and billions of dollars building up would become ineffective faster than they can be updated. The US is currently engaged in a twenty-year-long replacement of its Minuteman III ICBMs by the LGM-35 Sentinel, which are expected to operate until 2075. There are concerns Russia is unable to maintain its existing nuclear arsenal, let alone rebuild it with advanced missiles. Megawatt-scale lasers pointed at the sky might render this effort pointless long before then. Russia is still counting four-decades-old missiles among its active nuclear arsenal. These are the largest nuclear powers, and they take two to four decades to renew their arsenal, let alone expand it to deal with additional defences… if expansion is even allowed under anti-proliferation treaties.  Updated ICBMs for the laser era would be much larger, so that they can lift heavy warheads coated in thick ablative shielding. Air delivery would remain an option if both the launch platforms and the payloads become stealthy or fast; such as B-21s carrying AGM-158 LRASMs or ‘Dark Eagle’ LRHWs, but they'd be far less numerous than before. The B-21 Raider. Sneakier and more aggressive tactics would be favoured. 
Nuclear policy will shift towards more confrontational use, more along the lines of French rejection of no-first-strike and Russian threats of tactical deployment. All this is expensive, in dollars, time and political capital.  Less fortunate nuclear powers like Pakistan would feel the most threatened by the arrival of cheap yet powerful missile interception systems. They are the least able to sustain the expense of maintaining their nuclear offensive capabilities. However, countries with moderate military budgets and neighbouring nuclear states would have a lot to gain. For example, Taiwan could render its six largest cities nearly immune to a 100-warhead strike over the course of 5 years, using 6 x ($100 million lasers + $500 million radars) / ($16.5 billion x 5 years) = 4.4% of their military budget. Japan could do it for 1.3%, Australia for 2.1%. Then, in one further year of similar spending but without purchasing new radars, they would quintuple the effectiveness of their laser shields to 500 warheads. China is thought to have only about 400 warheads in an ‘undeployed’ state. So, it could find itself surrounded by nations who can flout its nuclear threat within a couple of years.  Chinese DF-41 ICBMs, capable of carrying 3x 425 kT yield warheads. Overall, weaker nuclear strike capability means a weaker nuclear deterrent, but it is not completely gone. Even the richest nations cannot protect all of their cities and infrastructure without spending billions upon billions of dollars. The political fallout from raising a full-scale anti-ballistic missile shield would be terrible, like starting a bonfire calling for immediate nuclear war. Instead, megawatt-scale lasers are the boiling pot, gradually raising the warhead threshold for nuclear strikes while keeping major nuclear powers vulnerable to severe damage from each other. But there will be consequences. An attempt to map the aftermath of an all-out nuclear strike. Suppose the United States raised a 100-warhead shield over its ten largest cities and ten more significant industrial or military sites, like Port Arthur Refinery in Texas and Eglin Air Force Base in Florida, at the cost of $12 billion. If it tried invading Russia, then Russia could concentrate 1000 warheads onto 5 targets, overmatching their local defences and exacting a terrible cost. The United States would not pay that cost to defeat Russia, so some nuclear deterrence remains. However, if India raised that same shield over its major cities and went to war with Pakistan, the latter’s 170 warheads could only hope to annihilate one Indian city. Perhaps that is a cost someone would be willing to pay to defeat a nuclear rival…  In another scenario, South Korea easily builds a number of laser interceptors that renders its entire territory immune to North Korean ICBMs. By military logic, this forces North Korea to act as soon as the laser turrets start appearing, before its nuclear threat is neutered. In fact, it would be in its interest to spend its nuclear card as soon as possible (either attacking with it or negotiating a disarmament while that still matters) before laser interceptors raise the threshold too far.  In short, megawatt scale lasers used to intercept nuclear strikes will create more openings for international aggression, embolden nuclear states in acting against each other, while also increasing pressure to both expand nuclear weapon arsenals while making them more menacing.  The Air War Lockheed Martin's 300 kW IFPC-HEL demonstrator. 
With 3x the power, it will take out fighter jets. There are many more military consequences to revolutionary lasers. The effects on aviation would be extreme. Some of this has been discussed in a previous blog post. As suggested in calculations in the previous section, aircraft survivability in the face of 1 MW laser beams focused by 1m diameter mirrors is a few seconds at the extreme range of 200 km. Long-range weapons like the massive Kh-28 or the AGM-88 HARM require the aircraft to come within 40-80 km of their ground target. These are today considered ‘standoff’ weapons, but they’d force aircraft to come to a distance where expected lifetime under laser fire is less than half a second. An EA-18G Growler with 4x AGM-88E missiles. Using shorter ranged weapons, like AGM-65 Mavericks, Kh-29s or any regular bomb like the GBU-24, would require aircraft to enter conditions where they can be cut in half in a literal blink of an eye. Against laser weapons, speed and altitude lose their importance. Instead, stealth must be relied upon to avoid early detection, and advanced munitions that keep aircraft far away from laser defences must be used. This all comes with several drawbacks. For example, the F-35 can only carry two weapons like the Joint Strike Missile while maintaining its own stealth. Laser turrets can take out dozens of incoming munitions each, even if the engagement starts at minimal ranges. There are of course solutions to this dilemma. SPEAR-3 standoff weapons have the best combination of anti-laser traits. Weapons like the SPEAR 3 and GBU-53/B can be carried in great numbers and keep aircraft over 100 km away from laser defences. An F-35 could carry eight of them internally, up to 16 using external hardpoints. They’re not stealthy weapons but they’re not easy to detect either, which might let them slip closer to the laser turrets.  Let’s estimate how many SPEAR 3s a powerful laser could intercept. Amateur analysis suggests the radar cross-section of a SPEAR 3 is 0.03 m^2 frontally, compared to a clean-configuration F-35 that comes as low as 0.005 m^2. If an air defence radar can detect regular aircraft with 4 m^2 radar cross-section at a distance of 300 km, then it can detect the tiny SPEAR 3s at 300 km x (0.03 / 4)^0.25 = 88.3 km. They would be approaching at perhaps 800 km/h, giving the lasers 6.6 minutes to engage them. At 88.3 km, a 1 MW beam would deliver a crippling 1-10 kJ/cm^2 blow to each SPEAR 3 in 0.06 - 0.6 seconds. As they approach, the time to destruction decreases quadratically. So again, we are in a regime where the target switching delay dominates, meaning each laser turret with 1 second of switching time can intercept upwards of 300 missiles. If the SPEAR 3s are ordered to stay low, skim the ground and pop-up on radar just 20 km from their target, losing external guidance and sight of their target on their way, then the lasers would only have 1.5 minutes to intercept them, reducing the number destroyed to around 90 per laser turret.  In practical terms, this means it takes 12 F-35s loaded exclusively with internal air-to-ground weapons to get past one 1 MW turret in the best scenario, or 38+ in a more typical engagement.   Notional rendering of the next-generation F-47. Near-future stealth craft, like the F-47 with bigger internal bays, might carry 16 upgraded small weapons that could approach even closer before being fired upon. The weapons themselves might be very stealthy, detectable only from 10 km away. 
Under these constraints, a 1 MW turret would destroy only 45 missiles, which can be delivered by three F-47s, or one F-47 leading a couple of YFQ-42/44 drones. How would air warfare adapt? Militaries are excited about the possibility of lasers countering drone swarms. The laser defenders can specialize themselves. The 1 MW beam focused by a 1 m mirror is very dangerous to flying targets out to hundreds of kilometers, but it is overkill at shorter distances and is mostly constrained by target switching time against large numbers of projectiles. Alongside the main 1 MW lasers, miniature turrets with smaller mirrors and reduced beam power can be installed. A 250 kW beam at 532 nm wavelength, focused by a 0.5 m diameter mirror, will have a spot diameter of 10 cm (1.5x the diffraction limit) at 50 km distance. The intensity will be 31.8 MW/m^2 or 3.1 kW/cm^2. That means it can defeat flying targets (with 1-10 kJ/cm^2) within 0.32 - 3.2 seconds at 50 km, down to 0.051 - 0.51 seconds at 20 km. Assuming it retains a 1 second target switching time, this turret would be capable of defeating around 170 targets with 10 kJ/cm^2 hardness starting from 50 km away, down to 90 targets from 20 km away. And, it would be around half the cost of a 1 MW 1m turret. In other words, spending $20 million on one big megawatt turret plus two small 250 kW turrets would create a defence able to stop at least 270 low-flying stealthy targets, compared to just 180 from two megawatt turrets. While the smaller turrets swat away hundreds of incoming missiles, the megawatt turret can keep watch for the launch platform… literally. NASAMS electro-optical sensor for air defence. A 1 meter diameter mirror on a fast moving, accurate mount is actually an awesome telescope. It would have 2-3x the resolution of regular electro-optical and infrared detection systems and 4-9x the light collecting area, supplemented by an integrated adaptive optics system to get rid of atmospheric blur. Just using the main laser mirror as a passive telescope means it can become a very effective long-ranged sensor that does not tip off a target, unlike radar. Even better, it can be turned into a giant searchlight. Scanning the sky with a low-intensity beam would be an interesting way to turn a laser turret into an active sensor that counters stealth. It would be a 1 megawatt ‘searchlight’ that helps contrast stealth aircraft against their background. Its turret would spin fast enough to cover the entire sky every few seconds, and it could focus its beam onto distant points of interest (acting like a LIDAR) or even poke through clouds to investigate them. And then what? The Aero-adaptive Aero-optic Beam Control test aboard an AFRL jet. As mentioned before, a stealthy aircraft with long ranged weaponry would be ideal. Future adaptations would push these advantages further. A jet attacker in a theater where megawatt lasers are present would want to go on prolonged flights while staying very low to the ground. Supersonic speed and maneuverability don’t matter against lightspeed beams, so a subsonic turbofan-propelled design with great endurance and even greater payload capacity is better. Ideally, it can launch its many weapons without ever exposing itself to enemy sensors. However, this requires that the precise location of its targets already be known, meaning external information gathering is necessary. Reconnaissance can be conducted by drones, but these cannot loiter above the battlefield like they do today once lasers can take them down on sight.
Today’s militaries are acutely aware of the threat of small disposable drones too, so they would bring along sensors that can effectively find them and target them with laser beams, such as short-wave radars. Take out the eyes! That leaves satellites orbiting overhead and old-school on-the-ground scouting. Low orbit observation satellites, especially the smaller and cheaper kind that fill mega-constellations, would be totally vulnerable to big lasers firing up at them. A 1 MW beam could clear out all satellites it can see out to hundreds of kilometers in altitude: it can produce a 0.8 m diameter spot at 800 km, enough for an intensity of 1.93 MW/m^2 or 193 W/cm^2. That would achieve a 1 kJ/cm^2 damage threshold in a little over 5 seconds. Medium altitude (2000 km+) or geostationary (35,786 km) satellites would be safe, but they have reduced availability (fewer in number, fewer latitudes covered and slower orbits) and either lower resolution or much higher cost. US Marines training to use JTAC-LTD to find and designate targets. ‘Force recon’ using specialized troops and ground assets like the UK’s Ajax or the Chenowth Advanced Light Strike Vehicle would remain effective. A future laser-hunting party. A major difference from today is that they cannot use simple laser designators to point out targets to an incoming wave of missiles; laser warning systems (which already come standard on tanks and helicopters) would immediately warn their targets and reveal the designators’ location. They’d have to transmit passively-collected information on the targets, which means electronic warfare activity, especially broadband jamming, can determine if that information gets out and an attack is successful. If neither satellite nor ground reconnaissance is available, then aircraft have to expose themselves to potential detection to designate targets for their weapons using onboard sensors. Thankfully, they might only need a short ‘glimpse’ to do this. We could imagine very smart cruise missiles that identify their own targets, retain stealth all the way to them, then release massed submunition attacks: a perfect munition in a laser-interceptor environment. Then attacks won’t need to rely on much reconnaissance. Effects of cluster bomb strike when low accuracy is ... unquestioned. But, this blurs the line with autonomous weapons, can have the downside of unintended or collateral damage, and we’d still expect them to remain an expensive limited option in the future. What about lasers ON airplanes? F-16 with a Lockheed Martin laser weapon pod. If laser generating equipment continues to get lighter and more powerful, then large lasers can be mounted on aircraft. There are already plans to install laser weapon pods on jet fighters like the F-16 or F-15. Laser pod for the F-15 from General Atomics. What could be a Self-Protect High-Energy Laser Demonstrator pod for the F-15. Even the F-35 had an upgrade path to equip it with a laser weapon that would fit inside the F-35B’s lift fan chamber; the engine shaft (with 20 MW available) would turn an alternator to generate enough electricity to run a 100 kW solid-state laser. General Atomics recently revealed plans for a 25 kW laser pod to be carried by the MQ-9B Skyguardian drone. They could even be an evolution of Direct Infrared Countermeasure systems that shine lasers at the IR seekers of aircraft and missiles. Add more and more power until they are destroying instead of merely blinding their targets. DIRCM systems already come with miniature turrets.
Lasers aboard jet fighters would be limited foremost by volume, weight and cooling capacity. They're unlikely to grow to the same scale as ground-based lasers, so flying megawatt lasers are further in the future. Airborne lasers might still reach the 100 kW scale. 100 kW of laser light would first serve as an electronic warfare tool: it would dazzle sensors trying to lock on to the flier and delay the 1 MW beam that could take it down.

Is it enough to defeat (hard-kill) laser turrets on the ground with counter-battery fire? A 1 MW laser subjects its own 1 m mirror to 127 W/cm^2. If the mirror is not blemish-free, that light is absorbed as heat instead of being reflected. The Laser Induced Damage Threshold (LIDT) for mirrors - the beam intensity sufficient to destroy the mirror surface - is around 10 kW/cm^2 against 535 nm light (half of the listed LIDT against 1070 nm).

[Image: LIDT values, which can rise higher with better coatings.]

A 100 kW laser with a 532 nm wavelength, focused by a 0.5 m diameter mirror at 1.5x the diffraction limit, can produce a spot with that intensity by firing from within 22 km. Such an attack would burn and crack the turrets' mirrors, leaving them unable to handle their own 1 MW beams without exploding into pieces. The trouble is that this is a relatively short distance, where a counter-counter-attack by an unaffected laser turret would destroy the 100 kW platform within milliseconds. Only one laser turret can be disabled at a time, and expensive stealth jets do not want to enter a numbers contest against $10m turrets over who can let loose the most beams and the most mirrors.

Disabling strikes on laser turrets would therefore have to be conducted by a ground-skimming airplane (or helicopter!) that could quickly pop up over the horizon at that distance, or by a very stealthy aircraft that could simply approach that far without being detected. Or a sort of very expensive missile-drone is sent to accompany other long-range missiles and respond to laser interception with its own laser. It would be the directed-energy equivalent of a jammer mounted on a missile, an example of which is the SPEAR-EW with a jammer in its nose. Each part of this kind of electronic attack against enemy air defences could have a DEW counterpart.

The obvious response is to equip laser turrets with shutters that protect their mirror when they are not firing. With shutters in play, a 1 MW turret will win a damage-threshold contest against a 100 kW flying laser.

[Image: A laser turret, but with armored doors that close.]

However, the flying laser could simply try to hide among the swarm of other missiles and wait for the laser turrets to open their shutters and start burning down other targets before firing in response. It's unclear how the use of pulsed lasers would affect the situation, as the LIDT of typical mirrors against such beams is merely 20 J/cm^2. Delivered from 20 km away through a 4 cm spot, that's a total pulse energy of 260 J; from 100 km away, it's 6.5 kJ. It's unknown whether aircraft could carry pulsed lasers with that performance in the next 20 years. (The sketch below runs through this counter-battery arithmetic.)

Lasers add a whole other level of complexity to air-to-air engagements. Aircraft equipped with powerful lasers can shoot down missiles fired at them, especially from long range. At shorter distances, aircraft equipped with 100 kW lasers become lethal to each other.

[Image: Northrop Grumman depiction of a laser-armed sixth-generation fighter.]
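Here is a minimal sketch of that counter-battery arithmetic, under the same assumed spot model (1.5x a 1.22*λ*R/D diffraction limit): the range at which a 100 kW, 532 nm beam from a 0.5 m mirror reaches the ~10 kW/cm^2 mirror damage threshold, and the total pulse energy implied by the 20 J/cm^2 pulsed threshold at 20 km and 100 km.

```python
import math

WAVELENGTH = 532e-9   # m
MIRROR = 0.5          # m, attacker's aperture
QUALITY = 1.5         # assumed multiple of the 1.22*lambda*R/D diffraction limit

def spot_diameter(range_m):
    return QUALITY * 1.22 * WAVELENGTH * range_m / MIRROR

def spot_area_cm2(range_m):
    return math.pi * (spot_diameter(range_m) * 100 / 2) ** 2

# Range at which a 100 kW beam averages 10 kW/cm^2 over its spot:
# the spot area must shrink to 100 kW / 10 kW/cm^2 = 10 cm^2.
needed_diameter_m = 2 * math.sqrt(10 / math.pi) / 100
kill_range_m = needed_diameter_m * MIRROR / (QUALITY * 1.22 * WAVELENGTH)
print(f"mirror-damaging range: ~{kill_range_m/1e3:.0f} km")

# Pulsed-laser case: total energy for 20 J/cm^2 over the whole spot.
for rng_km in (20, 100):
    energy_j = 20 * spot_area_cm2(rng_km * 1e3)
    print(f"{rng_km} km: spot {spot_diameter(rng_km*1e3)*100:.0f} cm, "
          f"pulse energy ~{energy_j:.0f} J")
```

This yields roughly 18 km, 240 J and 6 kJ against the ~22 km, ~260 J and ~6.5 kJ quoted above; the spread reflects exactly how the 'diffraction limit' and the spot rounding are defined, which the text doesn't pin down.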
Nations with large military budgets that can install lasers on their aircraft soonest would have a huge advantage over every other air force: a jet that can shoot down incoming missiles and then approach for a direct-fire kill that ignores most air combat kinematics (altitude, speed, relative position) would dominate opponents without a laser. Even after lasers arrive, the more powerful beam focused by the larger mirror would outrange opponents in a head-to-head engagement.

But between peer opponents, laser weapons would lead to stalemates or suicidal attacks. So aircraft would try to exploit the terrain below. Being unable to reasonably armor themselves, they can only use solid ground as protection. Skilled pilots would hide in depressions, hug mountains, and pop out for lightning-quick laser strikes or to launch a short-ranged missile that curves around cover to find its target in seconds. Funnily enough, the best aircraft at this sort of game is a helicopter. It can hover behind cover indefinitely, maneuver in all directions to deny enemy fire, and only needs to expose a mirror mounted on its rotor mast to retaliate.

[Image: A helicopter only needs to expose a mast-mounted laser to both see and fire at targets from behind terrain cover.]

Another interesting outcome is that large, lumbering planes such as the Boeing E-767 or Beriev A-50 - thought to be increasingly at risk today from ultra-long-range 'AWACS killer' air-to-air missiles such as the AIM-174B or PL-17 - would flip the situation once powerful lasers become available.

[Image: The Airborne Laser Laboratory mounted on an NKC-135A.]

They can shoot down long-range missiles effectively and out-range any smaller plane with direct laser fire. That raises a defensive net around large military aircraft that may be dozens of kilometers wide. The failed ancestor of this approach is the Boeing YAL-1, which had a 1-2 MW chemical oxygen iodine laser (COIL) at 1315 nm wavelength with a 1.57 m diameter mirror.

[Image: The Boeing YAL-1 first flew in 2002; the program was cancelled in 2011 and the aircraft scrapped in 2014. Should have picked a better wavelength!]

Because of their affordability and effectiveness, megawatt lasers for air defence would mean most nations, and even non-national military groups, could make air strikes a very complicated and expensive affair. Modern militaries that have historically relied on the strength of their air forces will be the most affected, as they'd quickly find their hundreds of 4th-generation jets (expected to operate until 2050+) and thousands of short-ranged missiles and bombs ineffective against defended sites. Their ability to deliver air strikes will have to be rebuilt around next-generation stealth craft, a slow and expensive process at best. There'd be diplomatic consequences in the meantime: a US Carrier Air Group sent sailing down the Red Sea becomes a much less potent message to surrounding nations when they can add megawatt lasers to their air defences for a few tens of millions of dollars.

Out of the Wild: How A.I. Is Transforming Conservation Biology

Artificial intelligence is being called a game changer for enabling scientists and conservationists to process vast troves of data collected remotely. But some warn its use could keep biologists from getting out in the field with the animals and ecosystems they are studying. Read more on E360 →
