In one sense, the concept of progress is simple, straightforward, and uncontroversial. In another sense, it contains an entire worldview.

The most basic meaning of “progress” is simply advancement along a path, or more generally from one state to another that is considered more advanced by some standard. (In this sense, progress can be good, neutral, or even bad—e.g., the progress of a disease.) The question is always: advancement along what path, in what direction, by what standard?

Types of progress

“Scientific progress,” “technological progress,” and “economic progress” are relatively straightforward. They are hard to measure, they are multi-dimensional, and we might argue about specific examples—but in general, scientific progress consists of more knowledge, better theories and explanations, a deeper understanding of the universe; technological progress consists of more inventions that work better (more powerfully or reliably or efficiently) and enable us to do more things; economic progress consists of more production, infrastructure, and wealth.

Together, we can call these “material progress”: improvements in our ability to comprehend and to command the material world. Combined with more intangible advances in the level of social organization—institutions, corporations, bureaucracy—these constitute “progress in capabilities”: that is, our ability to do whatever it is we decide on.

True progress

But this form of progress is not an end in itself. True progress is advancement toward the good, toward ultimate values—call this “ultimate progress,” or “progress in outcomes.” Defining this depends on axiology; that is, on our theory of value.

To a humanist, ultimate progress means progress in human well-being: “human progress.” Not everyone agrees on what constitutes well-being, but it certainly includes health, happiness, and life satisfaction. In my opinion, human well-being is not purely material, and not purely hedonic: it also includes “spiritual” values such as knowledge, beauty, love, adventure, and purpose. The humanist also sees other kinds of progress contributing to human well-being: “moral progress,” such as the decline of violence, the elimination of slavery, and the spread of equal rights for all races and sexes; and more broadly “social progress,” such as the evolution from monarchy to representative democracy, or the spread of education and especially literacy.

Others have different standards. Biologist David Graber called himself a “biocentrist,” by which he meant

… those of us who value wildness for its own sake, not for what value it confers upon mankind. … We are not interested in the utility of a particular species, or free-flowing river, or ecosystem, to mankind. They have intrinsic value, more value—to me—than another human body, or a billion of them. … Human happiness, and certainly human fecundity, are not as important as a wild and healthy planet.

By this standard, virtually all human activity is antithetical to progress: Graber called humans “a cancer… a plague upon ourselves and upon the Earth.” Or for another example, one Lutheran stated that his “primary measure of the goodness of a society is the population share which is a baptized Christian and regularly attending church.”
The idea of progress isn’t completely incompatible with some flavors of environmentalism or of religion (and there are both Christians and environmentalists in the progress movement!), but these examples show that it is possible to focus on a non-human standard, such as God or Nature, to the point where human health and happiness become irrelevant or even diametrically opposed to “progress.”

Unqualified progress

What are we talking about when we refer to “progress” unqualified, as in “the progress of mankind” or “the roots of progress”? “Progress” in this sense is the concept of material progress, social progress, and human progress as a unified whole. It is based on the premise that progress in capabilities really does on the whole lead to progress in outcomes. This doesn’t mean that all aspects of progress move in lockstep—they don’t. It means that all aspects of progress support each other and over the long term depend on each other; they are intertwined and ultimately inseparable.

Consider, for instance, how Patrick Collison and Tyler Cowen defined the term in their article calling for “progress studies”:

By “progress,” we mean the combination of economic, technological, scientific, cultural, and organizational advancement that has transformed our lives and raised standards of living over the past couple of centuries.

David Deutsch, in The Beginning of Infinity, is even more explicit, saying that progress includes “improvements not only in scientific understanding, but also in technology, political institutions, moral values, art, and every aspect of human welfare.”

Skepticism of this idea of progress is sometimes expressed as: “progress towards what?” The undertone of this question is: “in your focus on material progress, you have lost sight of social and/or human progress.” On the premise that different forms of progress are diverging and even coming into opposition, this is an urgent challenge; on the premise that progress is a unified whole, it is a valuable intellectual question but not a major dilemma.

Historical progress

“Progress” is also an interpretation of history according to which all these forms of progress have, by and large, been happening. In this sense, the study of “progress” is the intersection of axiology and history: given a standard of value, are things getting better? In Steven Pinker’s book Enlightenment Now: The Case for Reason, Science, Humanism, and Progress, the bulk of the chapters are devoted to documenting this history. Many of the charts in that book were sourced from Our World in Data, which also emphasizes the historical reality of progress.

So-called “progress”

Not everyone agrees with this concept of progress. It depends on an Enlightenment worldview that includes confidence in reason and science, and a humanist morality.

One argument against the idea of progress claims that material progress has not actually led to human well-being. Perhaps the benefits of progress are outweighed by the costs and risks: health hazards, technological unemployment, environmental damage, existential threats, etc. Some downplay or deny the benefits themselves, arguing that material progress doesn’t increase happiness (owing to the hedonic treadmill), that it doesn’t satisfy our spiritual values, or that it degrades our moral character. Rousseau famously asserted that “the progress of the sciences and the arts has added nothing to our true happiness” and that “our souls have become corrupted to the extent that our sciences and our arts have advanced towards perfection.” Others, as mentioned above, argue for a different standard of value altogether, such as nature or God.
(Often these arguments contain some equivocation between whether these things are good in themselves, or whether we should value them because they are good for human well-being over the long term.)

When people start to conclude that progress is not in fact good, they talk about this as no longer “believing in progress.” Historian Carl Becker, writing in the shadow of World War I, said that “the fact of progress is disputed and the doctrine discredited,” and asked: “May we still, in whatever different fashion, believe in the progress of mankind?” In 1991, Christopher Lasch asked:

How does it happen that serious people continue to believe in progress, in the face of massive evidence that might have been expected to refute the idea of progress once and for all?

Those who dispute the idea of progress often avoid the term, or quarantine it in scare quotes: so-called “progress.” When Jeremy Caradonna questioned the concept in The Atlantic, the headline was: “Is ‘Progress’ Good for Humanity?” One of the first court rulings on environmental protection law, in 1971, said that such law represented “the commitment of the Government to control, at long last, the destructive engine of material ‘progress.’” Or consider this from Guns, Germs, and Steel:

… I do not assume that industrialized states are “better” than hunter-gatherer tribes, or that the abandonment of the hunter-gatherer lifestyle for iron-based statehood represents “progress,” or that it has led to an increase in human happiness.

The idea of progress is inherently an idea that progress, overall, is good. If “progress” is destructive, if it does not in fact improve human well-being, then it hardly deserves the name.

Contrast this with the concept of growth. “Growth,” writ large, refers to an increase in the population, the economy, and the scale of human organization and activity. It is not inherently good: everyone agrees that it is happening, but some are against it; some even define themselves by being against it (the “degrowth” movement). No one is against progress, only against “progress”: that is, people either believe in the idea of progress or deny it. The most important question in the philosophy of progress, then, is whether the idea of progress is valid—whether “progress” is real.

“Progress” in the 19th century

Before the World Wars, there was an idea of progress that went even beyond what I have defined above, and which contained at least two major errors.

One error was the idea that progress is inevitable. Becker, in the essay quoted above, said that according to “the doctrine of progress,”

the Idea or the Dialectic or Natural Law, functioning through the conscious purposes or the unconscious activities of men, could be counted on to safeguard mankind against future hazards. … At the present moment the world seems indeed out of joint, and it is difficult to believe with any conviction that a power not ourselves … will ever set it right. (Emphasis added.)

The other was the idea that moral progress was so closely connected to material progress that they would always move together. Condorcet believed that prosperity would “naturally dispose men to humanity, to benevolence and to justice,” and that “nature has connected, by a chain which cannot be broken, truth, happiness, and virtue.”

The 20th century, with the outbreak of world war and the rise of totalitarianism, proved these ideas disastrously wrong.

“Progress” in the 21st century and beyond

To move forward, we need a wiser, more mature idea of progress.
Progress is not automatic or inevitable. It depends on choice and effort. It is up to us.

Progress is not automatically good. It must be steered. Progress always creates new problems, and they don’t get solved automatically. Solving them requires active focus and effort, and this is a part of progress, too.

Material progress does not automatically lead to moral progress. Technology within an evil social system can do more harm than good. We must commit to improving morality and society along with science, technology, and industry.

With these lessons well learned, we can rescue the idea of progress and carry it forward into the 21st century and beyond.
What is the ideal size of the human population? One common answer is “much smaller.” Paul Ehrlich, co-author of The Population Bomb (1968), has as recently as 2018 promoted the idea that “the world’s optimum population is less than two billion people,” a reduction of the current population by about 75%. And Ehrlich is a piker compared to Jane Goodall, who said that many of our problems would go away “if there was the size of population that there was 500 years ago”—that is, around 500 million people, a reduction of over 90%. This is a static ideal of a “sustainable” population.

Regular readers of this blog can cite many objections to this view. Resources are not static. Historically, as we run out of a resource (whale oil, elephant tusks, seabird guano), we transition to a new technology based on a more abundant resource—and there are basically no major examples of catastrophic resource shortages in the industrial age. The carrying capacity of the planet is not fixed, but a function of technology; and side effects such as pollution or climate change are just more problems to be solved. As long as we can keep coming up with new ideas, growth can continue.

But those are only reasons why a larger population is not a problem. Is there a positive reason to want a larger population? I’m going to argue yes—that the ideal human population is not “much smaller,” but “ever larger.”

Selfish reasons to want more humans

Let me get one thing out of the way up front. One argument for a larger population is based on utilitarianism, specifically the version of it that says that what is good is the sum total of happiness across all humans. If each additional life adds to the cosmic scoreboard of goodness, then it’s obviously better to have more people (unless they are so miserable that their lives are literally not worth living). I’m not going to argue from this premise, in part because I don’t need to and more importantly because I don’t buy it myself. (Among other things, it leads to paradoxes such as the idea that a population of thriving, extremely happy people is not as good as a sufficiently-larger population of people who are just barely happy.)

Instead, I’m going to argue that a larger population is better for every individual—that there are selfish reasons to want more humans. First I’ll give some examples of how this is true, and then I’ll draw out some of the deeper reasons for it.

More geniuses

First, more people means more outliers—more super-intelligent, super-creative, or super-talented people, to produce great art, architecture, music, philosophy, science, and inventions. If genius is defined as one-in-a-million level intelligence, then every billion people means another thousand geniuses—to work on all of the problems and opportunities of humanity, to the benefit of all.

More progress

A larger population means faster scientific, technical, and economic progress, for several reasons:

Total investment. More people means more total R&D: more researchers, and more surplus wealth to invest in it.

Specialization. In the economy generally, the division of labor increases productivity, as each worker can specialize and become expert at their craft (“Smithian growth”). In R&D, each researcher can specialize in their field.

Larger markets support more R&D investment, which lets companies pick off higher-hanging fruit.
I’ve given the example of the threshing machine: it was difficult enough to manufacture that it didn’t pay for a local artisan to make them only for their town, but it was profitable to serve a regional market. Alex Tabarrok gives the example of the market for cancer drugs expanding as large countries such as India and China become wealthier. Very high production-value entertainment, such as movies, TV, and games, is possible only because it has a mass audience.

More ambitious projects need a certain critical mass of resources behind them. Ancient Egyptian civilization built a large irrigation system to make the best use of the Nile floodwaters for agriculture, a feat that would not have been possible for a small tribe or chiefdom. The Apollo Program, at its peak in the 1960s, took over 4% of the US federal budget, but 4% would not have been enough if the population and the economy were half the size. If someday humanity takes on a grand project such as a space elevator or a Dyson sphere, it will require an enormous team and an enormous wealth surplus to fund it.

In fact, these factors may represent not only opportunities but requirements for progress. There is evidence that simply maintaining a constant rate of exponential economic growth requires exponentially growing investment in R&D. This investment is partly financial capital, but also partly human capital—that is, we need an exponentially growing base of researchers. One way to understand this is that if each researcher can push forward a constant “surface area” of the frontier, then as the frontier expands, a larger number of researchers is needed to keep pushing all of it forward. Two hundred years ago, a small number of scientists were enough to investigate electrical and magnetic phenomena; today, millions of scientists and engineers are productively employed working out all of the details and implications of those phenomena, both in the lab and in the electrical, electronics, and computer hardware and software industries.

But it’s not even clear that each researcher can push forward a constant surface area of the frontier. As that frontier moves further out, the “burden of knowledge” grows: each researcher now has to study and learn more in order to even get to the frontier. Doing so might force them to specialize even further. Newton could make major contributions to fields as diverse as gravitation and optics, because the very basics of those fields were still being figured out; today, a researcher might devote their whole career to a sub-sub-discipline such as nuclear astrophysics.

But in the long run, an exponentially growing base of researchers is impossible without an exponentially growing population. In fact, in some models of economic growth, the long-run growth rate in per-capita GDP is directly proportional to the growth rate of the population.
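To see where that proportionality comes from, here is a minimal sketch in the style of Jones-type semi-endogenous growth models (a standard setup in the growth literature; the symbols below are the textbook ones, not anything defined in this post). Ideas are produced by researchers pushing on the existing stock of knowledge, with diminishing returns as ideas get harder to find:

```latex
% Idea production, with diminishing returns (phi < 1) as ideas get harder to find:
\dot{A} = \delta \, A^{\phi} L_A^{\lambda}
% On a balanced growth path, g_A = \dot{A}/A is constant; taking logs and
% differentiating gives (\phi - 1)\, g_A + \lambda\, n = 0, where n is the
% growth rate of the researcher population L_A. Hence:
g_A = \frac{\lambda\, n}{1 - \phi}
```

Per-capita income growth is tied to g_A in models like this, so if population growth n falls to zero, long-run per-capita growth eventually falls to zero with it.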
More options

Even setting aside growth and progress—looking at a static snapshot of a society—a world with more people is a world with more choices, among greater variety:

Better matching for aesthetics, style, and taste. A bigger society has more cuisines, more architectural styles, more types of fashion, more sub-genres of entertainment. This also improves as the world gets more connected: for instance, the wide variety of ethnic restaurants in every major city is a recent phenomenon; it was only decades ago that pizza, to Americans, was an unfamiliar foreign cuisine.

Better matching to careers. A bigger economy has more options for what to do with your life. In a hunter-gatherer society, you are lucky if you get to decide whether to be a hunter or a gatherer. In an agricultural economy, you’re probably going to be a farmer, or maybe some sort of artisan. Today there’s a much wider set of choices, from pilot to spreadsheet jockey to lab technician.

Better matching to other people. A bigger world gives you a greater chance to find the perfect partner for you: the best co-founder for your business, the best lyricist for your songs, the best partner in marriage.

More niche communities. Whatever your quirky interest, worldview, or aesthetic—the more people you can be in touch with, the more likely you are to find others like you. Even if you’re one in a million, in a city of ten million people, there are enough of you for a small club. In a world of eight billion, there are enough of you for a thriving subreddit.

More niche markets. Similarly, in a larger, more connected economy, there are more people to economically support your quirky interests. Your favorite Etsy or Patreon creator can find the “one thousand true fans” they need to make a living.

Deeper patterns

When I look at the above, here are some of the underlying reasons:

The existence of non-rival goods. Rival goods need to be divided up; more people just create more competition for them. But non-rival goods can be shared by all. A larger population and economy, all else being equal, will produce more non-rival goods, which benefits everyone.

Economies of scale. In particular, total costs are often a combination of fixed and variable costs. The more output, the more the fixed costs can be amortized, lowering average cost.

Network effects and Metcalfe’s law. Value in a network is generated not by nodes but by connections, and the more nodes there are in total, the more connections are possible per node. Metcalfe’s law quantifies this: the number of possible connections in a network is proportional to the square of the number of nodes.

All of these create agglomeration effects: bigger societies are better for everyone. (A quick illustration of the network-effect arithmetic follows below.)
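A toy calculation (mine, not from the original post) makes the Metcalfe scaling concrete: the number of possible pairwise links grows with the square of the node count, so the connections available per node grow as well.

```python
def possible_links(n: int) -> int:
    """Metcalfe's law: every pair of nodes is a possible connection."""
    return n * (n - 1) // 2

for n in [10, 1_000, 1_000_000]:
    links = possible_links(n)
    print(f"{n:>9,} nodes: {links:>15,} possible links ({links / n:,.1f} per node)")
```

Ten people can form 45 relationships; a million people can form half a trillion, which is why each person in a bigger network has vastly more potential matches.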
A dynamic world

I assume that when Ehrlich and Goodall advocate for much smaller populations, they aren’t literally calling for genocide or hoping for a global catastrophe (although Ehrlich is happy with coercive fertility control programs, and other anti-humanists have expressed hope for “the right virus to come along”). Even so, the world they advocate is a greatly impoverished and stagnant one: a world with fewer discoveries, fewer inventions, fewer works of creative genius, fewer cures for fewer diseases, fewer choices, fewer soulmates. A world with a large and growing population is a dynamic world that can create and sustain progress.

For a different angle on the same thesis, see “Forget About Overpopulation, Soon There Will Be Too Few Humans,” by Roots of Progress fellow Maarten Boudry.

When Galileo wanted to study the heavens through his telescope, he got money from those legendary patrons of the Renaissance, the Medici. To win their favor, when he discovered the moons of Jupiter, he named them the Medicean Stars. Other scientists and inventors offered flashy gifts, such as Cornelis Drebbel’s perpetuum mobile (a sort of astronomical clock) given to King James, who made Drebbel court engineer in return. The other way to do research in those days was to be independently wealthy: the Victorian model of the gentleman scientist.

Eventually we decided that requiring researchers to seek wealthy patrons or have independent means was not the best way to do science. Today, researchers, in their role as “principal investigators” (PIs), apply to science funders for grants. In the US, the NIH spends nearly $48B annually, and the NSF over $11B, mainly to give such grants. Compared to the Renaissance, it is a rational, objective, democratic system.

However, I have come to believe that this principal investigator model is deeply broken and needs to be replaced. That was the thought at the top of my mind coming out of a working group on “Accelerating Science” hosted by the Santa Fe Institute a few months ago. (The thoughts in this essay were inspired by many of the participants, but I take responsibility for any opinions expressed here. My thinking on this was also influenced by a talk given by James Phillips at a previous metascience conference. My own talk at the workshop was written up here earlier.)

What should we do instead of the PI model? Funding should go in a single block to a relatively large research organization of, say, hundreds of scientists. This is how some of the most effective, transformative labs in the world have been organized, from Bell Labs to the MRC Laboratory of Molecular Biology. It has been referred to as the “block funding” model. Here’s why I think this model works:

1. Specialization

A principal investigator has to play multiple roles. They have to do science (researcher), recruit and manage grad students or research assistants (manager), maintain a lab budget (administrator), and write grants (fundraiser). These are different roles, and not everyone has the skill or inclination to do them all. The university model adds teaching, a fifth role.

The block organization allows for specialization: researchers can focus on research, managers can manage, and one leader can fundraise for the whole org. This allows each person to do what they are best at and enjoy, and it frees researchers from spending 30–50% of their time writing grants, as is typical for PIs.

I suspect it also creates more of an opportunity for leadership in research. Research leadership involves having a vision for an area to explore that will be highly fruitful—semiconductors, molecular biology, etc.—and then recruiting talent and resources to the cause. This seems more effective when done at the block level.

Side note: the distinction I’m talking about here, between block funding and PI funding, doesn’t say anything about where the funding comes from or how those decisions are made. But today, researchers are often asked to serve on committees that evaluate grants. Making funding decisions is yet another role we add to researchers, and one that also deserves to be its own specialty (especially since having researchers evaluate their own competitors sets up an inherent conflict of interest).
2. Research freedom and time horizons

There’s nothing inherent to the PI grant model that dictates the size of the grant, the scope of activities it covers, the length of time it is for, or the degree of freedom it allows the researcher. But in practice, PI funding has evolved toward small grants for incremental work, with little freedom for the researcher to change their plans or strategy.

I suspect the block funding model naturally lends itself to larger grants for longer time periods that are more at the vision level. When you’re funding a whole department, you’re funding a mission and placing trust in the leadership of the organization. Also, breakthroughs are unpredictable, but the more people you have working on things, the more regularly they will happen. A lab can justify itself more easily with regular achievements. In this way one person’s accomplishment provides cover to those who are still toiling away.

3. Who evaluates researchers

In the PI model, grant applications are evaluated by funding agencies: in effect, each researcher is evaluated by the external world. In the block model, a researcher is evaluated by their manager and their peers. James Phillips illustrates this with a diagram. [Diagram: James Phillips]

A manager who knows the researcher well, who has been following their work closely, and who talks to them about it regularly, can simply make better judgments about who is doing good work and whose programs have potential. (And again, developing good judgment about researchers and their potential is a specialized role—see point 1.)

Further, when a researcher is evaluated impersonally by an external agency, they need to write up their work formally, which adds overhead to the process. They need to explain and justify their plans, which leads to more conservative proposals. They need to show outcomes regularly, which leads to more incremental work. And funding will disproportionately flow to people who are good at fundraising (which, again, deserves to be a specialized role).

To get scientific breakthroughs, we want to allow talented, dedicated people to pursue hunches for long periods of time. This means we need to trust the process, long before we see the outcome. Several participants in the workshop echoed this theme of trust. Trust like that is much stronger when based on a working relationship, rather than simply on a grant proposal.

If the block model is a superior alternative, how do we move towards it? I don’t have a blueprint. I doubt that existing labs will transform themselves into this model. But funders could signal their interest in funding labs like this, and new labs could be created or proposed on this model and seek such funding. I think the first step is spreading this idea.
In December, I went to the Foresight Institute’s Vision Weekend 2023 in San Francisco. I had a lot of fun talking to a bunch of weird and ambitious geeks about the glorious abundant technological future. Here are a few things I learned about (with the caveat that this is mostly based on informal conversations with only basic fact-checking, not deep research):

Cellular reprogramming

Aging doesn’t only happen to your body: it happens at the level of individual cells. Over time, cells accumulate waste products and undergo epigenetic changes that are markers of aging. But wait—when a baby is born, it has young cells, even though it grew out of cells that were originally from its older parents. That is, the egg and sperm cells might be 20, 30, or 40 years old, but somehow when they turn into a baby, they get reset to biological age zero. This process is called “reprogramming,” and it happens soon after fertilization.

It turns out that cell reprogramming can be induced by certain proteins, known as the Yamanaka factors, after their discoverer (who won a Nobel for this in 2012). Could we use those proteins to reprogram our own cells, making them youthful again? Maybe. There is a catch: the Yamanaka factors not only clear waste out of cells, they also reset them to become stem cells. You do not want to turn every cell in your body into a stem cell. You don’t even want to turn a small number of them into stem cells: it can give you cancer (which kind of defeats the purpose of a longevity technology).

But there is good news: when you expose cells to the Yamanaka factors, the waste cleanup happens first, and the stem cell transformation happens later. If we can carefully time the exposure, maybe we can get the target effect without the damaging side effects. This is tricky: different tissues respond on different timelines, so you can’t apply the treatment uniformly over the body. There are a lot of details to be worked out here. But it’s an intriguing line of research for longevity, and it’s one of the avenues being explored at Retro Bio, among other places. Here’s a Derek Lowe article with more info and references.

The BFG orbital launch system

If we’re ever going to have a space economy, it has to be a lot cheaper to launch things into space. Space Shuttle launches cost over $65,000/kg, and even the Falcon Heavy costs $1500/kg. Compare to shipping costs on Earth, which are only a few dollars per kilogram. A big part of the high launch cost in traditional systems is the rocket, which is discarded with each launch. SpaceX is bringing costs down by making reusable rockets that land gently rather than crashing into the ocean, and by making very big rockets for economies of scale (Elon Musk has speculated that Starship could bring costs as low as $10/kg, although this is a ways off, since right now fuel costs alone are close to that amount).

But what if we didn’t need a rocket at all? Rockets are pretty much our only option for propulsion in space, but what if we could give most of the impulse to the payload on Earth? J. Storrs Hall has proposed the “space pier,” a runway 300 km long mounted atop towers 100 km tall. The payload takes an elevator 100 km up to the top of the tower, thus exiting the atmosphere. Then a linear induction motor accelerates it into orbit along the 300 km track. You could do this with a mere 10 Gs of acceleration, which is survivable by human passengers.
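As a quick sanity check on those figures (my own back-of-the-envelope arithmetic, not Hall's): constant acceleration at 10 g along a 300 km track yields an exit speed just shy of circular orbital velocity at the 100 km release altitude.

```python
import math

g = 9.81               # m/s^2
a = 10 * g             # 10 g along the track
track_length = 300e3   # 300 km, in meters

# Accelerating from rest: v^2 = 2 * a * s
v_exit = math.sqrt(2 * a * track_length)   # ~7.7 km/s
t_on_track = v_exit / a                    # ~78 seconds

# Circular orbital speed at the 100 km release altitude
mu = 3.986e14          # Earth's gravitational parameter, m^3/s^2
R_earth = 6.371e6      # mean Earth radius, m
v_circular = math.sqrt(mu / (R_earth + 100e3))  # ~7.8 km/s

print(f"exit speed:    {v_exit / 1000:.2f} km/s after {t_on_track:.0f} s")
print(f"orbital speed: {v_circular / 1000:.2f} km/s at 100 km altitude")
```

The exit speed comes out within a few percent of circular orbital speed, consistent with the claim that the track can throw payloads essentially into orbit.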
Think of it like a Big Friendly Giant (BFG) picking up your payload and then throwing it into orbit. Hall estimates that this could bring launch costs down to $10/kg, if the pier could be built for a mere $10 billion. The only tiny little catch with the space pier is that there is no technology in existence that could build it, and no construction material that a 100 km tower could be made of. Hall suggests that with “mature nanotechnology” we could build the towers out of diamond. OK. So, probably not going to happen this decade.

What can we do now, with today’s technology? Let’s drop the idea of using this for human passengers and just consider relatively durable freight. Now we can use much higher G-forces, which means we don’t need anything close to 300 km of distance to accelerate over. And, does it really have to be 100 km tall? Yes, it’s nice to start with an altitude advantage, and with no atmosphere, but both of those problems can be overcome with sufficient initial velocity. At this point we’re basically just talking about an enormous cannon (a very different kind of BFG).

This is what Longshot Space is doing. Build a big long tube in the desert. Put the payload in it, seal the end with a thin membrane, and pump the air out to create a vacuum. Then rapidly release some compressed gasses behind the payload, which bursts through the membrane and exits the tube at Mach 25.

One challenge with this is that a gas can only expand as fast as the speed of sound in that gas. In air this is, of course, a lot less than Mach 25. One thing that helps is to use a lighter gas, in which the speed of sound is higher (for an ideal gas, the speed of sound scales inversely with the square root of the molecular mass), such as helium or (for the very brave) hydrogen. Another part of the solution is to give the payload a long, wedge-shaped tail. The expanding gasses push sideways on this tail, which through the magic of simple machines translates into a much faster push forwards. There’s a brief discussion and illustration of the pneumatics in this video.

Now, if you are trying to envision “big long tube in the desert,” you might be wondering: is the tube angled upwards or something? No. It is basically lying flat on the ground. It is expensive to build a long straight thing that points up: you have to dig a deep hole and/or build a tall tower. What about putting it on the side of a mountain, which naturally points up? Building things on mountains is also hard; in addition, mountains are special and nobody wants to give you one. It’s much easier to haul lots of materials into the middle of the desert; also there is lots of room out there and the real estate is cheap.

Next you might be wondering: if the tube is horizontal, isn’t it pointed in the wrong direction to get to space? I thought space was up? Well, yes. There are a few things going on here. One is that if you travel far enough in a straight line, the Earth will curve away from you and you will eventually find yourself in space. Another is that if you shape the projectile such that its center of pressure is in the right place relative to its center of mass, then it will naturally angle upward when it hits the atmosphere. Lastly, if you are trying to get into orbit, most of the velocity you need is actually horizontal anyway. In fact, if and when you reach a circular orbit, you will find that all of your velocity is horizontal. This means that there is no way to get into orbit purely ballistically, with a single impulse imparted from Earth.
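To put a rough number on that, here is an idealized illustration (my own assumed figures, not Longshot's): treat the launch as a ballistic ellipse that grazes the surface at perigee and peaks at, say, 250 km, then use the vis-viva equation to size the circularization burn needed at the top. This ignores atmospheric drag (which is enormous in reality, and is why the projectile must exit at Mach 25) and Earth's rotation:

```python
import math

mu = 3.986e14   # Earth's gravitational parameter, m^3/s^2
R = 6.371e6     # mean Earth radius, m
h = 250e3       # assumed apogee altitude for the ballistic arc, m

r_apogee = R + h
a_ellipse = (R + r_apogee) / 2   # semi-major axis: perigee grazes the surface

# Vis-viva: v^2 = mu * (2/r - 1/a)
v_at_apogee = math.sqrt(mu * (2 / r_apogee - 1 / a_ellipse))
v_circular = math.sqrt(mu / r_apogee)   # circular orbit speed at apogee

print(f"circularization burn: {v_circular - v_at_apogee:.0f} m/s")  # ~75 m/s
```

Even with generous margins for drag and steering losses, the onboard stage only has to supply a small kick, versus the roughly 9 to 10 km/s of total delta-v a rocket must generate from the pad.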
Any satellite, for instance, launched via this system will need its own rocket propulsion in order to circularize the orbit once it reaches altitude (even leaving aside continual orbital adjustments during its service lifetime). But we’re now talking about a relatively small rocket with a small amount of fuel, not the big multi-stage things that you need to blast off from the surface. And presumably someday we will be delivering food, fuel, tools, etc. to space in packages that just need to be caught by whoever is receiving them.

Longshot estimates that this system, like Starship or the space pier, could get launch costs down to about $10/kg. This might be cheap enough that launch prices could be zero, subsidized by contracts to buy fuel or maintenance, in a space-age version of “give away the razor and sell the blades.” Not only would this business model help grow the space economy, it would also prove wrong all the economists who have been telling us for decades that “there’s no such thing as a free launch.”

Mars could be terraformed in our lifetimes

Terraforming a planet sounds like a geological process, and so I had sort of thought that it would require geological timescales, or if it could really be accelerated, at least a matter of centuries or so. You drop off some algae or something on a rocky planet, and then your distant descendants return one day to find a verdant paradise. So I was surprised to learn that major changes on Mars could, in principle, be made on a schedule much shorter than a single human lifespan.

Let’s back up. Mars is a real fixer-upper of a planet. Its temperature varies widely, averaging about −60 °C; its atmosphere is thin and mostly carbon dioxide. This severely depresses its real estate values. Suppose we wanted to start by significantly warming the planet. How do you do that?

Let’s assume Mars’s orbit cannot be changed—I mean, we’re going to get in enough trouble with the Sierra Club as it is—so the total flux of solar energy reaching the planet is constant. What we can do is to trap a bit more of that energy on the planet, and prevent it from radiating out into space. In other words, we need to enhance Mars’s greenhouse effect. And the way to do that is to give it a greenhouse gas.

Wait, we just said that Mars’s atmosphere is mostly CO2, which is a notorious greenhouse gas, so why isn’t Mars warm already? It’s just not enough: the atmosphere is very thin (less than 1% of the pressure of Earth’s atmosphere), and what CO2 there is provides only about 5 °C of warming. We’re going to need to add more GHG. What could it be? Well, for starters, given the volumes required, it should be composed of elements that already exist on Mars. With the ingredients we have, what can we make?

Could we get more CO2 in the atmosphere? There is more CO2 on/under the surface, in frozen form, but even that is not enough for the task. We need something else. What about CFCs? As a greenhouse gas, they are about four orders of magnitude more efficient than CO2, so we’d need a lot less of them. However, they require fluorine, which is very rare in the Martian soil, and we’d still need about 100 gigatons of them. This is not encouraging.

One thing Mars does have a good amount of is metal, such as iron, aluminum, and magnesium. Now metals, you might be thinking, are not generally known as greenhouse gases. But small particles of conductive metal, with the right size and shape, can act as one.
A recent paper found through simulation that “nanorods” about 9 microns long, half the wavelength of the infrared thermal radiation given off by a planet, would scatter that radiation back to the surface (Ansari, Kite, Ramirez, Steele, and Mohseni, “Warming Mars with artificial aerosol appears to be feasible”—no preprint online, but this poster seems to represent earlier work).

Suppose we aim to warm the planet by about 30 °C, enough to melt surface water in the polar regions during the summer, and bring Mars much closer to Earth temperatures. AKRSM’s simulation says that we would need to put about 400 mg/m³ of nanorods into the Martian sky, an efficiency (in warming per unit mass) more than 2000x greater than previously proposed methods. The particles would settle out of the atmosphere slowly, at less than 1/100 the rate of natural Mars dust, so only about 30 liters/sec of them would need to be released continuously. If we used iron, this would require mining a million cubic meters of iron per year—quite a lot, but less than 1% of what we do on Earth. And the particles, like other Martian dust, would be lifted high in the atmosphere by updrafts, so they could be conveniently released from close to the surface.
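The release-rate and mining figures are easy to sanity-check (my own unit conversion; the iron density is a standard handbook value, not a number from the paper):

```python
seconds_per_year = 365.25 * 24 * 3600    # ~3.16e7 s

release_rate = 30e-3                     # 30 liters/sec of nanorods, in m^3/s
volume_per_year = release_rate * seconds_per_year
print(f"volume released: {volume_per_year:.2e} m^3/year")  # ~9.5e5, about a million

iron_density = 7870                      # kg/m^3
mass_per_year = volume_per_year * iron_density
print(f"mass released:   {mass_per_year / 1e9:.1f} million tonnes/year")  # ~7.5
```

The volume comes out at just under a million cubic meters per year, matching the figure in the text.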
Wouldn’t metal nanoparticles be potentially hazardous to breathe? Yes, but this is already a problem from Mars’s naturally dusty atmosphere, and the nanorods wouldn’t make it significantly worse. (However, this will have to be solved somehow if we’re going to make Mars habitable.)

Kite told me that if we started now, given the capabilities of Starship, we could achieve the warming in a mere twenty years. Most of that time is just getting equipment to Mars, mining the iron, manufacturing the nanorods, and then waiting about a year for Martian winds to mix them throughout the atmosphere. Since Mars has no oceans to provide thermal inertia, the actual warming after that point only takes about a month.

Kite is interested in talking to people about the design of the nanorod factory. He wants to get a size/weight/power estimate and an outline design for the factory, to make an initial estimate of how many Starship landings would be needed. Contact him at edwin.kite@gmail.com.

I have not yet gotten Kite and Longshot together to figure out if we can shoot the equipment directly to Mars using one really enormous space cannon.

Thanks to Reason, Mike Grace, and Edwin Kite for conversations and for commenting on a draft of this essay. Any errors or omissions above are entirely my own.

More in science
Touch, Our Most Complex Sense, Is a Landscape of Cellular Sensors (Quanta Magazine)

Every soft caress of wind, searing burn, and seismic rumble is detected by our skin’s tangle of touch sensors. David Ginty has spent his career cataloging the neurons beneath everyday sensations.
The topography of Colombia is dominated by the Andes. While manifested as a single mountain range from Ecuador southwards, the mountains split into three ranges (or cordilleras) near the Colombia/Ecuador border, and these three ranges span the length of Colombia from this southern border towards Venezuela in the northeast. Despite the vast amount of territory contained by these cordilleras (and their associated river valleys), around half of Colombia consists of flat lowlands, especially east of the mountains.

[Photo: Burrowing Owls]

In general, this eastern half of Colombia consists of the humid Amazon rainforest to the south and drier plains to the north. These plains (los llanos in Spanish) are productive areas for raising cattle and other livestock, similar to the plains found in parts of Argentina, Paraguay, Uruguay, and Brazil in the south of the continent. Forests are more limited, and the land is a mosaic of seasonal wetlands, open savannahs, palm swamps, and gallery forest along the productive rivers.

After the conclusion of the main tour, four travellers joined local guide Cris and me for a visit to these eastern plains. We would be staying at Juan Solito Ecolodge, located within a massive ranch called Hato La Aurora. Cattle roam across the landscape, coexisting with the abundant wildlife that thrives in this region. The ecological health of this ranch is relatively high since the cattle are at a low density and all of the original forest cover has been preserved. Jaguars in particular can be found in good numbers, while Green Anacondas are frequently observed in the numerous wetlands dotting the property.

[Photos: Orinoco Geese; Sharp-tailed Ibis]

Our visit coincided with the dry season. While at certain times of the year the wetlands stretch across the landscape, at this time of year they are much reduced in size. This concentrates the many mammals, reptiles, and birds that rely on these life-giving wetlands.

[Photos: Capybaras (Hydrochoerus hydrochaeris); Scarlet Ibises]

The temperatures are very high here in the lowlands, and so we prioritized being out early and late in the day when species are the most active. Dawn in this region is spectacular, with nearly every bird vocal, and we typically crossed the 100-species threshold each day by 8 or 9 AM.

[Photos: Burnished-buff Tanager; Black-crested Antshrike; Masked Cardinal; Double-striped Thick-knee; Nacunda Nighthawk]

Several birds found here are endemic to the plains of northeastern Colombia and western Venezuela, including Pale-headed Jacamar, Venezuelan Troupial, Sharp-tailed Ibis, Crestless Curassow, and White-bearded Flycatcher. We succeeded with all of them, with the jacamar, flycatcher, and ibis easily found around the lodge!

[Photos: Pale-headed Jacamar; White-bearded Flycatcher; Venezuelan Troupial]

Much of our exploration was done from a safari-style pick-up truck that had two rows of padded seats in the bed and a roof sheltering us from the sun.

[Photo: Our truck]

We followed dirt tracks throughout the vast expanses of the ranch, visiting various wetlands and forest habitats. The wetland birds were especially numerous - seven species of ibis, hundreds of White-faced and Black-bellied Whistling-Ducks, herds of Capybaras, innumerable herons, egrets, and jacanas, and much more. In our four-night stay we found around 180 bird species.

[Photos: Scarlet Macaw; Large-billed Tern; Capybara (Hydrochoerus hydrochaeris); Roseate Spoonbill]

We kept an eye out for reptiles and encountered quite a few species, including a couple of big targets, figuratively and literally!
We found an adult Orinoco Crocodile (along with dozens of Spectacled Caimans). The Orinoco Crocodile is a critically endangered species endemic to this ecoregion, and only a few hundred remain in the wild.

[Photos: Orinoco Crocodile (Crocodylus intermedius)]

The other “big” target was Green Anaconda, and we succeeded in finding three individuals! Most impressive was a huge female, likely over 5 meters in length, that was mating with a much smaller male in a shallow wetland. This was, by far, the biggest snake I had ever seen.

[Photos: Green Anaconda (Eunectes murinus)]

Other reptile highlights included Cryptic Golden Tegu and Savannah Side-necked Turtle.

[Photo: Savannah Side-necked Turtle (Podocnemis vogli)]

Though our main target was undoubtedly the Jaguar, I still placed the odds of finding this secretive species as fairly low. We had struck out on the previous trip in 2022, and I did not want to get my hopes up. And during our first few days, we had no luck despite spending some time in some of the better areas where they are occasionally seen.

Then, one afternoon as we were bumping along a dirt track, our local guide Jovani suddenly shout-whispered “Jaguar! Jaguar!”. There, only 50 meters from us, was this absolutely magnificent Jaguar slinking through the grasses. The encounter lasted only around 15 seconds, but it was unforgettable.

[Photos: Jaguar (Panthera onca)]

The Jaguar was an exciting way to close out an amazing tour extension to the Juan Solito Ecolodge. I hope to return one day!
For more than a century, women and racial minorities have fought for access to education and employment opportunities once reserved exclusively for white men. The life of Yvonne Young “Y.Y.” Clark is a testament to the power of perseverance in that fight. As a smart Black woman who shattered the barriers imposed by race and gender, she made history multiple times during her career in academia and industry.

She probably is best known as the first woman to serve as a faculty member in the engineering college at Tennessee State University, in Nashville. Her pioneering spirit extended far beyond the classroom, however, as she continuously staked out new territory for women and Black professionals in engineering. She accomplished a lot before she died on 27 January 2019 at her home in Nashville at the age of 89.

Clark is the subject of the latest biography in IEEE-USA’s Famous Women Engineers in History series. “Don’t Give Up” was her mantra.

An early passion for technology

Born on 13 April 1929 in Houston, Clark moved with her family to Louisville, Ky., as a baby. She was raised in an academically driven household. Her father, Dr. Coleman M. Young Jr., was a surgeon. Her mother, Hortense H. Young, was a library scientist and journalist. Her mother’s “Tense Topics” column, published by the Louisville Defender newspaper, tackled segregation, housing discrimination, and civil rights issues, instilling an awareness of social justice in Y.Y.

Clark’s passion for technology became evident at a young age. As a child, she secretly repaired her family’s malfunctioning toaster, surprising her parents. It was a defining moment, signaling to her family that she was destined for a career in engineering—not in education like her older sister, a high school math teacher.

“Y.Y.’s family didn’t create her passion or her talents. Those were her own,” said Carol Sutton Lewis, co-host and producer for the third season of the “Lost Women of Science” podcast, on which Clark was profiled. “What her family did do, and what they would continue to do, was make her interests viable in a world that wasn’t fair.”

Clark’s interest in studying engineering was precipitated by her passion for aeronautics. She said all the pilots she spoke with had studied engineering, so she was determined to do so. She joined the Civil Air Patrol and took simulated flying lessons. She then learned to fly an airplane with the help of a family friend.

Despite her academic excellence, though, racial barriers stood in her way. She graduated at age 16 from Louisville’s Central High School in 1945. Her parents, concerned that she was too young to attend college, sent her to Boston for two additional years at the Girls’ Latin School and Roxbury Memorial High School. She then applied to the University of Louisville, where she was initially accepted and offered a full scholarship. When university administrators realized she was Black, however, they rescinded the scholarship and the admission, Clark said on the “Lost Women of Science” podcast, which included clips from when her daughter interviewed her in 2007. As Clark explained in the interview, the state of Kentucky offered to pay her tuition to attend Howard University, a historically Black college in Washington, D.C., rather than integrate its publicly funded university.

Breaking barriers in higher education

Although Howard provided an opportunity, it was not free of discrimination. Clark faced gender-based barriers, according to the IEEE-USA biography.
She was the only woman among 300 mechanical engineering students, many of whom were World War II veterans. Despite the challenges, she persevered and in 1951 became the first woman to earn a bachelor’s degree in mechanical engineering from the university. The school downplayed her historic achievement, however. In fact, she was not allowed to march with her classmates at graduation. Instead, she received her diploma during a private ceremony in the university president’s office.

A career defined by firsts

Determined to forge a career in engineering, Clark repeatedly encountered racial and gender discrimination. In a 2007 Society of Women Engineers (SWE) StoryCorps interview, she recalled that when she applied for an engineering position with the U.S. Navy, the interviewer bluntly told her, “I don’t think I can hire you.” When she asked why not, he replied, “You’re female, and all engineers go out on a shakedown cruise,” the trip during which the performance of a ship is tested before it enters service or after it undergoes major changes such as an overhaul. She said the interviewer told her, “The omen is: ‘No females on the shakedown cruise.’”

Clark eventually landed a job with the U.S. Army’s Frankford Arsenal gauge laboratories in Philadelphia, becoming the first Black woman hired there. She designed gauges and finalized product drawings for the small-arms ammunition and range-finding instruments manufactured there. Tensions arose, however, when some of her colleagues resented that she earned more money due to overtime pay, according to the IEEE-USA biography. To ease workplace tensions, the Army reduced her hours, prompting her to seek other opportunities.

Her future husband, Bill Clark, saw the difficulty she was having securing interviews and suggested she use the gender-neutral name Y.Y. on her résumé. The tactic worked. She became the first Black woman hired by RCA in 1955. She worked for the company’s electronic tube division in Camden, N.J. Although she excelled at designing factory equipment, she encountered more workplace hostility. “Sadly,” the IEEE-USA biography says, she “felt animosity from her colleagues and resentment for her success.”

When Bill, who had taken a faculty position as a biochemistry instructor at Meharry Medical College in Nashville, proposed marriage, she eagerly accepted. They married in December 1955, and she moved to Nashville. In 1956 Clark applied for a full-time position at Ford Motor Co.’s Nashville glass plant, where she had interned during the summers while she was a Howard student. Despite her qualifications, she was denied the job due to her race and gender, she said.

She decided to pursue a career in academia, becoming in 1956 the first woman to teach mechanical engineering at Tennessee State University. In 1965 she became the first woman to chair TSU’s mechanical engineering department. While teaching at TSU, she pursued further education, earning a master’s degree in engineering management from Nashville’s Vanderbilt University in 1972—another step in her lifelong commitment to professional growth. After 55 years with the university, where she was also a freshman student advisor for much of that time, Clark retired in 2011 and was named professor emeritus.
A legacy of leadership and advocacy

Clark’s influence extended far beyond TSU. She was active in the Society of Women Engineers after becoming its first Black member in 1951. Racism, however, followed her even within professional circles. At the 1957 SWE conference in Houston, the event’s hotel initially refused her entry due to segregation policies, according to a 2022 profile of Clark. Under pressure from the society’s leadership, the hotel compromised; Clark could attend sessions but had to be escorted by a white woman at all times and was not allowed to stay in the hotel despite having paid for a room. She was reimbursed and instead stayed with relatives. As a result of that incident, the SWE vowed never again to hold a conference in a segregated city.

Over the decades, Clark remained a champion for women in STEM. In one SWE interview, she advised future generations: “Prepare yourself. Do your work. Don’t be afraid to ask questions, and benefit by meeting with other women. Whatever you like, learn about it and pursue it.

“The environment is what you make it. Sometimes the environment is hostile, but don’t worry about it. Be aware of it so you aren’t blindsided.”

Her contributions earned her numerous accolades, including the 1998 SWE Distinguished Engineering Educator Award and the 2001 Tennessee Society of Professional Engineers Distinguished Service Award.

A lasting impression

Clark’s legacy was not confined to engineering; she was deeply involved in Nashville community service. She served on the board of the 18th Avenue Family Enrichment Center and participated in the Nashville Area Chamber of Commerce. She was active in the Hendersonville Area chapter of The Links, a volunteer service organization for Black women, and the Nashville alumnae chapter of the Delta Sigma Theta sorority. She also mentored members of the Boy Scouts, many of whom went on to pursue engineering careers.

Clark spent her life knocking down barriers that tried to impede her. She didn’t just break the glass ceiling—she engineered a way through it for people who came after her.
[Note that this article is a transcript of the video embedded above.]

Late in the night of Valentine’s Day 2014, air monitors at an underground nuclear waste repository outside Carlsbad, New Mexico, detected the release of radioactive elements, including americium and plutonium, into the environment. Ventilation fans automatically switched on to exhaust contaminated air up through a shaft, through filters, and out to the environment above ground. When the filters were checked the following morning, technicians found that they contained transuranic materials, highly radioactive particles that are not naturally found on Earth. In other words, a container of nuclear waste in the repository had been breached. The site was shut down and employees sent home, but it would be more than a year before the bizarre cause of the incident was released. I’m Grady, and this is Practical Engineering.

The dangers of the development of nuclear weapons aren’t limited to mushroom clouds and doomsday scenarios. The process of creating the exotic, transuranic materials necessary to build thermonuclear weapons creates a lot of waste, which itself is uniquely hazardous. Clothes, tools, and materials used in the process may stay dangerously radioactive for thousands of years. So, a huge part of working with nuclear materials is planning how to manage waste. I try not to make predictions about the future, but I think it’s safe to say that the world will probably be a bit different in 10,000 years. More likely, it will be unimaginably different. So, ethical disposal of nuclear waste means not only protecting ourselves but also protecting whoever is here long after we are ancient memories or even forgotten altogether. It’s an engineering challenge pretty much unlike any other, and it demands some creative solutions.

The Waste Isolation Pilot Plant, or WIPP, was built in the 1980s in the desert outside Carlsbad, New Mexico, a site selected for a very specific reason: salt. One of the most critical jobs for long-term permanent storage is to keep radioactive waste from entering groundwater and dispersing into the environment. So, WIPP was built inside an enormous and geologically stable formation of salt, roughly 2,000 feet or 600 meters below the surface. The presence of ancient salt is an indication that groundwater doesn’t reach this area, since the water would dissolve it. And the salt has another beneficial behavior: it’s mobile. Over time, the walls and ceilings of mined-out salt tend to act in a plastic manner, slowly creeping inwards to fill the void. This is ideal in the long term because it will ultimately entomb the waste at WIPP in a permanent manner. It does make things more complicated in the meantime, though, since they have to constantly work to keep the underground open during operation. This process, called “ground control,” involves techniques like drilling and installing roof bolts in epoxy to hold up the ceilings. I have an older video on that process if you want to learn more after this. The challenge in this case is that, eventually, we want the roof bolts to fail, allowing a gentle collapse of salt to fill the void, because that salt does an important job: the salt, and just being deep underground in general, acts to shield the environment from radiation.
In fact, a deep salt mine is such a well-shielded place that there’s an experimental laboratory inside WIPP, on the opposite side of the underground from the waste panels, where universities run cutting-edge physics experiments precisely because of the low radiation levels. The thousands of feet of material above the lab shield it from cosmic and solar radiation, and salt has much lower inherent radioactivity than other kinds of rock. Imagine that: a low-radiation lab inside a nuclear waste dump.

Four shafts extend from the surface into the underground repository, moving people, waste, and air into and out of the facility. Room-and-pillar mining is used to excavate the horizontal drifts, or panels, where waste is stored. Investigators were eventually able to re-enter the repository and search for the cause of the breach. They found the source in Panel 7, Room 7, the area of active disposal at the time. Pressure and heat had burst a drum, starting a fire, damaging nearby containers, and ultimately releasing radioactive materials into the air.

On activation of the radiation alarm, the underground ventilation system automatically switched to filtration mode, sending air through massive HEPA filters. Interestingly, although they’re a pretty common consumer good now, High Efficiency Particulate Air, or HEPA, filters actually got their start during the Manhattan Project, specifically to filter radionuclides from the air. The ventilation system at WIPP performed well, although there was some leakage past the filters, allowing a small percentage of radioactive material to bypass them and release directly into the atmosphere at the surface (a quick sketch at the end of this section shows why even a small bypass path matters). Twenty-one workers tested positive for low-level exposure to radioactive contamination but, thankfully, were unharmed. Both WIPP and independent testing organizations confirmed that the detected levels were very low, that the particles did not spread far, and that they were extremely unlikely to result in radiation-related health effects for workers or the public. Thankfully, the safety features at the facility worked, but it would take investigators much longer to understand what went wrong in the first place, and that meant tracing the waste barrel back to its source.

It all started at Los Alamos National Laboratory, one of the labs created as part of the 1940s Manhattan Project that first developed atomic bombs in the desert of New Mexico. The 1970s brought a renewed interest in cleaning up various Department of Energy sites, and Los Alamos was tasked with recovering plutonium from residue materials left over from previous wartime and research efforts. That process involved using nitric acid to separate plutonium from uranium. Once the plutonium is extracted, you’re left with nitrate solutions that get neutralized or evaporated, creating a solid waste stream that contains residual radioactive isotopes. In 1985, a volume of this waste was placed in a lead-lined 55-gallon drum, along with an absorbent to soak up any moisture, and put into temporary storage at Los Alamos, where it sat for years. But in the summer of 2011, the Las Conchas wildfire threatened the Los Alamos facility, coming within just a few miles of the storage area. This actual fire lit a metaphorical fire under various officials, and wheels were set in motion to get the transuranic waste safely into a long-term storage facility. In other words: ship it down the road to WIPP.
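On that filter leakage: the arithmetic is instructive. The sketch below uses the standard HEPA rating of 99.97% capture for the hardest-to-catch (0.3-micron) particles, which is a well-documented figure; the bypass fraction, however, is a hypothetical placeholder for illustration, not WIPP’s measured leakage:

```python
# A rough sketch of why bypass leakage dominates a release - made-up numbers,
# not measured WIPP data. HEPA filters are rated to capture 99.97% of the
# hardest-to-catch (0.3-micron) particles; the bypass fraction is hypothetical.
filter_efficiency = 0.9997   # standard HEPA rating
bypass_fraction = 0.001      # hypothetical: 0.1% of airflow leaks around the filters

# Of the contaminated air reaching the system, most passes through the filters
# (nearly everything captured) and a sliver leaks around them (nothing captured).
released_via_filters = (1 - bypass_fraction) * (1 - filter_efficiency)
released_via_bypass = bypass_fraction

print(f"released through the filters: {released_via_filters:.4%}")
print(f"released via bypass leakage:  {released_via_bypass:.4%}")
# Even a 0.1% leak path releases ~3x as much as the filters themselves pass.
```

Under these assumed numbers, the unfiltered leak path releases several times more material than the filters themselves let through, which is why sealing the ductwork matters as much as the filter rating.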
Transporting transuranic wastes on the road from one facility to another is quite an ordeal, even when they’re only passing through the New Mexican desert. There are rules preventing the transportation of ignitable, corrosive, or reactive waste, and special casks are required to minimize the risk of a radiological release in the unlikely event of a crash. WIPP also had rules, called the Waste Acceptance Criteria, about how waste must be packaged for long-term disposal, including limits on free liquids. Los Alamos concluded that the barrel didn’t meet the requirements and needed to be repackaged before shipping to WIPP.

But there were concerns about which absorbent to use. Los Alamos had used various absorbent materials in waste barrels over the years to minimize the amount of moisture and free liquid inside, and any time you’re mixing nuclear waste with another material, you have to be sure there won’t be any unexpected reactions. The procedure for repackaging nitrate salts required a superabsorbent polymer, similar to the beads I’ve used in some of my demos, but concerns about reactivity led to meetings and investigations about whether it was the right material for the job. Ultimately, Los Alamos and their contractors concluded that the materials were incompatible and decided to make a switch. In May 2012, Los Alamos published a white paper titled “Amount of Zeolite Required to Meet the Constraints Established by the EMRTC Report RF 10-13: Application of LANL Evaporator Nitrate Salts.” In other words: “How much kitty litter should be added to radioactive waste?” The answer was about 1.2 parts inorganic zeolite clay to 1 part nitrate salt waste, by volume (a worked example of what that ratio implies per drum appears at the end of this section). That guidance was then translated into the actual procedures technicians would use to repackage the waste in gloveboxes at Los Alamos. But something got lost in translation.

As far as investigators could determine, here’s what happened. In a meeting in May 2012, the manager responsible for glovebox operations took personal notes about the switch in materials. Those notes were sent in an email and eventually incorporated into the written procedures: “Ensure an organic absorbent is added to the waste material at a minimum of 1.5 absorbent to 1 part waste ratio.” Did you hear that? The white paper’s requirement to use an inorganic absorbent became “...an organic absorbent” in the procedures. We’ll never know where the confusion came from, but it could have been as simple as mishearing the word in the meeting. Nonetheless, that’s what the procedure became. Contractors at Los Alamos procured a large quantity of Swheat Scoop, an organic, wheat-based cat litter, and started using it to repackage the nitrate salt wastes.

Our barrel, first packaged in 1985, was repackaged in December 2013 with the new kitty litter. It was tested and certified in January 2014, shipped to WIPP later that month, and placed underground. And then it blew up. The unthinkable had happened: the wrong kind of kitty litter had caused a nuclear disaster. While the nitrates are relatively unreactive with the inorganic, mineral-based zeolite kitty litter that should have been used, the organic, carbon-based wheat material could undergo oxidation reactions with the nitrate wastes. It’s also interesting to note that the problem was totally unrelated to the presence of transuranic waste. It was a chemical reaction - not a nuclear reaction - that caused the breach.
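To make the white paper’s guidance concrete, here is a back-of-the-envelope sketch of what the two mixing instructions imply for a single drum. The ratios come from the documents quoted above; the 200-liter working volume is a hypothetical round number for a 55-gallon drum, not a figure from the investigation report:

```python
# Back-of-the-envelope volumes for one drum under the two instructions.
# Only the ratios come from the source documents; the working volume is a
# hypothetical round number chosen for illustration.
DRUM_VOLUME_L = 200.0

def split_volumes(absorbent_to_waste: float, total: float) -> tuple[float, float]:
    """Split a fixed total volume into (waste, absorbent) at the given ratio."""
    waste = total / (1.0 + absorbent_to_waste)
    return waste, total - waste

for label, ratio in [
    ("white paper: 1.2:1 inorganic zeolite ", 1.2),
    ("procedure:   1.5:1 'organic' absorbent", 1.5),
]:
    waste, absorbent = split_volumes(ratio, DRUM_VOLUME_L)
    print(f"{label} -> {waste:.0f} L waste + {absorbent:.0f} L absorbent")
```

Of course, the numbers were never the problem: both ratios yield a plausible-looking recipe. The fatal change was the single word “organic,” which no amount of ratio arithmetic would have flagged.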
Ultimately, the direct cause of the incident was determined to be “an exothermic reaction of incompatible materials in LANL waste drum 68660 that led to thermal runaway, which resulted in over-pressurization of the drum, breach of the drum, and release of a portion of the drum’s contents (combustible gases, waste, and wheat-based absorbent) into the WIPP underground.” Of course, the root cause runs deeper than that, into systemic issues at Los Alamos and how it handled the repackaging of the material. The investigation report identified 12 contributing causes that, while individually insufficient to cause the accident, increased its likelihood or severity. These are written in a way that’s pretty difficult for a non-DOE expert to parse; take a stab at digesting contributing cause number 5: “Failure of Los Alamos Field Office (NA-LA) and the National Transuranic (TRU) Program/Carlsbad Field Office (CBFO) to ensure that the CCP [that is, the Central Characterization Program] and LANS [that is, the contractor, Los Alamos National Security] complied with Resource Conservation and Recovery Act (RCRA) requirements in the WIPP Hazardous Waste Facility Permit (HWFP) and the LANL HWFP, as well as the WIPP Waste Acceptance Criteria (WAC).”

Still, as bad as it all seems, it really could have been a lot worse. In a sense, WIPP performed precisely how you’d want it to in such an event, and it’s a very good thing the barrel was in the underground when it burst. Had the same thing happened at Los Alamos, or on the road between the two facilities, the consequences could have been far more serious. Thankfully, none of the other barrels packaged in the same way experienced a thermal runaway, and they were later collected and sealed in larger containers. Regardless, the consequences of the “cat-astrophe” were severe and very expensive. The cleanup involved shutting down the WIPP facility for several years and entirely replacing the ventilation system. WIPP didn’t formally reopen until January of 2017, nearly three full years after the incident, with the cleanup costing about half a billion dollars.

Today, WIPP remains controversial, not least because of shifting timelines and public communication. Early estimates once projected closure by 2024; now that date is sometime between 2050 and 2085. And events like this only add fuel to the fire. Setting aside broader debates about nuclear weapons themselves, the wastes these weapons generate are dangerous now, and they will remain dangerous for generations. WIPP has even explored ideas for marking the site after closure, to make sure future generations clearly understand the enduring danger. Radioactive hazards persist long after languages and societies may have changed beyond recognition, making it essential, but challenging, to communicate clearly about the risks.

Sometimes it’s easy to forget - amidst all the technical complexity and bureaucratic red tape that surrounds anything nuclear - that it’s just people doing the work. It’s almost unbelievable that we entrust ourselves - squishy, sometimes hapless bags of water, meat, and bones - to navigate the protocols of profound complexity needed to safely take advantage of radioactive materials. I don’t tell this story because I think we should be paralyzed by the idea of using nuclear materials - there are enormous benefits to be had in many areas of science, engineering, and medicine.
But there are enormous costs as well, many of which we might not be aware of if we don’t make it a habit to read obscure government investigation reports. This event is a reminder that the extent of our vigilance has to match the permanence of the hazards we create.