In one sense, the concept of progress is simple, straightforward, and uncontroversial. In another sense, it contains an entire worldview. The most basic meaning of “progress” is simply advancement along a path, or more generally from one state to another that is considered more advanced by some standard. (In this sense, progress can be good, neutral, or even bad—e.g., the progress of a disease.) The question is always: advancement along what path, in what direction, by what standard?

Types of progress

“Scientific progress,” “technological progress,” and “economic progress” are relatively straightforward. They are hard to measure, they are multi-dimensional, and we might argue about specific examples—but in general, scientific progress consists of more knowledge, better theories and explanations, a deeper understanding of the universe; technological progress consists of more inventions that work better (more powerfully or reliably or efficiently) and enable us to do more things; economic progress consists of more production, infrastructure, and wealth.

Together, we can call these “material progress”: improvements in our ability to comprehend and to command the material world. Combined with more intangible advances in the level of social organization—institutions, corporations, bureaucracy—these constitute “progress in capabilities”: that is, our ability to do whatever it is we decide on.

True progress

But this form of progress is not an end in itself. True progress is advancement toward the good, toward ultimate values—call this “ultimate progress,” or “progress in outcomes.” Defining this depends on axiology; that is, on our theory of value. To a humanist, ultimate progress means progress in human well-being: “human progress.” Not everyone agrees on what constitutes well-being, but it certainly includes health, happiness, and life satisfaction.
In my opinion, human well-being is not purely material, and not purely hedonic: it also includes “spiritual” values such as knowledge, beauty, love, adventure, and purpose. The humanist also sees other kinds of progress contributing to human well-being: “moral progress,” such as the decline of violence, the elimination of slavery, and the spread of equal rights for all races and sexes; and more broadly “social progress,” such as the evolution from monarchy to representative democracy, or the spread of education and especially literacy.

Others have different standards. Biologist David Graber called himself a “biocentrist,” by which he meant:

… those of us who value wildness for its own sake, not for what value it confers upon mankind. … We are not interested in the utility of a particular species, or free-flowing river, or ecosystem, to mankind. They have intrinsic value, more value—to me—than another human body, or a billion of them. … Human happiness, and certainly human fecundity, are not as important as a wild and healthy planet.

By this standard, virtually all human activity is antithetical to progress: Graber called humans “a cancer… a plague upon ourselves and upon the Earth.” Or for another example, one Lutheran stated that his “primary measure of the goodness of a society is the population share which is a baptized Christian and regularly attending church.”

The idea of progress isn’t completely incompatible with some flavors of environmentalism or of religion (and there are both Christians and environmentalists in the progress movement!), but these examples show that it is possible to focus on a non-human standard, such as God or Nature, to the point where human health and happiness become irrelevant or even diametrically opposed to “progress.”

Unqualified progress

What are we talking about when we refer to “progress” unqualified, as in “the progress of mankind” or “the roots of progress”?
“Progress” in this sense is the concept of material progress, social progress, and human progress as a unified whole. It is based on the premise that progress in capabilities really does, on the whole, lead to progress in outcomes. This doesn’t mean that all aspects of progress move in lockstep—they don’t. It means that all aspects of progress support each other and over the long term depend on each other; they are intertwined and ultimately inseparable.

Consider, for instance, how Patrick Collison and Tyler Cowen defined the term in their article calling for “progress studies”:

By “progress,” we mean the combination of economic, technological, scientific, cultural, and organizational advancement that has transformed our lives and raised standards of living over the past couple of centuries.

David Deutsch, in The Beginning of Infinity, is even more explicit, saying that progress includes “improvements not only in scientific understanding, but also in technology, political institutions, moral values, art, and every aspect of human welfare.”

Skepticism of this idea of progress is sometimes expressed as: “progress towards what?” The undertone of this question is: “in your focus on material progress, you have lost sight of social and/or human progress.” On the premise that different forms of progress are diverging and even coming into opposition, this is an urgent challenge; on the premise that progress is a unified whole, it is a valuable intellectual question but not a major dilemma.

Historical progress

“Progress” is also an interpretation of history according to which all these forms of progress have, by and large, been happening. In this sense, the study of “progress” is the intersection of axiology and history: given a standard of value, are things getting better? In Steven Pinker’s book Enlightenment Now: The Case for Reason, Science, Humanism, and Progress, the bulk of the chapters are devoted to documenting this history.
Many of the charts in that book were sourced from Our World in Data, which also emphasizes the historical reality of progress.

So-called “progress”

Not everyone agrees with this concept of progress. It depends on an Enlightenment worldview that includes confidence in reason and science, and a humanist morality.

One argument against the idea of progress claims that material progress has not actually led to human well-being. Perhaps the benefits of progress are outweighed by the costs and risks: health hazards, technological unemployment, environmental damage, existential threats, etc. Some downplay or deny the benefits themselves, arguing that material progress doesn’t increase happiness (owing to the hedonic treadmill), that it doesn’t satisfy our spiritual values, or that it degrades our moral character. Rousseau famously asserted that “the progress of the sciences and the arts has added nothing to our true happiness” and that “our souls have become corrupted to the extent that our sciences and our arts have advanced towards perfection.” Others, as mentioned above, argue for a different standard of value altogether, such as nature or God. (Often these arguments contain some equivocation between whether these things are good in themselves, or whether we should value them because they are good for human well-being over the long term.)

When people start to conclude that progress is not in fact good, they talk about this as no longer “believing in progress.” Historian Carl Becker, writing in the shadow of World War I, said that “the fact of progress is disputed and the doctrine discredited,” and asked: “May we still, in whatever different fashion, believe in the progress of mankind?” In 1991, Christopher Lasch asked:

How does it happen that serious people continue to believe in progress, in the face of massive evidence that might have been expected to refute the idea of progress once and for all?
Those who dispute the idea of progress often avoid the term, or quarantine it in scare quotes: so-called “progress.” When Jeremy Caradonna questioned the concept in The Atlantic, the headline was: “Is ‘Progress’ Good for Humanity?” One of the first court rulings on environmental protection law, in 1971, said that such law represented “the commitment of the Government to control, at long last, the destructive engine of material ‘progress.’” Or consider this from Guns, Germs, and Steel:

… I do not assume that industrialized states are “better” than hunter-gatherer tribes, or that the abandonment of the hunter-gatherer lifestyle for iron-based statehood represents “progress,” or that it has led to an increase in human happiness.

The idea of progress is inherently the idea that progress, overall, is good. If “progress” is destructive, if it does not in fact improve human well-being, then it hardly deserves the name. Contrast this with the concept of growth. “Growth,” writ large, refers to an increase in the population, the economy, and the scale of human organization and activity. It is not inherently good: everyone agrees that it is happening, but some are against it; some even define themselves by being against it (the “degrowth” movement). No one is against progress, only against “progress”: that is, people either believe in progress or deny it. The most important question in the philosophy of progress, then, is whether the idea of progress is valid—whether “progress” is real.

“Progress” in the 19th century

Before the World Wars, there was an idea of progress that went even beyond what I have defined above, and which contained at least two major errors. One error was the idea that progress is inevitable.
Becker, in the essay quoted above, said that according to “the doctrine of progress,”

the Idea or the Dialectic or Natural Law, functioning through the conscious purposes or the unconscious activities of men, could be counted on to safeguard mankind against future hazards. … At the present moment the world seems indeed out of joint, and it is difficult to believe with any conviction that a power not ourselves … will ever set it right. (Emphasis added.)

The other was the idea that moral progress was so closely connected to material progress that they would always move together. Condorcet believed that prosperity would “naturally dispose men to humanity, to benevolence and to justice,” and that “nature has connected, by a chain which cannot be broken, truth, happiness, and virtue.” The 20th century, with the outbreak of world war and the rise of totalitarianism, proved these ideas disastrously wrong.

“Progress” in the 21st century and beyond

To move forward, we need a wiser, more mature idea of progress.

Progress is not automatic or inevitable. It depends on choice and effort. It is up to us.

Progress is not automatically good. It must be steered. Progress always creates new problems, and they don’t get solved automatically. Solving them requires active focus and effort, and this is a part of progress, too.

Material progress does not automatically lead to moral progress. Technology within an evil social system can do more harm than good. We must commit to improving morality and society along with science, technology, and industry.

With these lessons well learned, we can rescue the idea of progress and carry it forward into the 21st century and beyond.
What is the ideal size of the human population? One common answer is “much smaller.” Paul Ehrlich, co-author of The Population Bomb (1968), has as recently as 2018 promoted the idea that “the world’s optimum population is less than two billion people,” a reduction of the current population by about 75%. And Ehrlich is a piker compared to Jane Goodall, who said that many of our problems would go away “if there was the size of population that there was 500 years ago”—that is, around 500 million people, a reduction of over 90%. This is a static ideal of a “sustainable” population.

Regular readers of this blog can cite many objections to this view. Resources are not static. Historically, as we run out of a resource (whale oil, elephant tusks, seabird guano), we transition to a new technology based on a more abundant resource—and there are basically no major examples of catastrophic resource shortages in the industrial age. The carrying capacity of the planet is not fixed, but a function of technology; and side effects such as pollution or climate change are just more problems to be solved. As long as we can keep coming up with new ideas, growth can continue.

But those are only reasons why a larger population is not a problem. Is there a positive reason to want a larger population? I’m going to argue yes—that the ideal human population is not “much smaller,” but “ever larger.”

Selfish reasons to want more humans

Let me get one thing out of the way up front. One argument for a larger population is based on utilitarianism, specifically the version of it that says that what is good is the sum total of happiness across all humans. If each additional life adds to the cosmic scoreboard of goodness, then it’s obviously better to have more people (unless they are so miserable that their lives are literally not worth living). I’m not going to argue from this premise, in part because I don’t need to and more importantly because I don’t buy it myself.
(Among other things, it leads to paradoxes such as the idea that a population of thriving, extremely happy people is not as good as a sufficiently-larger population of people who are just barely happy.) Instead, I’m going to argue that a larger population is better for every individual—that there are selfish reasons to want more humans. First I’ll give some examples of how this is true, and then I’ll draw out some of the deeper reasons for it.

More geniuses

First, more people means more outliers—more super-intelligent, super-creative, or super-talented people, to produce great art, architecture, music, philosophy, science, and inventions. If genius is defined as one-in-a-million level intelligence, then every billion people means another thousand geniuses—to work on all of the problems and opportunities of humanity, to the benefit of all.

More progress

A larger population means faster scientific, technical, and economic progress, for several reasons:

Total investment. More people means more total R&D: more researchers, and more surplus wealth to invest in it.

Specialization. In the economy generally, the division of labor increases productivity, as each worker can specialize and become expert at their craft (“Smithian growth”). In R&D, each researcher can specialize in their field.

Larger markets support more R&D investment, which lets companies pick off higher-hanging fruit. I’ve given the example of the threshing machine: it was difficult enough to manufacture that it didn’t pay for a local artisan to make them only for their town, but it was profitable to serve a regional market. Alex Tabarrok gives the example of the market for cancer drugs expanding as large countries such as India and China become wealthier. Very high production-value entertainment, such as movies, TV, and games, is possible only because it has mass audiences.

More ambitious projects need a certain critical mass of resources behind them.
Ancient Egyptian civilization built a large irrigation system to make the best use of the Nile floodwaters for agriculture, a feat that would not have been possible for a small tribe or chiefdom. The Apollo Program, at its peak in the 1960s, took over 4% of the US federal budget, but 4% would not have been enough if the population and the economy were half the size. If someday humanity takes on a grand project such as a space elevator or a Dyson sphere, it will require an enormous team and an enormous wealth surplus to fund it.

In fact, these factors may represent not only opportunities but requirements for progress. There is evidence that simply to maintain a constant rate of exponential economic growth requires exponentially growing investment in R&D. This investment is partly financial capital, but also partly human capital—that is, we need an exponentially growing base of researchers. One way to understand this is that if each researcher can push forward a constant “surface area” of the frontier, then as the frontier expands, a larger number of researchers is needed to keep pushing all of it forward. Two hundred years ago, a small number of scientists were enough to investigate electrical and magnetic phenomena; today, millions of scientists and engineers are productively employed working out all of the details and implications of those phenomena, both in the lab and in the electrical, electronics, and computer hardware and software industries.

But it’s not even clear that each researcher can push forward a constant surface area of the frontier. As that frontier moves further out, the “burden of knowledge” grows: each researcher now has to study and learn more in order to even get to the frontier. Doing so might force them to specialize even further.
Newton could make major contributions to fields as diverse as gravitation and optics, because the very basics of those fields were still being figured out; today, a researcher might devote their whole career to a sub-sub-discipline such as nuclear astrophysics. But in the long run, an exponentially growing base of researchers is impossible without an exponentially growing population. In fact, in some models of economic growth, the long-run growth rate in per-capita GDP is directly proportional to the growth rate of the population.

More options

Even setting aside growth and progress—looking at a static snapshot of a society—a world with more people is a world with more choices, among greater variety:

Better matching for aesthetics, style, and taste. A bigger society has more cuisines, more architectural styles, more types of fashion, more sub-genres of entertainment. This also improves as the world gets more connected: for instance, the wide variety of ethnic restaurants in every major city is a recent phenomenon; it was only decades ago that pizza, to Americans, was an unfamiliar foreign cuisine.

Better matching to careers. A bigger economy has more options for what to do with your life. In a hunter-gatherer society, you are lucky if you get to decide whether to be a hunter or a gatherer. In an agricultural economy, you’re probably going to be a farmer, or maybe some sort of artisan. Today there’s a much wider set of choices, from pilot to spreadsheet jockey to lab technician.

Better matching to other people. A bigger world gives you a greater chance to find the perfect partner for you: the best co-founder for your business, the best lyricist for your songs, the best partner in marriage.

More niche communities. Whatever your quirky interest, worldview, or aesthetic—the more people you can be in touch with, the more likely you are to find others like you. Even if you’re one in a million, in a city of ten million people, there are enough of you for a small club.
In a world of eight billion, there are enough of you for a thriving subreddit.

More niche markets. Similarly, in a larger, more connected economy, there are more people to economically support your quirky interests. Your favorite Etsy or Patreon creator can find the “one thousand true fans” they need to make a living.

Deeper patterns

When I look at the above, here are some of the underlying reasons:

The existence of non-rival goods. Rival goods need to be divided up; more people just create more competition for them. But non-rival goods can be shared by all. A larger population and economy, all else being equal, will produce more non-rival goods, which benefits everyone.

Economies of scale. In particular, often total costs are a combination of fixed and variable costs. The more output, the more the fixed costs can be amortized, lowering average cost.

Network effects and Metcalfe’s law. Value in a network is generated not by nodes but by connections, and the more nodes there are in total, the more connections are possible per node. Metcalfe’s law quantifies this: the number of possible connections in a network is proportional to the square of the number of nodes.

All of these create agglomeration effects: bigger societies are better for everyone.

A dynamic world

I assume that when Ehrlich and Goodall advocate for much smaller populations, they aren’t literally calling for genocide or hoping for a global catastrophe (although Ehrlich is happy with coercive fertility control programs, and other anti-humanists have expressed hope for “the right virus to come along”). Even so, the world they advocate is a greatly impoverished and stagnant one: a world with fewer discoveries, fewer inventions, fewer works of creative genius, fewer cures for fewer diseases, fewer choices, fewer soulmates. A world with a large and growing population is a dynamic world that can create and sustain progress.
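The genius arithmetic and the Metcalfe's-law point above are easy to make concrete. A quick sketch (the populations and network sizes are just round illustrative numbers):

```python
def expected_geniuses(population, rate=1e-6):
    """Expected number of one-in-a-million outliers in a population."""
    return population * rate

def possible_connections(n):
    """Metcalfe's law: pairwise connections grow as n*(n-1)/2, i.e. ~n^2."""
    return n * (n - 1) // 2

# Every additional billion people means roughly another thousand geniuses.
print(expected_geniuses(1_000_000_000))   # 1000.0

# Doubling the nodes in a network roughly quadruples the possible connections.
print(possible_connections(1_000))        # 499500
print(possible_connections(2_000))        # 1999000
```
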
For a different angle on the same thesis, see “Forget About Overpopulation, Soon There Will Be Too Few Humans,” by Roots of Progress fellow Maarten Boudry.
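The growth-theory claims above (that maintaining exponential growth requires an ever-growing base of researchers, and that long-run growth tracks population growth) come out clearly in a toy, Jones-style idea-production model. This is a sketch with made-up parameters, not any specific model from the literature: new ideas arrive at a rate proportional to the number of researchers times A**phi with phi < 1, so ideas get harder to find as the idea stock A grows.

```python
def idea_growth_rates(pop_growth, steps=500, A=1.0, L=100.0, delta=0.01, phi=0.5):
    """Simulate dA = delta * L * A**phi and return the growth rate of A each step."""
    rates = []
    for _ in range(steps):
        dA = delta * L * A ** phi
        rates.append(dA / A)
        A += dA
        L *= 1 + pop_growth   # researcher population grows (or doesn't)
    return rates

flat = idea_growth_rates(pop_growth=0.0)
growing = idea_growth_rates(pop_growth=0.02)

# With a constant research population, growth decays toward zero;
# with 2% population growth, it settles near pop_growth / (1 - phi) = 4%.
print(f"constant population: {flat[-1]:.4f}")
print(f"growing population:  {growing[-1]:.4f}")
```

In the steady state of models like this, the growth rate of the idea stock is proportional to the population growth rate, which is the result the essay cites.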
When Galileo wanted to study the heavens through his telescope, he got money from those legendary patrons of the Renaissance, the Medici. To win their favor, when he discovered the moons of Jupiter, he named them the Medicean Stars. Other scientists and inventors offered flashy gifts, such as Cornelis Drebbel’s perpetuum mobile (a sort of astronomical clock) given to King James, who made Drebbel court engineer in return. The other way to do research in those days was to be independently wealthy: the Victorian model of the gentleman scientist.

Eventually we decided that requiring researchers to seek wealthy patrons or have independent means was not the best way to do science. Today, researchers, in their role as “principal investigators” (PIs), apply to science funders for grants. In the US, the NIH spends nearly $48B annually, and the NSF over $11B, mainly to give such grants. Compared to the Renaissance, it is a rational, objective, democratic system.

However, I have come to believe that this principal investigator model is deeply broken and needs to be replaced. That was the thought at the top of my mind coming out of a working group on “Accelerating Science” hosted by the Santa Fe Institute a few months ago. (The thoughts in this essay were inspired by many of the participants, but I take responsibility for any opinions expressed here. My thinking on this was also influenced by a talk given by James Phillips at a previous metascience conference. My own talk at the workshop was written up here earlier.)

What should we do instead of the PI model? Funding should go in a single block to a relatively large research organization of, say, hundreds of scientists. This is how some of the most effective, transformative labs in the world have been organized, from Bell Labs to the MRC Laboratory of Molecular Biology. It has been referred to as the “block funding” model.
Here’s why I think this model works:

Specialization

A principal investigator has to play multiple roles. They have to do science (researcher), recruit and manage grad students or research assistants (manager), maintain a lab budget (administrator), and write grants (fundraiser). These are different roles, and not everyone has the skill or inclination to do them all. The university model adds teaching, a fifth role.

The block organization allows for specialization: researchers can focus on research, managers can manage, and one leader can fundraise for the whole org. This allows each person to do what they are best at and enjoy, and it frees researchers from spending 30–50% of their time writing grants, as is typical for PIs.

I suspect it also creates more of an opportunity for leadership in research. Research leadership involves having a vision for an area to explore that will be highly fruitful—semiconductors, molecular biology, etc.—and then recruiting talent and resources to the cause. This seems more effective when done at the block level.

Side note: the distinction I’m talking about here, between block funding and PI funding, doesn’t say anything about where the funding comes from or how those decisions are made. But today, researchers are often asked to serve on committees that evaluate grants. Making funding decisions is yet another role we add to researchers, and one that also deserves to be its own specialty (especially since having researchers evaluate their own competitors sets up an inherent conflict of interest).

Research freedom and time horizons

There’s nothing inherent to the PI grant model that dictates the size of the grant, the scope of activities it covers, the length of time it is for, or the degree of freedom it allows the researcher. But in practice, PI funding has evolved toward small grants for incremental work, with little freedom for the researcher to change their plans or strategy.
I suspect the block funding model naturally lends itself to larger grants for longer time periods that are more at the vision level. When you’re funding a whole department, you’re funding a mission and placing trust in the leadership of the organization. Also, breakthroughs are unpredictable, but the more people you have working on things, the more regularly they will happen. A lab can justify itself more easily with regular achievements. In this way one person’s accomplishment provides cover to those who are still toiling away.

Who evaluates researchers

In the PI model, grant applications are evaluated by funding agencies: in effect, each researcher is evaluated by the external world. In the block model, a researcher is evaluated by their manager and their peers. James Phillips illustrates this with a diagram. A manager who knows the researcher well, who has been following their work closely, and who talks to them about it regularly, can simply make better judgments about who is doing good work and whose programs have potential. (And again, developing good judgment about researchers and their potential is a specialized role—see point 1.)

Further, when a researcher is evaluated impersonally by an external agency, they need to write up their work formally, which adds overhead to the process. They need to explain and justify their plans, which leads to more conservative proposals. They need to show outcomes regularly, which leads to more incremental work. And funding will disproportionately flow to people who are good at fundraising (which, again, deserves to be a specialized role).

To get scientific breakthroughs, we want to allow talented, dedicated people to pursue hunches for long periods of time. This means we need to trust the process, long before we see the outcome. Several participants in the workshop echoed this theme of trust. Trust like that is much stronger when based on a working relationship, rather than simply on a grant proposal.
If the block model is a superior alternative, how do we move towards it? I don’t have a blueprint. I doubt that existing labs will transform themselves into this model. But funders could signal their interest in funding labs like this, and new labs could be created or proposed on this model and seek such funding. I think the first step is spreading this idea.
In December, I went to the Foresight Institute’s Vision Weekend 2023 in San Francisco. I had a lot of fun talking to a bunch of weird and ambitious geeks about the glorious abundant technological future. Here are a few things I learned about (with the caveat that this is mostly based on informal conversations with only basic fact-checking, not deep research):

Cellular reprogramming

Aging doesn’t only happen to your body: it happens at the level of individual cells. Over time, cells accumulate waste products and undergo epigenetic changes that are markers of aging. But wait—when a baby is born, it has young cells, even though it grew out of cells that were originally from its older parents. That is, the egg and sperm cells might be 20, 30, or 40 years old, but somehow when they turn into a baby, they get reset to biological age zero. This process is called “reprogramming,” and it happens soon after fertilization.

It turns out that cell reprogramming can be induced by certain proteins, known as the Yamanaka factors, after their discoverer (who won a Nobel for this in 2012). Could we use those proteins to reprogram our own cells, making them youthful again? Maybe. There is a catch: the Yamanaka factors not only clear waste out of cells, they also reset them to become stem cells. You do not want to turn every cell in your body into a stem cell. You don’t even want to turn a small number of them into stem cells: it can give you cancer (which kind of defeats the purpose of a longevity technology).

But there is good news: when you expose cells to the Yamanaka factors, the waste cleanup happens first, and the stem cell transformation happens later. If we can carefully time the exposure, maybe we can get the desired effect without the damaging side effects. This is tricky: different tissues respond on different timelines, so you can’t apply the treatment uniformly over the body. There are a lot of details to be worked out here.
But it’s an intriguing line of research for longevity, and it’s one of the avenues being explored at Retro Bio, among other places. Here’s a Derek Lowe article with more info and references.

The BFG orbital launch system

If we’re ever going to have a space economy, it has to be a lot cheaper to launch things into space. Space Shuttle launches cost over $65,000/kg, and even the Falcon Heavy costs $1500/kg. Compare to shipping costs on Earth, which are only a few dollars per kilogram.

A big part of the high launch cost in traditional systems is the rocket, which is discarded with each launch. SpaceX is bringing costs down by making reusable rockets that land gently rather than crashing into the ocean, and by making very big rockets for economies of scale (Elon Musk has speculated that Starship could bring costs as low as $10/kg, although this is a ways off, since right now fuel costs alone are close to that amount).

But what if we didn’t need a rocket at all? Rockets are pretty much our only option for propulsion in space, but what if we could give most of the impulse to the payload on Earth? J. Storrs Hall has proposed the “space pier,” a runway 300 km long mounted atop towers 100 km tall. The payload takes an elevator 100 km up to the top of the tower, thus exiting the atmosphere (though only a small fraction of Earth’s gravity well). Then a linear induction motor accelerates it into orbit along the 300 km track. You could do this with a mere 10 Gs of acceleration, which is survivable by human passengers. Think of it like a Big Friendly Giant (BFG) picking up your payload and then throwing it into orbit. Hall estimates that this could bring launch costs down to $10/kg, if the pier could be built for a mere $10 billion.

The only tiny little catch with the space pier is that there is no technology in existence that could build it, and no construction material that a 100 km tower could be made of. Hall suggests that with “mature nanotechnology” we could build the towers out of diamond. OK.
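Setting aside whether the pier could be built, its kinematics are easy to check: constant acceleration a over a track of length d gives an exit velocity v = √(2ad). (This check is mine, not Hall's.)

```python
import math

g = 9.81            # m/s^2
a = 10 * g          # the "survivable" 10 G acceleration
track = 300_000.0   # the 300 km track, in meters

v = math.sqrt(2 * a * track)   # exit velocity
t = v / a                      # time spent under acceleration

print(f"exit velocity: {v:,.0f} m/s")  # ~7,700 m/s -- roughly low-Earth-orbit speed
print(f"time on track: {t:.0f} s")     # a bit over a minute
```

So a 300 km track at 10 G delivers almost exactly orbital velocity, which is presumably why Hall chose those numbers.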
So, probably not going to happen this decade. What can we do now, with today’s technology? Let’s drop the idea of using this for human passengers and just consider relatively durable freight. Now we can use much higher G-forces, which means we don’t need anything close to 300 km of distance to accelerate over. And does it really have to be 100 km tall? Yes, it’s nice to start with an altitude advantage, and with no atmosphere, but both of those problems can be overcome with sufficient initial velocity. At this point we’re basically just talking about an enormous cannon (a very different kind of BFG).

This is what Longshot Space is doing. Build a big long tube in the desert. Put the payload in it, seal the end with a thin membrane, and pump the air out to create a vacuum. Then rapidly release some compressed gases behind the payload, which bursts through the membrane and exits the tube at Mach 25.

One challenge with this is that a gas can only expand as fast as the speed of sound in that gas. In air this is, of course, a lot less than Mach 25. One thing that helps is to use a lighter gas, in which the speed of sound is higher, such as helium or (for the very brave) hydrogen. Another part of the solution is to give the payload a long, wedge-shaped tail. The expanding gases push sideways on this tail, which through the magic of simple machines translates into a much faster push forwards. There’s a brief discussion and illustration of the pneumatics in this video.

Now, if you are trying to envision “big long tube in the desert,” you might be wondering: is the tube angled upwards or something? No. It is basically lying flat on the ground. It is expensive to build a long straight thing that points up: you have to dig a deep hole and/or build a tall tower. What about putting it on the side of a mountain, which naturally points up? Building things on mountains is also hard; in addition, mountains are special and nobody wants to give you one.
It’s much easier to haul lots of materials into the middle of the desert; also there is lots of room out there and the real estate is cheap. Next you might be wondering: if the tube is horizontal, isn’t it pointed in the wrong direction to get to space? I thought space was up? Well, yes. There are a few things going on here. One is that if you travel far enough in a straight line, the Earth will curve away from you and you will eventually find yourself in space. Another is that if you shape the projectile such that its center of pressure is in the right place relative to its center of mass, then it will naturally angle upward when it hits the atmosphere. Lastly, if you are trying to get into orbit, most of the velocity you need is actually horizontal anyway. In fact, if and when you reach a circular orbit, you will find that all of your velocity is horizontal. This means that there is no way to get into orbit purely ballistically, with a single impulse imparted from Earth. Any satellite, for instance, launched via this system will need its own rocket propulsion in order to circularize the orbit once it reaches altitude (even leaving aside continual orbital adjustments during its service lifetime). But we’re now talking about a relatively small rocket with a small amount of fuel, not the big multi-stage things that you need to blast off from the surface. And presumably someday we will be delivering food, fuel, tools, etc. to space in packages that just need to be caught by whoever is receiving them. Longshot estimates that this system, like Starship or the space pier, could get launch costs down to about $10/kg. 
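To see why a circularization burn is unavoidable, note what a circular orbit requires: gravity exactly supplies the centripetal force, and all the velocity is horizontal. A minimal sketch with standard values (the 200 km altitude is just an illustrative choice):

```python
import math

GM = 3.986e14      # m^3/s^2, Earth's gravitational parameter
R_earth = 6.371e6  # m, mean Earth radius
alt = 200_000      # m, an illustrative low-Earth-orbit altitude

# In a circular orbit, gravity exactly supplies the centripetal force:
# GM*m/r^2 = m*v^2/r, so v = sqrt(GM/r), and all of it is horizontal.
v_circ = math.sqrt(GM / (R_earth + alt))
print(f"circular orbit velocity at {alt // 1000} km: {v_circ:,.0f} m/s")  # ~7,800 m/s
```

The ground launch supplies most of those ~7,800 m/s; the small onboard rocket only has to make up the difference once the payload reaches altitude.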
This might be cheap enough that launch prices could be zero, subsidized by contracts to buy fuel or maintenance, in a space-age version of “give away the razor and sell the blades.” Not only would this business model help grow the space economy, it would also prove wrong all the economists who have been telling us for decades that “there’s no such thing as a free launch.”

Mars could be terraformed in our lifetimes

Terraforming a planet sounds like a geological process, and so I had sort of thought that it would require geological timescales, or if it could really be accelerated, at least a matter of centuries or so. You drop off some algae or something on a rocky planet, and then your distant descendants return one day to find a verdant paradise. So I was surprised to learn that major changes on Mars could, in principle, be made on a schedule much shorter than a single human lifespan. Let’s back up. Mars is a real fixer-upper of a planet. Its temperature varies widely, averaging about −60° C; its atmosphere is thin and mostly carbon dioxide. This severely depresses its real estate values. Suppose we wanted to start by significantly warming the planet. How do you do that? Let’s assume Mars’s orbit cannot be changed—I mean, we’re going to get in enough trouble with the Sierra Club as it is—so the total flux of solar energy reaching the planet is constant. What we can do is to trap a bit more of that energy on the planet, and prevent it from radiating out into space. In other words, we need to enhance Mars’s greenhouse effect. And the way to do that is to give it a greenhouse gas. Wait, we just said that Mars’s atmosphere is mostly CO2, which is a notorious greenhouse gas, so why isn’t Mars warm already? It’s just not enough: the atmosphere is very thin (less than 1% of the pressure of Earth’s atmosphere), and what CO2 there is provides only about 5° of warming. We’re going to need to add more GHG. What could it be?
Well, for starters, given the volumes required, it should be composed of elements that already exist on Mars. With the ingredients we have, what can we make? Could we get more CO2 in the atmosphere? There is more CO2 on/under the surface, in frozen form, but even that is not enough for the task. We need something else. What about CFCs? As a greenhouse gas, they are about four orders of magnitude more efficient than CO2, so we’d need a lot less of them. However, they require fluorine, which is very rare in the Martian soil, and we’d still need about 100 gigatons of it. This is not encouraging. One thing Mars does have a good amount of is metal, such as iron, aluminum, and magnesium. Now metals, you might be thinking, are not generally known as greenhouse gases. But small particles of conductive metal, with the right size and shape, can act as one. A recent paper found through simulation that “nanorods” about 9 microns long, half the wavelength of the infrared thermal radiation given off by a planet, would scatter that radiation back to the surface (Ansari, Kite, Ramirez, Steele, and Mohseni, “Warming Mars with artificial aerosol appears to be feasible”—no preprint online, but this poster seems to represent earlier work). Suppose we aim to warm the planet by about 30º C, enough to melt surface water in the polar regions during the summer, and bring Mars much closer to Earth temperatures. AKRSM’s simulation says that we would need to put about 400 mg/m3 of nanorods into the Martian sky, an efficiency (in warming per unit mass) more than 2000x greater than previously proposed methods. The particles would settle out of the atmosphere slowly, at less than 1/100 the rate of natural Mars dust, so only about 30 liters/sec of them would need to be released continuously. If we used iron, this would require mining a million cubic meters of iron per year—quite a lot, but less than 1% of what we do on Earth. 
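The mining requirement follows directly from the release rate quoted from the paper; the only arithmetic is a unit conversion:

```python
release_rate = 30                      # liters of nanorods released per second, from the paper
seconds_per_year = 365.25 * 24 * 3600  # ~3.16e7 seconds

m3_per_year = release_rate / 1000 * seconds_per_year
print(f"{m3_per_year:,.0f} m^3 of nanorods per year")  # ~950,000 m^3, roughly the "million cubic meters" above
```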
And the particles, like other Martian dust, would be lifted high in the atmosphere by updrafts, so they could be conveniently released from close to the surface. Wouldn’t metal nanoparticles be potentially hazardous to breathe? Yes, but this is already a problem with Mars’s naturally dusty atmosphere, and the nanorods wouldn’t make it significantly worse. (However, this will have to be solved somehow if we’re going to make Mars habitable.) Kite told me that if we started now, given the capabilities of Starship, we could achieve the warming in a mere twenty years. Most of that time is just getting equipment to Mars, mining the iron, manufacturing the nanorods, and then waiting about a year for Martian winds to mix them throughout the atmosphere. Since Mars has no oceans to provide thermal inertia, the actual warming after that point takes only about a month. Kite is interested in talking to people about the design of the nanorod factory. He wants to get a size/weight/power estimate and an outline design for the factory, to make an initial estimate of how many Starship landings would be needed. Contact him at edwin.kite@gmail.com. I have not yet gotten Kite and Longshot together to figure out if we can shoot the equipment directly to Mars using one really enormous space cannon. Thanks to Reason, Mike Grace, and Edwin Kite for conversations and for commenting on a draft of this essay. Any errors or omissions above are entirely my own.
[Note that this article is a transcript of the video embedded above.] Even though it’s a favorite vacation destination, the beach is surprisingly dangerous. Consider the lifeguard: There aren’t that many recreational activities in our lives that have explicit staff whose only job is to keep an eye on us, make sure we stay safe, and rescue us if we get into trouble. There are just a lot of hazards on the beach. Heavy waves, rip currents, heat stress, sunburn, jellyfish stings, sharks, and even algae can threaten the safety of beachgoers. But there’s a whole other hazard, this one usually self-inflicted, that rarely makes the list of warnings, even though it takes, on average, 2-3 lives per year just in the United States. If you know me, you know I would never discourage the act of playing with soil and sand. It’s basically what I was put on this earth to do. But I do have one exception. Because just about every year, the news reports that someone was buried when a hole they dug collapsed on top of them. There’s no central database of sandhole collapse incidents, but from the numbers we do have, about twice as many people die this way as from shark attacks in the US. It might seem like common sense not to dig a big, unsupported hole at the beach and then go inside it, but sand has some really interesting geotechnical properties that can provide a false sense of security. So, let’s use some engineering and garage demonstrations to explain why. I’m Grady and this is Practical Engineering. In some ways, geotechnical engineering might as well be called slope engineering, because slopes are such a huge part of what geotechnical engineers do. So many aspects of our built environment rely on the stability of sloped earth. Many dams are built from soil or rock fill using embankments. Roads, highways, and bridges rely on embankments to ascend or descend smoothly. Excavations for foundations, tunnels, and other structures have to be stable for the people working inside.
Mines carefully monitor slopes to make sure their workers are safe. Even protecting against natural hazards like landslides requires a strong understanding of geotechnical engineering. Because of all that, the science of slope stability is really deeply understood. There’s a well-developed professional consensus around the science of soil, how it behaves, and how to design around its limitations as a construction material. And I think a peek into that world will really help us understand this hazard of digging holes on the beach. Like many parts of engineering, analyzing the stability of a slope has two basic parts: the strengths and the loads. The job of a geotechnical engineer is to compare the two. The load, in this case, is kind of obvious: it’s just the weight of the soil itself. We can complicate that a bit by adding loads at the top of a slope, called surcharges, and no doubt surcharge loads have contributed to at least a few of these dangerous collapses from people standing at the edge of a hole. But for now, let’s keep it simple with just the soil’s own weight. On a flat surface, soils are generally stable. But when you introduce a slope, the weight of the soil above can create a shear failure. These failures often happen along a circular arc, because an arc minimizes the resisting forces in the soil while maximizing the driving forces. We can manually solve for the shear forces at any point in a soil mass, but that would be a fairly tedious engineering exercise, so most slope stability analyses use software. One of the simplest methods is just to let the software draw hundreds of circular arcs that represent failure planes, compute the stresses along each plane based on the weight of the soil, and then figure out if the strength of the soil is enough to withstand the stress. But what does it really mean for a soil to have strength? 
If you can imagine a sample of soil floating in space, and you apply a shear stress, those particles are going to slide apart from each other in the direction of the stress. The amount of force required to do it is usually expressed as an angle, and I can show you why. You may have done this simple experiment in high school physics where you drag a block along a flat surface and measure the force required to overcome the friction. If you add weight, you increase the force between the surfaces, called the normal force, which creates additional friction. The same is true with soils. The harder you press the particles of soil together, the better they are at resisting a shear force. In a simplified force diagram, we can draw the normal force and the friction, or shear strength, that results. And the angle that the hypotenuse makes with the normal force is what we call the friction angle. Under certain conditions, it’s equal to the angle of repose, the steepest angle at which a soil will naturally stand. If I let sand pour out of this funnel onto the table, you can see that even as the pile gets higher, the angle of the slope of the sides never really changes. And this illustrates the complexity of slope stability really nicely. Gravity is what holds the particles together, creating friction, but it’s also what pulls them apart. And the angle of repose is kind of a line between gravity’s stabilizing and destabilizing effects on the soil. But things get more complicated when you add water to the mix. Soil particles, like all things that take up space, have buoyancy. Just as lifting a weight under water is easier, soil particles seem to weigh less when they’re saturated, so there is less friction between them. I can demonstrate this pretty easily by just moving my angle-of-repose setup to a water tank. It’s a subtle difference, but the angle of repose has gone down underwater.
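Both demos reduce to the same relation: shear strength is the normal force times a friction coefficient, and the friction angle is just that coefficient expressed as an angle. A minimal sketch; all numbers here are illustrative assumptions, not measurements from the demos:

```python
import math

mu = 0.6  # illustrative grain-on-grain friction coefficient (assumed)

# tan(friction angle) = shear strength / normal force = mu
phi = math.degrees(math.atan(mu))
print(f"friction angle: {phi:.0f} degrees")  # ~31 degrees

# Shear strength at a given depth, dry vs. submerged, via effective stress:
# only the stress actually pressing grains together contributes friction.
rho_sand = 1600   # kg/m^3, bulk density of dry sand (typical, assumed)
rho_water = 1000  # kg/m^3
g = 9.81          # m/s^2
depth = 0.5       # m, illustrative depth below the surface

tau_dry = mu * rho_sand * g * depth                      # ~4,700 Pa
tau_submerged = mu * (rho_sand - rho_water) * g * depth  # ~1,770 Pa
print(f"dry: {tau_dry:.0f} Pa, submerged: {tau_submerged:.0f} Pa")
```

In this idealized model the friction coefficient itself doesn’t change underwater; buoyancy reduces the effective stress pressing the grains together, and the shear strength drops in proportion.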
It’s just because the particles’ effective weight goes down, so the shear strength of the soil mass goes down too. And this doesn’t just happen under lakes and oceans. Soil holds water - I’ve covered a lot of topics on groundwater if you want to learn more. There’s this concept of the “water table,” below which the soil is saturated and behaves in the same way as my little demonstration. The water between the particles, called “pore water,” exerts pressure, pushing them away from one another and reducing the friction between them. Shear strength usually goes down for saturated soils. But, if you’ve played with sand, you might be thinking: “This doesn’t really track with my intuitions.” When you build a sand castle, you know, the dry sand falls apart, and the wet sand holds together. So let’s dive a little deeper. Friction actually isn’t the only factor that contributes to shear strength in a soil. For example, I can try to shear this clay, and there’s some resistance there, even though there is no confining force pushing the particles together. In finer-grained soils like clay, the particles themselves have molecular-level attractions that make them, basically, sticky. Geotechnical engineers call this cohesion. And this is where sand gets a little sneaky. Water pressure in the pores between particles can push them away from each other, but it can also do the opposite. In this demo, I have some dry sand in a container with a riser pipe, connected to the side, to show the water table. And I’ve dyed my water black to make it easier to see. When I pour the water into the riser, what do you think is going to happen? Will the water table in the soil be higher, lower, or exactly the same as the level in the riser? Let’s try it out. Pretty much right away, you can see what happens. The sand essentially sucks the water out of the riser, lifting it higher than the level outside the sand.
If I let this settle out for a while, you can see that there’s a pretty big difference in levels, and this is largely due to capillary action. Just like a paper towel, water wicks up into the sand against the force of gravity. This capillary action actually creates negative pressure within the soil (compared to the ambient air pressure). In other words, it pulls the particles against each other, increasing the strength of the soil. It basically gives the sand cohesion, additional shear strength that doesn’t require any confining pressure. And again, if you’ve played with sand, you know there’s a sweet spot when it comes to water. Too dry, and it won’t hold together. Too wet, same thing. But if there’s just enough water, you get this strengthening effect. However, unlike clay, which has real cohesion, that suction pressure can be temporary. And it’s not the only factor that makes sand tricky. The shear strength of sand also depends on how well-packed those particles are. Beach sand is usually well-consolidated because of the constant crashing waves. Let’s zoom in on that a bit. If the particles are packed together, they essentially lock together. You can see that shearing them apart involves not just a sliding motion, but also a slight expansion in volume. Engineers call this dilatancy, and you don’t need a microscope to see it. In fact, you’ve probably noticed this walking around on the beach, especially when the water table is close to the surface. Even a small amount of movement causes the sand to expand, and it’s easy to see like this because it expands above the surface of the water. The practical result of this dilatant property is that sand gets stronger as it moves, but only up to a point. Once the sand expands enough that the particles are no longer interlocked together, there’s a lot less friction between them. If you plot movement, called strain, against shear strength, you get a peak and then a sudden loss of strength.
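The wicking in the riser demo can be estimated with the standard capillary-rise formula (Jurin’s law). The pore radius here is a guess for medium sand, not a measured value:

```python
gamma = 0.0728  # N/m, surface tension of water at ~20 C
rho = 1000      # kg/m^3, density of water
g = 9.81        # m/s^2
r = 1e-4        # m, ~0.1 mm effective pore radius (a guess for medium sand)

# Jurin's law, assuming full wetting (contact angle ~ 0):
h = 2 * gamma / (rho * g * r)
print(f"capillary rise: {h * 100:.0f} cm")  # ~15 cm for this pore size
```

Finer sand has smaller pores and wicks higher.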
Hopefully you’re starting to see how all this material science adds up to a real problem. The shear strength of a soil, basically its ability to avoid collapse, is not an inherent property: it depends on a lot of factors; it can change pretty quickly; and its behavior is not really intuitive. Most of us don’t have a ton of experience with excavations. That’s part of the reason it’s so fun to go to the beach and dig a hole in the first place. We just don’t get to excavate that much in our everyday lives. So, at least for a lot of us, it’s just a natural instinct to do some recreational digging. You excavate a small hole. It’s fun. It’s interesting. The wet sand is holding up around the edges, so you dig deeper. Some people give up after the novelty wears off. Some get their friends or their kids involved to keep going. Eventually, the hole gets big enough that you have to get inside it to keep digging. With the suction pressure from the water and the shear strengthening through dilatancy, the walls have been holding the entire time, so there’s no reason to assume that they won’t just keep holding. But inside the surrounding sand, things are changing. Sand is permeable to water, meaning water moves through it pretty freely. It doesn’t take a big change to upset that delicate balance of wetness that gives sand its stability. The tide could be going out, lowering the water table and thus drying out the soil at the surface. Alternatively, a wave or the tide could add water to the surface sand, reducing the suction pressure. At the same time, tiny movements within the slopes are strengthening the sand as it tries to dilate in volume. But each little movement pushes toward that peak strength, after which it suddenly goes away.
It happens suddenly, and if you happen to be inside a deep hole when it does, you might be just fine, like our little friend here, but if a bigger section of the wall collapses, your chance of surviving is slim. Soil is heavy. Sand has about two-and-a-half times the density of water. It just doesn’t take that much of it to trap a person. This is not just something that happens to people on vacation, by the way. Collapsing trenches and excavations are one of the most common causes of fatal construction incidents. In fact, if you live in a country with workplace health and safety laws, it’s pretty much guaranteed that within those laws are rules about working in trenches and excavations. In the US, OSHA has a detailed set of guidelines on how to stay safe when working at the bottom of a hole, including how steep slopes can be depending on the types of soil, and the devices used to shore up an excavation to keep it from collapsing while people are inside. And for certain circumstances where the risks get high enough or the excavation doesn’t fit neatly into these simplified categories, they require that a professional engineer be involved. So does all this mean that anyone who’s not an engineer just shouldn’t dig holes at the beach? If you know me, you know I would never agree with that. I don’t want to come off too earnest here, but we learn through interaction. Soil and rock mechanics are incredibly important to every part of the built environment, and I think everyone should have a chance to play with sand, to get muddy and dirty, to engage and connect and commune with the stuff on which everything gets built. So, by all means, dig holes at the beach. Just don’t dig them so deep. The typical recommendation I see is to avoid going into a hole deeper than your knees. That’s pretty conservative. If you have kids with you, it’s really not much at all.
If you want to follow OSHA guidelines, you can go a little bigger: up to 20 feet (or 6 meters) in depth, as long as you slope the sides of your hole at one-and-a-half to one, or about 34 degrees above horizontal. You know, ultimately you have to decide what’s safe for you and your family. My point is that this doesn’t have to be a hazard if you use a little engineering prudence. And I hope understanding some of the sneaky behaviors of beach sand can help you delight in the primitive joy of digging a big hole without putting your life at risk in the process.
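For the record, the “about 34 degrees” is just the 1.5:1 (horizontal to vertical) slope ratio converted to an angle:

```python
import math

horizontal, vertical = 1.5, 1.0  # a 1.5H:1V slope, the flattest ratio mentioned above
angle = math.degrees(math.atan(vertical / horizontal))
print(f"{angle:.1f} degrees above horizontal")  # 33.7 degrees, the "about 34" in the text
```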