More from The Roots of Progress | Articles
In one sense, the concept of progress is simple, straightforward, and uncontroversial. In another sense, it contains an entire worldview. The most basic meaning of “progress” is simply advancement along a path, or more generally from one state to another that is considered more advanced by some standard. (In this sense, progress can be good, neutral, or even bad—e.g., the progress of a disease.) The question is always: advancement along what path, in what direction, by what standard?

Types of progress

“Scientific progress,” “technological progress,” and “economic progress” are relatively straightforward. They are hard to measure, they are multi-dimensional, and we might argue about specific examples—but in general, scientific progress consists of more knowledge, better theories and explanations, a deeper understanding of the universe; technological progress consists of more inventions that work better (more powerfully or reliably or efficiently) and enable us to do more things; economic progress consists of more production, infrastructure, and wealth.

Together, we can call these “material progress”: improvements in our ability to comprehend and to command the material world. Combined with more intangible advances in the level of social organization—institutions, corporations, bureaucracy—these constitute “progress in capabilities”: that is, our ability to do whatever it is we decide on.

True progress

But this form of progress is not an end in itself. True progress is advancement toward the good, toward ultimate values—call this “ultimate progress,” or “progress in outcomes.” Defining this depends on axiology; that is, on our theory of value.

To a humanist, ultimate progress means progress in human well-being: “human progress.” Not everyone agrees on what constitutes well-being, but it certainly includes health, happiness, and life satisfaction. In my opinion, human well-being is not purely material, and not purely hedonic: it also includes “spiritual” values such as knowledge, beauty, love, adventure, and purpose.

The humanist also sees other kinds of progress contributing to human well-being: “moral progress,” such as the decline of violence, the elimination of slavery, and the spread of equal rights for all races and sexes; and more broadly “social progress,” such as the evolution from monarchy to representative democracy, or the spread of education and especially literacy.

Others have different standards. Biologist David Graber called himself a “biocentrist,” by which he meant:

… those of us who value wildness for its own sake, not for what value it confers upon mankind. … We are not interested in the utility of a particular species, or free-flowing river, or ecosystem, to mankind. They have intrinsic value, more value—to me—than another human body, or a billion of them. … Human happiness, and certainly human fecundity, are not as important as a wild and healthy planet.

By this standard, virtually all human activity is antithetical to progress: Graber called humans “a cancer… a plague upon ourselves and upon the Earth.” Or for another example, one Lutheran stated that his “primary measure of the goodness of a society is the population share which is a baptized Christian and regularly attending church.”

The idea of progress isn’t completely incompatible with some flavors of environmentalism or of religion (and there are both Christians and environmentalists in the progress movement!)
but these examples show that it is possible to focus on a non-human standard, such as God or Nature, to the point where human health and happiness become irrelevant or even diametrically opposed to “progress.”

Unqualified progress

What are we talking about when we refer to “progress” unqualified, as in “the progress of mankind” or “the roots of progress”? “Progress” in this sense is the concept of material progress, social progress, and human progress as a unified whole. It is based on the premise that progress in capabilities really does on the whole lead to progress in outcomes. This doesn’t mean that all aspects of progress move in lockstep—they don’t. It means that all aspects of progress support each other and over the long term depend on each other; they are intertwined and ultimately inseparable.

Consider, for instance, how Patrick Collison and Tyler Cowen defined the term in their article calling for “progress studies”:

By “progress,” we mean the combination of economic, technological, scientific, cultural, and organizational advancement that has transformed our lives and raised standards of living over the past couple of centuries.

David Deutsch, in The Beginning of Infinity, is even more explicit, saying that progress includes “improvements not only in scientific understanding, but also in technology, political institutions, moral values, art, and every aspect of human welfare.”

Skepticism of this idea of progress is sometimes expressed as: “progress towards what?” The undertone of this question is: “in your focus on material progress, you have lost sight of social and/or human progress.” On the premise that different forms of progress are diverging and even coming into opposition, this is an urgent challenge; on the premise that progress is a unified whole, it is a valuable intellectual question but not a major dilemma.

Historical progress

“Progress” is also an interpretation of history according to which all these forms of progress have, by and large, been happening. In this sense, the study of “progress” is the intersection of axiology and history: given a standard of value, are things getting better? In Steven Pinker’s book Enlightenment Now: The Case for Reason, Science, Humanism, and Progress, the bulk of the chapters are devoted to documenting this history. Many of the charts in that book were sourced from Our World in Data, which also emphasizes the historical reality of progress.

So-called “progress”

Not everyone agrees with this concept of progress. It depends on an Enlightenment worldview that includes confidence in reason and science, and a humanist morality.

One argument against the idea of progress claims that material progress has not actually led to human well-being. Perhaps the benefits of progress are outweighed by the costs and risks: health hazards, technological unemployment, environmental damage, existential threats, etc. Some downplay or deny the benefits themselves, arguing that material progress doesn’t increase happiness (owing to the hedonic treadmill), that it doesn’t satisfy our spiritual values, or that it degrades our moral character. Rousseau famously asserted that “the progress of the sciences and the arts has added nothing to our true happiness” and that “our souls have become corrupted to the extent that our sciences and our arts have advanced towards perfection.” Others, as mentioned above, argue for a different standard of value altogether, such as nature or God.
(Often these arguments contain some equivocation between whether these things are good in themselves, or whether we should value them because they are good for human well-being over the long term.)

When people start to conclude that progress is not in fact good, they talk about this as no longer “believing in progress.” Historian Carl Becker, writing in the shadow of World War I, said that “the fact of progress is disputed and the doctrine discredited,” and asked: “May we still, in whatever different fashion, believe in the progress of mankind?” In 1991, Christopher Lasch asked:

How does it happen that serious people continue to believe in progress, in the face of massive evidence that might have been expected to refute the idea of progress once and for all?

Those who dispute the idea of progress often avoid the term, or quarantine it in scare quotes: so-called “progress.” When Jeremy Caradonna questioned the concept in The Atlantic, the headline was: “Is ‘Progress’ Good for Humanity?” One of the first court rulings on environmental protection law, in 1971, said that such law represented “the commitment of the Government to control, at long last, the destructive engine of material ‘progress.’” Or consider this from Guns, Germs, and Steel:

… I do not assume that industrialized states are “better” than hunter-gatherer tribes, or that the abandonment of the hunter-gatherer lifestyle for iron-based statehood represents “progress,” or that it has led to an increase in human happiness.

The idea of progress is inherently an idea that progress, overall, is good. If “progress” is destructive, if it does not in fact improve human well-being, then it hardly deserves the name.

Contrast this with the concept of growth. “Growth,” writ large, refers to an increase in the population, the economy, and the scale of human organization and activity. It is not inherently good: everyone agrees that it is happening, but some are against it; some even define themselves by being against it (the “degrowth” movement). No one is against progress; they are only against “progress”: that is, people either believe in it or deny it. The most important question in the philosophy of progress, then, is whether the idea of progress is valid—whether “progress” is real.

“Progress” in the 19th century

Before the World Wars, there was an idea of progress that went even beyond what I have defined above, and which contained at least two major errors. One error was the idea that progress is inevitable. Becker, in the essay quoted above, said that according to “the doctrine of progress,”

the Idea or the Dialectic or Natural Law, functioning through the conscious purposes or the unconscious activities of men, could be counted on to safeguard mankind against future hazards. … At the present moment the world seems indeed out of joint, and it is difficult to believe with any conviction that a power not ourselves … will ever set it right.

(Emphasis added.)

The other was the idea that moral progress was so closely connected to material progress that they would always move together. Condorcet believed that prosperity would “naturally dispose men to humanity, to benevolence and to justice,” and that “nature has connected, by a chain which cannot be broken, truth, happiness, and virtue.” The 20th century, with the outbreak of world war and the rise of totalitarianism, proved these ideas disastrously wrong.

“Progress” in the 21st century and beyond

To move forward, we need a wiser, more mature idea of progress.
Progress is not automatic or inevitable. It depends on choice and effort. It is up to us.

Progress is not automatically good. It must be steered. Progress always creates new problems, and they don’t get solved automatically. Solving them requires active focus and effort, and this is a part of progress, too.

Material progress does not automatically lead to moral progress. Technology within an evil social system can do more harm than good. We must commit to improving morality and society along with science, technology, and industry.

With these lessons well learned, we can rescue the idea of progress and carry it forward into the 21st century and beyond.
What is the ideal size of the human population? One common answer is “much smaller.” Paul Ehrlich, co-author of The Population Bomb (1968), has as recently as 2018 promoted the idea that “the world’s optimum population is less than two billion people,” a reduction of the current population by about 75%. And Ehrlich is a piker compared to Jane Goodall, who said that many of our problems would go away “if there was the size of population that there was 500 years ago”—that is, around 500 million people, a reduction of over 90%. This is a static ideal of a “sustainable” population.

Regular readers of this blog can cite many objections to this view. Resources are not static. Historically, as we run out of a resource (whale oil, elephant tusks, seabird guano), we transition to a new technology based on a more abundant resource—and there are basically no major examples of catastrophic resource shortages in the industrial age. The carrying capacity of the planet is not fixed, but a function of technology; and side effects such as pollution or climate change are just more problems to be solved. As long as we can keep coming up with new ideas, growth can continue.

But those are only reasons why a larger population is not a problem. Is there a positive reason to want a larger population? I’m going to argue yes—that the ideal human population is not “much smaller,” but “ever larger.”

Selfish reasons to want more humans

Let me get one thing out of the way up front. One argument for a larger population is based on utilitarianism, specifically the version of it that says that what is good is the sum total of happiness across all humans. If each additional life adds to the cosmic scoreboard of goodness, then it’s obviously better to have more people (unless they are so miserable that their lives are literally not worth living). I’m not going to argue from this premise, in part because I don’t need to and more importantly because I don’t buy it myself. (Among other things, it leads to paradoxes such as the idea that a population of thriving, extremely happy people is not as good as a sufficiently-larger population of people who are just barely happy.)

Instead, I’m going to argue that a larger population is better for every individual—that there are selfish reasons to want more humans. First I’ll give some examples of how this is true, and then I’ll draw out some of the deeper reasons for it.

More geniuses

First, more people means more outliers—more super-intelligent, super-creative, or super-talented people, to produce great art, architecture, music, philosophy, science, and inventions. If genius is defined as one-in-a-million level intelligence, then every billion people means another thousand geniuses—to work on all of the problems and opportunities of humanity, to the benefit of all.

More progress

A larger population means faster scientific, technical, and economic progress, for several reasons:

Total investment. More people means more total R&D: more researchers, and more surplus wealth to invest in it.

Specialization. In the economy generally, the division of labor increases productivity, as each worker can specialize and become expert at their craft (“Smithian growth”). In R&D, each researcher can specialize in their field.

Larger markets support more R&D investment, which lets companies pick off higher-hanging fruit.
I’ve given the example of the threshing machine: it was difficult enough to manufacture that it didn’t pay for a local artisan to make them only for their town, but it was profitable to serve a regional market. Alex Tabarrok gives the example of the market for cancer drugs expanding as large countries such as India and China become wealthier. Very high production-value entertainment, such as movies, TV, and games, is possible only because it has mass audiences.

More ambitious projects need a certain critical mass of resources behind them. Ancient Egyptian civilization built a large irrigation system to make the best use of the Nile floodwaters for agriculture, a feat that would not have been possible for a small tribe or chiefdom. The Apollo Program, at its peak in the 1960s, took over 4% of the US federal budget, but 4% would not have been enough if the population and the economy were half the size. If someday humanity takes on a grand project such as a space elevator or a Dyson sphere, it will require an enormous team and an enormous wealth surplus to fund it.

In fact, these factors may represent not only opportunities but requirements for progress. There is evidence that simply to maintain a constant rate of exponential economic growth requires exponentially growing investment in R&D. This investment is partly financial capital, but also partly human capital—that is, we need an exponentially growing base of researchers.

One way to understand this is that if each researcher can push forward a constant “surface area” of the frontier, then as the frontier expands, a larger number of researchers is needed to keep pushing all of it forward. Two hundred years ago, a small number of scientists were enough to investigate electrical and magnetic phenomena; today, millions of scientists and engineers are productively employed working out all of the details and implications of those phenomena, both in the lab and in the electrical, electronics, and computer hardware and software industries.

But it’s not even clear that each researcher can push forward a constant surface area of the frontier. As that frontier moves further out, the “burden of knowledge” grows: each researcher now has to study and learn more in order to even get to the frontier. Doing so might force them to specialize even further. Newton could make major contributions to fields as diverse as gravitation and optics, because the very basics of those fields were still being figured out; today, a researcher might devote their whole career to a sub-sub-discipline such as nuclear astrophysics.

But in the long run, an exponentially growing base of researchers is impossible without an exponentially growing population. In fact, in some models of economic growth, the long-run growth rate in per-capita GDP is directly proportional to the growth rate of the population.

More options

Even setting aside growth and progress—looking at a static snapshot of a society—a world with more people is a world with more choices, among greater variety:

Better matching for aesthetics, style, and taste. A bigger society has more cuisines, more architectural styles, more types of fashion, more sub-genres of entertainment. This also improves as the world gets more connected: for instance, the wide variety of ethnic restaurants in every major city is a recent phenomenon; it was only decades ago that pizza, to Americans, was an unfamiliar foreign cuisine.

Better matching to careers. A bigger economy has more options for what to do with your life.
In a hunter-gatherer society, you are lucky if you get to decide whether to be a hunter or a gatherer. In an agricultural economy, you’re probably going to be a farmer, or maybe some sort of artisan. Today there’s a much wider set of choices, from pilot to spreadsheet jockey to lab technician.

Better matching to other people. A bigger world gives you a greater chance to find the perfect partner for you: the best co-founder for your business, the best lyricist for your songs, the best partner in marriage.

More niche communities. Whatever your quirky interest, worldview, or aesthetic—the more people you can be in touch with, the more likely you are to find others like you. Even if you’re one in a million, in a city of ten million people, there are enough of you for a small club. In a world of eight billion, there are enough of you for a thriving subreddit.

More niche markets. Similarly, in a larger, more connected economy, there are more people to economically support your quirky interests. Your favorite Etsy or Patreon creator can find the “one thousand true fans” they need to make a living.

Deeper patterns

When I look at the above, here are some of the underlying reasons:

The existence of non-rival goods. Rival goods need to be divided up; more people just create more competition for them. But non-rival goods can be shared by all. A larger population and economy, all else being equal, will produce more non-rival goods, which benefits everyone.

Economies of scale. In particular, often total costs are a combination of fixed and variable costs. The more output, the more the fixed costs can be amortized, lowering average cost.

Network effects and Metcalfe’s law. Value in a network is generated not by nodes but by connections, and the more nodes there are total, the more connections are possible per node. Metcalfe’s law quantifies this: the number of possible connections in a network is proportional to the square of the number of nodes. (A small numerical sketch of this scaling appears at the end of this piece.)

All of these create agglomeration effects: bigger societies are better for everyone.

A dynamic world

I assume that when Ehrlich and Goodall advocate for much smaller populations, they aren’t literally calling for genocide or hoping for a global catastrophe (although Ehrlich is happy with coercive fertility control programs, and other anti-humanists have expressed hope for “the right virus to come along”). Even so, the world they advocate is a greatly impoverished and stagnant one: a world with fewer discoveries, fewer inventions, fewer works of creative genius, fewer cures for fewer diseases, fewer choices, fewer soulmates.

A world with a large and growing population is a dynamic world that can create and sustain progress.

For a different angle on the same thesis, see “Forget About Overpopulation, Soon There Will Be Too Few Humans,” by Roots of Progress fellow Maarten Boudry.
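A small illustrative sketch of the scaling mentioned above (my own numbers, assuming only the one-in-a-million "genius" threshold from the essay and the standard n(n−1)/2 count of possible pairwise connections; the populations chosen are arbitrary):

```python
# Illustrative arithmetic only: how "outliers" and possible connections scale
# with population size. Not a model of actual talent or network value.

def geniuses(population, rarity=1_000_000):
    """People at a one-in-`rarity` level of ability."""
    return population // rarity

def possible_connections(population):
    """Metcalfe-style count of possible pairwise connections, n*(n-1)/2."""
    return population * (population - 1) // 2

for pop in [500_000_000, 2_000_000_000, 8_000_000_000]:
    print(f"population {pop:>13,}: "
          f"{geniuses(pop):>6,} one-in-a-million outliers, "
          f"{possible_connections(pop):.2e} possible connections")
```

Going from 500 million to 8 billion people multiplies the outlier count by 16 but the possible connections by roughly 256, which is the quadratic effect the essay points to.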
When Galileo wanted to study the heavens through his telescope, he got money from those legendary patrons of the Renaissance, the Medici. To win their favor, when he discovered the moons of Jupiter, he named them the Medicean Stars. Other scientists and inventors offered flashy gifts, such as Cornelis Drebbel’s perpetuum mobile (a sort of astronomical clock) given to King James, who made Drebbel court engineer in return. The other way to do research in those days was to be independently wealthy: the Victorian model of the gentleman scientist.

Eventually we decided that requiring researchers to seek wealthy patrons or have independent means was not the best way to do science. Today, researchers, in their role as “principal investigators” (PIs), apply to science funders for grants. In the US, the NIH spends nearly $48B annually, and the NSF over $11B, mainly to give such grants. Compared to the Renaissance, it is a rational, objective, democratic system.

However, I have come to believe that this principal investigator model is deeply broken and needs to be replaced.

That was the thought at the top of my mind coming out of a working group on “Accelerating Science” hosted by the Santa Fe Institute a few months ago. (The thoughts in this essay were inspired by many of the participants, but I take responsibility for any opinions expressed here. My thinking on this was also influenced by a talk given by James Phillips at a previous metascience conference. My own talk at the workshop was written up here earlier.)

What should we do instead of the PI model? Funding should go in a single block to a relatively large research organization of, say, hundreds of scientists. This is how some of the most effective, transformative labs in the world have been organized, from Bell Labs to the MRC Laboratory of Molecular Biology. It has been referred to as the “block funding” model.

Here’s why I think this model works:

1. Specialization

A principal investigator has to play multiple roles. They have to do science (researcher), recruit and manage grad students or research assistants (manager), maintain a lab budget (administrator), and write grants (fundraiser). These are different roles, and not everyone has the skill or inclination to do them all. The university model adds teaching, a fifth role.

The block organization allows for specialization: researchers can focus on research, managers can manage, and one leader can fundraise for the whole org. This allows each person to do what they are best at and enjoy, and it frees researchers from spending 30–50% of their time writing grants, as is typical for PIs.

I suspect it also creates more of an opportunity for leadership in research. Research leadership involves having a vision for an area to explore that will be highly fruitful—semiconductors, molecular biology, etc.—and then recruiting talent and resources to the cause. This seems more effective when done at the block level.

Side note: the distinction I’m talking about here, between block funding and PI funding, doesn’t say anything about where the funding comes from or how those decisions are made. But today, researchers are often asked to serve on committees that evaluate grants. Making funding decisions is yet another role we add to researchers, and one that also deserves to be its own specialty (especially since having researchers evaluate their own competitors sets up an inherent conflict of interest).
2. Research freedom and time horizons

There’s nothing inherent to the PI grant model that dictates the size of the grant, the scope of activities it covers, the length of time it is for, or the degree of freedom it allows the researcher. But in practice, PI funding has evolved toward small grants for incremental work, with little freedom for the researcher to change their plans or strategy.

I suspect the block funding model naturally lends itself to larger grants for longer time periods that are more at the vision level. When you’re funding a whole department, you’re funding a mission and placing trust in the leadership of the organization. Also, breakthroughs are unpredictable, but the more people you have working on things, the more regularly they will happen. A lab can justify itself more easily with regular achievements. In this way one person’s accomplishment provides cover to those who are still toiling away.

3. Who evaluates researchers

In the PI model, grant applications are evaluated by funding agencies: in effect, each researcher is evaluated by the external world. In the block model, a researcher is evaluated by their manager and their peers. James Phillips illustrates with a diagram:

A manager who knows the researcher well, who has been following their work closely, and who talks to them about it regularly, can simply make better judgments about who is doing good work and whose programs have potential. (And again, developing good judgment about researchers and their potential is a specialized role—see point 1.)

Further, when a researcher is evaluated impersonally by an external agency, they need to write up their work formally, which adds overhead to the process. They need to explain and justify their plans, which leads to more conservative proposals. They need to show outcomes regularly, which leads to more incremental work. And funding will disproportionately flow to people who are good at fundraising (which, again, deserves to be a specialized role).

To get scientific breakthroughs, we want to allow talented, dedicated people to pursue hunches for long periods of time. This means we need to trust the process, long before we see the outcome. Several participants in the workshop echoed this theme of trust. Trust like that is much stronger when based on a working relationship, rather than simply on a grant proposal.

If the block model is a superior alternative, how do we move towards it? I don’t have a blueprint. I doubt that existing labs will transform themselves into this model. But funders could signal their interest in funding labs like this, and new labs could be created or proposed on this model and seek such funding. I think the first step is spreading this idea.
In December, I went to the Foresight Institute’s Vision Weekend 2023 in San Francisco. I had a lot of fun talking to a bunch of weird and ambitious geeks about the glorious abundant technological future. Here are a few things I learned about (with the caveat that this is mostly based on informal conversations with only basic fact-checking, not deep research):

Cellular reprogramming

Aging doesn’t only happen to your body: it happens at the level of individual cells. Over time, cells accumulate waste products and undergo epigenetic changes that are markers of aging. But wait—when a baby is born, it has young cells, even though it grew out of cells that were originally from its older parents. That is, the egg and sperm cells might be 20, 30, or 40 years old, but somehow when they turn into a baby, they get reset to biological age zero. This process is called “reprogramming,” and it happens soon after fertilization.

It turns out that cell reprogramming can be induced by certain proteins, known as the Yamanaka factors, after their discoverer (who won a Nobel for this in 2012). Could we use those proteins to reprogram our own cells, making them youthful again? Maybe. There is a catch: the Yamanaka factors not only clear waste out of cells, they also reset them to become stem cells. You do not want to turn every cell in your body into a stem cell. You don’t even want to turn a small number of them into stem cells: it can give you cancer (which kind of defeats the purpose of a longevity technology).

But there is good news: when you expose cells to the Yamanaka factors, the waste cleanup happens first, and the stem cell transformation happens later. If we can carefully time the exposure, maybe we can get the target effect without the damaging side effects. This is tricky: different tissues respond on different timelines, so you can’t apply the treatment uniformly over the body. There are a lot of details to be worked out here. But it’s an intriguing line of research for longevity, and it’s one of the avenues being explored at Retro Bio, among other places. Here’s a Derek Lowe article with more info and references.

The BFG orbital launch system

If we’re ever going to have a space economy, it has to be a lot cheaper to launch things into space. Space Shuttle launches cost over $65,000/kg, and even the Falcon Heavy costs $1500/kg. Compare to shipping costs on Earth, which are only a few dollars per kilogram.

A big part of the high launch cost in traditional systems is the rocket, which is discarded with each launch. SpaceX is bringing costs down by making reusable rockets that land gently rather than crashing into the ocean, and by making very big rockets for economies of scale (Elon Musk has speculated that Starship could bring costs as low as $10/kg, although this is a ways off, since right now fuel costs alone are close to that amount).

But what if we didn’t need a rocket at all? Rockets are pretty much our only option for propulsion in space, but what if we could give most of the impulse to the payload on Earth?

J. Storrs Hall has proposed the “space pier,” a runway 300 km long mounted atop towers 100 km tall. The payload takes an elevator 100 km up to the top of the tower, thus exiting the atmosphere and much of Earth’s gravity well. Then a linear induction motor accelerates it into orbit along the 300 km track. You could do this with a mere 10 Gs of acceleration, which is survivable by human passengers.
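As a quick sanity check on those space-pier numbers (my own back-of-the-envelope, not from Hall's write-up; it uses only the 10 g and 300 km figures quoted above and ignores everything else):

```python
import math

g = 9.81          # m/s^2, standard gravity
a = 10 * g        # 10 g acceleration, as in the proposal described above
d = 300_000       # m, length of the accelerator track

v_exit = math.sqrt(2 * a * d)   # v^2 = 2*a*d for constant acceleration from rest
t = v_exit / a                  # time spent on the track

print(f"exit velocity ~{v_exit/1000:.1f} km/s after ~{t:.0f} s")
# ~7.7 km/s in about 78 seconds, which is right around the speed needed for
# low Earth orbit, so the quoted track length and acceleration hang together.
```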
Think of it like a Big Friendly Giant (BFG) picking up your payload and then throwing it into orbit. Hall estimates that this could bring launch costs down to $10/kg, if the pier could be built for a mere $10 billion. The only tiny little catch with the space pier is that there is no technology in existence that could build it, and no construction material that a 100 km tower could be made of. Hall suggests that with “mature nanotechnology” we could build the towers out of diamond. OK. So, probably not going to happen this decade.

What can we do now, with today’s technology? Let’s drop the idea of using this for human passengers and just consider relatively durable freight. Now we can use much higher G-forces, which means we don’t need anything close to 300 km of distance to accelerate over. And, does it really have to be 100 km tall? Yes, it’s nice to start with an altitude advantage, and with no atmosphere, but both of those problems can be overcome with sufficient initial velocity. At this point we’re basically just talking about an enormous cannon (a very different kind of BFG).

This is what Longshot Space is doing. Build a big long tube in the desert. Put the payload in it, seal the end with a thin membrane, and pump the air out to create a vacuum. Then rapidly release some compressed gases behind the payload, which bursts through the membrane and exits the tube at Mach 25.

One challenge with this is that a gas can only expand as fast as the speed of sound in that gas. In air this is, of course, a lot less than Mach 25. One thing that helps is to use a lighter gas, in which the speed of sound is higher, such as helium or (for the very brave) hydrogen. Another part of the solution is to give the payload a long, wedge-shaped tail. The expanding gases push sideways on this tail, which through the magic of simple machines translates into a much faster push forwards. There’s a brief discussion and illustration of the pneumatics in this video.

Now, if you are trying to envision “big long tube in the desert”, you might be wondering: is the tube angled upwards or something? No. It is basically lying flat on the ground. It is expensive to build a long straight thing that points up: you have to dig a deep hole and/or build a tall tower. What about putting it on the side of a mountain, which naturally points up? Building things on mountains is also hard; in addition, mountains are special and nobody wants to give you one. It’s much easier to haul lots of materials into the middle of the desert; also there is lots of room out there and the real estate is cheap.

Next you might be wondering: if the tube is horizontal, isn’t it pointed in the wrong direction to get to space? I thought space was up? Well, yes. There are a few things going on here. One is that if you travel far enough in a straight line, the Earth will curve away from you and you will eventually find yourself in space. Another is that if you shape the projectile such that its center of pressure is in the right place relative to its center of mass, then it will naturally angle upward when it hits the atmosphere. Lastly, if you are trying to get into orbit, most of the velocity you need is actually horizontal anyway. In fact, if and when you reach a circular orbit, you will find that all of your velocity is horizontal. This means that there is no way to get into orbit purely ballistically, with a single impulse imparted from Earth.
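To put rough numbers on that last point (a hedged sketch with assumed values, not Longshot's published figures): Mach 25 at sea-level sound speed is a bit more than circular-orbit speed at low altitude, so a gun launch can supply most of the needed velocity, with the excess going to drag and gravity losses and a small onboard burn still required to circularize.

```python
import math

MU_EARTH = 3.986e14   # m^3/s^2, Earth's gravitational parameter
R_EARTH = 6.371e6     # m, mean Earth radius
C_SOUND = 340.0       # m/s, rough sea-level speed of sound (assumed reference for "Mach 25")

v_muzzle = 25 * C_SOUND                         # ~8.5 km/s leaving the tube
altitude = 200e3                                # m, an assumed low-Earth-orbit altitude
v_circular = math.sqrt(MU_EARTH / (R_EARTH + altitude))

print(f"muzzle speed     ~{v_muzzle/1000:.1f} km/s")
print(f"circular LEO     ~{v_circular/1000:.1f} km/s")
# The ~0.7 km/s margin is a crude proxy for atmospheric and gravity losses;
# whatever the projectile needs beyond that, including the circularization
# burn at altitude, comes from the small rocket described below.
```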
Any satellite, for instance, launched via this system will need its own rocket propulsion in order to circularize the orbit once it reaches altitude (even leaving aside continual orbital adjustments during its service lifetime). But we’re now talking about a relatively small rocket with a small amount of fuel, not the big multi-stage things that you need to blast off from the surface. And presumably someday we will be delivering food, fuel, tools, etc. to space in packages that just need to be caught by whoever is receiving them.

Longshot estimates that this system, like Starship or the space pier, could get launch costs down to about $10/kg. This might be cheap enough that launch prices could be zero, subsidized by contracts to buy fuel or maintenance, in a space-age version of “give away the razor and sell the blades.” Not only would this business model help grow the space economy, it would also prove wrong all the economists who have been telling us for decades that “there’s no such thing as a free launch.”

Mars could be terraformed in our lifetimes

Terraforming a planet sounds like a geological process, and so I had sort of thought that it would require geological timescales, or if it could really be accelerated, at least a matter of centuries or so. You drop off some algae or something on a rocky planet, and then your distant descendants return one day to find a verdant paradise. So I was surprised to learn that major changes on Mars could, in principle, be made on a schedule much shorter than a single human lifespan.

Let’s back up. Mars is a real fixer-upper of a planet. Its temperature varies widely, averaging about −60 °C; its atmosphere is thin and mostly carbon dioxide. This severely depresses its real estate values.

Suppose we wanted to start by significantly warming the planet. How do you do that? Let’s assume Mars’s orbit cannot be changed—I mean, we’re going to get in enough trouble with the Sierra Club as it is—so the total flux of solar energy reaching the planet is constant. What we can do is to trap a bit more of that energy on the planet, and prevent it from radiating out into space. In other words, we need to enhance Mars’s greenhouse effect. And the way to do that is to give it a greenhouse gas.

Wait, we just said that Mars’s atmosphere is mostly CO2, which is a notorious greenhouse gas, so why isn’t Mars warm already? It’s just not enough: the atmosphere is very thin (less than 1% of the pressure of Earth’s atmosphere), and what CO2 there is only provides about 5° of warming. We’re going to need to add more GHG. What could it be? Well, for starters, given the volumes required, it should be composed of elements that already exist on Mars. With the ingredients we have, what can we make?

Could we get more CO2 in the atmosphere? There is more CO2 on/under the surface, in frozen form, but even that is not enough for the task. We need something else. What about CFCs? As a greenhouse gas, they are about four orders of magnitude more efficient than CO2, so we’d need a lot less of them. However, they require fluorine, which is very rare in the Martian soil, and we’d still need about 100 gigatons of it. This is not encouraging.

One thing Mars does have a good amount of is metal, such as iron, aluminum, and magnesium. Now metals, you might be thinking, are not generally known as greenhouse gases. But small particles of conductive metal, with the right size and shape, can act as one.
A recent paper found through simulation that “nanorods” about 9 microns long, half the wavelength of the infrared thermal radiation given off by a planet, would scatter that radiation back to the surface (Ansari, Kite, Ramirez, Steele, and Mohseni, “Warming Mars with artificial aerosol appears to be feasible”—no preprint online, but this poster seems to represent earlier work).

Suppose we aim to warm the planet by about 30 °C, enough to melt surface water in the polar regions during the summer, and bring Mars much closer to Earth temperatures. AKRSM’s simulation says that we would need to put about 400 mg/m³ of nanorods into the Martian sky, an efficiency (in warming per unit mass) more than 2000x greater than previously proposed methods. The particles would settle out of the atmosphere slowly, at less than 1/100 the rate of natural Mars dust, so only about 30 liters/sec of them would need to be released continuously. If we used iron, this would require mining a million cubic meters of iron per year—quite a lot, but less than 1% of what we do on Earth (a quick arithmetic check appears at the end of this piece). And the particles, like other Martian dust, would be lifted high in the atmosphere by updrafts, so they could be conveniently released from close to the surface.

Wouldn’t metal nanoparticles be potentially hazardous to breathe? Yes, but this is already a problem from Mars’s naturally dusty atmosphere, and the nanorods wouldn’t make it significantly worse. (However, this will have to be solved somehow if we’re going to make Mars habitable.)

Kite told me that if we started now, given the capabilities of Starship, we could achieve the warming in a mere twenty years. Most of that time is just getting equipment to Mars, mining the iron, manufacturing the nanorods, and then waiting about a year for Martian winds to mix them throughout the atmosphere. Since Mars has no oceans to provide thermal inertia, the actual warming after that point only takes about a month.

Kite is interested in talking to people about the design of the nanorod factory. He wants to get a size/weight/power estimate and an outline design for the factory, to make an initial estimate of how many Starship landings would be needed. Contact him at edwin.kite@gmail.com.

I have not yet gotten Kite and Longshot together to figure out if we can shoot the equipment directly to Mars using one really enormous space cannon.

Thanks to Reason, Mike Grace, and Edwin Kite for conversations and for commenting on a draft of this essay. Any errors or omissions above are entirely my own.
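The promised arithmetic check on the mining figure (my own back-of-the-envelope, using the 30 liters/sec release rate quoted above and an assumed density for solid iron; the paper's exact assumptions may differ):

```python
SECONDS_PER_YEAR = 3.156e7
release_rate = 30e-3       # m^3/s, i.e. 30 liters per second as quoted above
iron_density = 7870        # kg/m^3, solid iron (assumed)

volume_per_year = release_rate * SECONDS_PER_YEAR   # m^3 of nanorods per year
mass_per_year = volume_per_year * iron_density      # kg of iron per year

print(f"~{volume_per_year:.1e} m^3 of nanorods per year "
      f"(~{mass_per_year/1e9:.1f} million tonnes of iron)")
# Roughly a million cubic meters a year, matching the figure in the text, and
# a small fraction of the billions of tonnes of iron ore mined annually on Earth.
```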
More in science
[Note that this article is a transcript of the video embedded above.]

“The big black stacks of the Ilium Works of the Federal Apparatus Corporation spewed acid fumes and soot over the hundreds of men and women who were lined up before the red-brick employment office.” That’s the first line of one of my favorite short stories, written by Kurt Vonnegut in 1955. It paints a picture of a dystopian future that, thankfully, didn’t really come to be, in part because of those stacks.

In some ways, air pollution is kind of a part of life. I’d love to live in a world where the systems, materials and processes that make my life possible didn’t come with any emissions, but it’s just not the case... From the time that humans discovered fire, we’ve been methodically calculating the benefits of warmth, comfort, and cooking against the disadvantages of carbon monoxide exposure and particulate matter less than 2.5 microns in diameter… Maybe not in that exact framework, but basically, since the dawn of humanity, we’ve had to deal with smoke one way or another.

Since we can’t accomplish much without putting unwanted stuff into the air, the next best thing is to manage how and where it happens to try and minimize its impact on public health. Of course, any time you have a balancing act between technical issues, the engineers get involved, not so much to help decide where to draw the line, but to develop systems that can stay below it. And that’s where the smokestack comes in. Its function probably seems obvious; you might have a chimney in your house that does a similar job. But I want to give you a peek behind the curtain into the Ilium Works of the Federal Apparatus Corporation of today and show you what goes into engineering one of these stacks at a large industrial facility. I’m Grady, and this is Practical Engineering.

We put a lot of bad stuff in the air, and in a lot of different ways. There are roughly 200 regulated hazardous air pollutants in the United States, many with names I can barely pronounce. In many cases, the industries that would release these contaminants are required to deal with them at the source. A wide range of control technologies are put into place to clean dangerous pollutants from the air before it’s released into the environment. One example is coal-fired power plants. Coal, in particular, releases a plethora of pollutants when combusted, so, in many countries, modern plants are required to install control systems. Catalytic reactors remove nitrogen oxides. Electrostatic precipitators collect particulates. Scrubbers use lime (the mineral, not the fruit) to strip away sulfur dioxide. And I could go on. In some cases, emission control systems can represent a significant proportion of the costs involved in building and operating a plant.

But these primary emission controls aren’t always feasible for every pollutant, at least not for 100 percent removal. There’s a very old saying that “the solution to pollution is dilution.” It’s not really true on a global scale. Case in point: There’s no way to dilute the concentration of carbon dioxide in the atmosphere, or rather, it’s already as dilute as it’s going to get. But, it can be true on a local scale. Many pollutants that affect human health and the environment are short-lived; they chemically react or decompose in the atmosphere over time instead of accumulating indefinitely. And, for a lot of chemicals, there are concentration thresholds below which the consequences on human health are negligible.
In those cases, dilution, or really dispersion, is a sound strategy to reduce their negative impacts, and so, in some cases, that’s what we do, particularly at major point sources like factories and power plants.

One of the tricks to dispersion is that many plumes are naturally buoyant. Naturally, I’m going to use my pizza oven to demonstrate this. Not all, but most pollutants we care about are a result of combustion; burning stuff up. So the plume is usually hot. We know hot air is less dense, so it naturally rises. And the hotter it is, the faster that happens. You can see when I first start the fire, there’s not much air movement. But as the fire gets hotter in the oven, the plume speeds up, ultimately rising higher into the air. That’s the whole goal: get the plume high above populated areas where the pollutants can be dispersed to a minimally-harmful concentration.

It sounds like a simple solution: just run our boilers and furnaces super hot to get enough buoyancy for the combustion products to disperse. The problem with that solution is that the whole reason we combust things is usually to recover the heat. So if you’re sending a lot of that heat out of the system, just because it makes the plume disperse better, you’re losing thermodynamic efficiency. It’s wasteful. That’s where the stack comes in. Let me put mine on and show you what I mean.

I took some readings with the anemometers with the stack on and off. The airspeed with the stack on was around double what it was with it off. About a meter per second compared with two. But it’s a little tougher to understand why.

It’s intuitive that as you move higher in a column of fluid, the pressure goes down (since there’s less weight of the fluid above). The deeper you dive in a pool, the more pressure you feel. The higher you fly in a plane or climb a mountain, the lower the pressure. The slope of that line is proportional to a fluid’s density. You don’t feel much of a pressure difference climbing a set of stairs because air isn’t very dense. If you travel the same distance in water, you’ll definitely notice the difference.

So let’s look at two columns of fluid. One is the ambient air and the other is the air inside a stack. Since it’s hotter, the air inside the stack is less dense. Both columns start at the same pressure at the bottom, but the higher you go, the more the pressure diverges. It’s kind of like deep sea diving in reverse. In water, the deeper you go into the dense water, the greater the pressure you feel. In a stack, the higher you are in a column of hot air, the more buoyant you feel compared to the outside air. This is the genius of a smokestack. It creates this difference in pressure between the inside and outside that drives greater airflow for a given temperature.

Here’s the basic equation for a stack effect. I like to look at equations like this divided into what we can control and what we can’t. We don’t get to adjust the atmospheric pressure, the outside temperature, and this is just a constant. But you can see, with a stack, an engineer now has two knobs to turn: the temperature of the gas inside and the height of the stack.

I did my best to keep the temperature constant in my pizza oven and took some airspeed readings. First with no stack. Then with the stock stack. Then with a megastack. By the way, this melted my anemometer; should have seen that coming. Thankfully, I got the measurements before it melted.
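Backing up for a moment to the stack-effect equation mentioned above: the transcript doesn't reproduce the on-screen equation, but a standard textbook form of the draft relation, with illustrative numbers of my own rather than values from the video, looks like this.

```python
def stack_draft_pressure(height_m, t_inside_k, t_outside_k, p_atm=101_325):
    """Approximate stack-effect draft pressure in pascals.

    dP = C * p_atm * h * (1/T_out - 1/T_in), where C = g*M/R is about 0.0342 K/m
    for air. Atmospheric pressure and the outside temperature are givens; the
    designer's two knobs are the stack height and the flue-gas temperature.
    """
    C = 0.0342  # K/m, combines gravity and the specific gas constant for air
    return C * p_atm * height_m * (1 / t_outside_k - 1 / t_inside_k)

# Illustrative, assumed numbers: a 50 m stack, 127 C flue gas, 15 C outside air.
print(f"{stack_draft_pressure(height_m=50, t_inside_k=400, t_outside_k=288):.0f} Pa of draft")
```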
My megastack nearly doubled the airspeed again at around three-and-a-half meters per second versus the two with just the stack that came with the oven. There’s something really satisfying about this stack effect to me. No moving parts or fancy machinery. Just put a longer pipe and you’ve fundamentally changed the physics of the whole situation. And it’s a really important tool in the environmental engineer’s toolbox to increase airflow upward, allowing contaminants to flow higher into the atmosphere where they can disperse.

But this is not particularly revolutionary… unless you’re talking about the Industrial Revolution. When you look at all the pictures of the factories in the 19th century, those stacks weren’t there to improve air quality, if you can believe it. The increased airflow generated by a stack just created more efficient combustion for the boilers and furnaces. Any benefits to air quality in the cities were secondary. With the advent of diesel and electric motors, we could use forced drafts, reducing the need for a tall stack to increase airflow. That was kind of the decline of the forests of industrial chimneys that marked the landscape in the 19th century. But they’re obviously not all gone, because that secondary benefit of air quality turned into the primary benefit as environmental rules about air pollution became stricter.

Of course, there are some practical limits that aren’t taken into account by that equation I showed. The plume cools down as it moves up the stack to the outside, so its density isn’t constant all the way up. I let my fire die down a bit so it wouldn’t melt the thermometer (learned my lesson), and then took readings inside the oven and at the top of the stack. You can see my pizza oven flue gas is around 210 degrees at the top of the megastack, but it’s roughly 250 inside the oven. After the success of the megastack on my pizza oven, I tried the super-megastack with not much improvement in airflow: about 4 meters per second. The warm air just got too cool by the time it reached the top. And I suspect that frictional drag in the longer pipe also contributed to that as well. So, really, depending on how insulating your stack is, our graph of height versus pressure actually ends up looking like this. And this can be its own engineering challenge. Maybe you’ve gotten back drafts in your fireplace at home because the fire wasn’t big or hot enough to create that large difference in pressure.

You can see there are a lot of factors at play in designing these structures, but so far, all we’ve done is get the air moving faster. But that’s not the end goal. The purpose is to reduce the concentration of pollutants that we’re exposed to. So engineers also have to consider what happens to the plume once it leaves the stack, and that’s where things really get complicated.

In the US, we have National Ambient Air Quality Standards that regulate six so-called “criteria” pollutants that are relatively widespread: carbon monoxide, lead, nitrogen dioxide, ozone, particulates, and sulfur dioxide. We have hard limits on all these compounds with the intention that they are met at all times, in all locations, under all conditions. Unfortunately, that’s not always the case. You can go on EPA’s website and look at the so-called “non-attainment” areas for the various pollutants. But we do strive to meet the standards through a list of measures that is too long to go into here. And that is not an easy thing to do.
Not every source of pollution comes out of a big stationary smokestack where it’s easy to measure and control. Cars, buses, planes, trucks, trains, and even rockets create lots of contaminants that vary by location, season, and time of day. And there are natural processes that contribute as well. Forests and soil microbes release volatile organic compounds that can lead to ozone formation. Volcanic eruptions and wildfires release carbon monoxide and sulfur dioxide. Even dust storms put particulates in the air that can travel across continents.

And hopefully you’re seeing the challenge of designing a smokestack. The primary controls like scrubbers and precipitators get most of the pollutants out, and hopefully all of the ones that can’t be dispersed. But what’s left over and released has to avoid pushing concentrations above the standards. That design has to work within the very complicated and varying context of air chemistry and atmospheric conditions that a designer has no control over.

Let me show you a demo. I have a little fog generator set up in my garage with a small fan simulating the wind. This isn’t a great example because the airflow from the fan is pretty turbulent compared to natural winds. You occasionally get some fog at the surface, but you can see my plume mainly stays above the surface, dispersing as it moves with the wind. But watch what happens when I put a building downstream. The structure changes the airflow, creating a downwash effect and pulling my plume with it. Much more frequently you see the fog at the ground level downstream. And this is just a tiny example of how complex the behavior of these plumes can be. Luckily, there’s a whole field of engineering to characterize it.

There are really just two major transport processes for air pollution. Advection describes how contaminants are carried along by the wind. Diffusion describes how those contaminants spread out through turbulence. Gravity also affects air pollution, but it doesn’t have a significant effect except on heavier-than-air particulates. With some math and simplifications of those two processes, you can do a reasonable job predicting the concentration of any pollutant at any point in space as it moves and disperses through the air. Here’s the basic equation for that, and if you’ll join me for the next 2 hours, we’ll derive this and learn the meaning of each term… Actually, it might take longer than that, so let’s just look at a graphic. You can see that as the plume gets carried along by the wind, it spreads out in what’s basically a bell curve, or Gaussian distribution, in the planes perpendicular to the wind direction.

But even that is a bit too simplified to make any good decisions with, especially when the consequences of getting it wrong fall on public health. A big reason for that is atmospheric stability. And this can make things even more complicated, but I want to explain the basics, because the effect on plumes of gas can be really dramatic. You probably know that air expands as it moves upward; there’s less pressure as you go up because there is less air above you. And as any gas expands, it cools down. So there’s this relationship between height and temperature we call the adiabatic lapse rate. It’s about 10 degrees Celsius for every kilometer up or about 28 Fahrenheit for every mile up. But the actual atmosphere doesn’t always follow this relationship. For example, rising air parcels can cool more slowly than the surrounding air.
This makes them warmer and less dense, so they keep rising, promoting vertical motion in a positive feedback loop called atmospheric instability. You can even get a temperature inversion where you have cooler air below warmer air, something that can happen in the early morning when the ground is cold. And as the environmental lapse rate varies from the adiabatic lapse rate, the plumes from stacks change.

In stable conditions, you usually get a coning plume, similar to what our Gaussian distribution from before predicts. In unstable conditions, you get a lot of mixing, which leads to a looping plume. And things really get weird for temperature inversions because they basically act like lids for vertical movement. You can get a fanning plume that rises to a point, but then only spreads horizontally. You can also get a trapping plume, where the air gets stuck between two inversions. You can have a lofting plume, where the air is above the inversion with stable conditions below and unstable conditions above. And worst of all, you can have a fumigating plume when there are unstable conditions below an inversion, trapping and mixing the plume toward the ground surface. And if you pay attention to smokestacks, fires, and other types of emissions, you can identify these different types of plumes pretty easily.

Hopefully you’re seeing now how much goes into this. Engineers have to keep track of the advection and diffusion, wind speed and direction, atmospheric stability, the effects of terrain and buildings on all those factors, plus the pre-existing concentrations of all the criteria pollutants from other sources, which vary in time and place. All that to demonstrate that your new source of air pollution is not going to push the concentrations at any place, at any time, under any conditions, beyond what the standards allow. That’s a tall order, even for someone who loves Gaussian distributions. And often the answer to that tall order is an even taller smokestack. But to make sure, we use software. The EPA has developed models that can take all these factors into account to simulate, essentially, what would happen if you put a new source of pollution into the world and at what height.

So why are smokestacks so tall? I hope you’ll agree with me that it turns out to be a pretty complicated question. And it’s important, right? These stacks are expensive to build and maintain. Those costs trickle down to us through the costs of the products and services we buy. They have a generally negative visual impact on the landscape. And they have a lot of other engineering challenges too, like resonance in the wind. And on the other hand, we have public health, arguably one of the most critical design criteria that can exist for an engineer. It’s really important to get this right.

I think our air quality regulations do a lot to make sure we strike a good balance here. There are even rules limiting how much credit you can get for building a stack higher for greater dispersion to make sure that we’re not using excessively tall stacks in lieu of more effective, but often more expensive, emission controls and strategies. In a perfect world, none of the materials or industrial processes that we rely on would generate concentrated plumes of hazardous gases. We don’t live in that perfect world, but we are pretty fortunate that, at least in many places on Earth, air quality is something we don’t have to think too much about.
And for that, we have a relatively small industry of environmental professionals to thank: people who do think about it, a whole lot. You know, for a lot of people, this is their whole career; what they ponder from 9-5 every day. Something most of us would rather keep out of mind, they face it head-on, developing engineering theories, professional consensus, sensible regulations, modeling software, and more - just so we can breathe easy.
In the movie Blade Runner 2049 (an excellent film I highly recommend), Ryan Gosling’s character, K, has an AI “wife”, Joi, played by Ana de Armas. K is clearly in love with Joi, who is nothing but software and holograms. In one poignant scene, K is viewing a giant ad for AI companions and sees […] The post AI Therapists first appeared on NeuroLogica Blog.
Why are buildings today austere, while buildings of the past were ornate and elaborately ornamented?
A conversation about EHRs, who their customers actually are, and building apps
The lush forests that have long sustained Cambodia’s Indigenous people have steadily fallen to illicit logging. Now, community members face intimidation and risk arrest as they patrol their forests to document the losses and try to push the government to stop the cutting. Read more on E360 →