If a technology may introduce catastrophic risks, how do you develop it? It occurred to me that the Wright Brothers’ approach to inventing the airplane might make a good case study. The catastrophic risk for them, of course, was dying in a crash. This is exactly what happened to one of the Wrights’ predecessors, Otto Lilienthal, who attempted to fly using a kind of glider. He had many successful experiments, but one day he lost control, fell, and broke his neck. Believe it or not, the news of Lilienthal’s death motivated the Wrights to take up the challenge of flying. Someone had to carry on the work! But they weren’t reckless. They wanted to avoid Lilienthal’s fate. So what was their approach? First, they decided that the key problem to be solved was one of control. Before they even put a motor in a flying machine, they experimented for years with gliders, trying to solve the control problem. As Wilbur Wright wrote in a...
over a year ago


More from The Roots of Progress | Articles

What is progress?

In one sense, the concept of progress is simple, straightforward, and uncontroversial. In another sense, it contains an entire worldview. The most basic meaning of “progress” is simply advancement along a path, or more generally from one state to another that is considered more advanced by some standard. (In this sense, progress can be good, neutral, or even bad—e.g., the progress of a disease.) The question is always: advancement along what path, in what direction, by what standard? Types of progress “Scientific progress,” “technological progress,” and “economic progress” are relatively straightforward. They are hard to measure, they are multi-dimensional, and we might argue about specific examples—but in general, scientific progress consists of more knowledge, better theories and explanations, a deeper understanding of the universe; technological progress consists of more inventions that work better (more powerfully or reliably or efficiently) and enable us to do more things; economic progress consists of more production, infrastructure, and wealth. Together, we can call these “material progress”: improvements in our ability to comprehend and to command the material world. Combined with more intangible advances in the level of social organization—institutions, corporations, bureaucracy—these constitute “progress in capabilities”: that is, our ability to do whatever it is we decide on. True progress But this form of progress is not an end in itself. True progress is advancement toward the good, toward ultimate values—call this “ultimate progress,” or “progress in outcomes.” Defining this depends on axiology; that is, on our theory of value. To a humanist, ultimate progress means progress in human well-being: “human progress.” Not everyone agrees on what constitutes well-being, but it certainly includes health, happiness, and life satisfaction. In my opinion, human well-being is not purely material, and not purely hedonic: it also includes “spiritual” values such as knowledge, beauty, love, adventure, and purpose. The humanist also sees other kinds of progress contributing to human well-being: “moral progress,” such as the decline of violence, the elimination of slavery, and the spread of equal rights for all races and sexes; and more broadly “social progress,” such as the evolution from monarchy to representative democracy, or the spread of education and especially literacy. Others have different standards. Biologist David Graber called himself a “biocentrist,” by which he meant … those of us who value wildness for its own sake, not for what value it confers upon mankind. … We are not interested in the utility of a particular species, or free-flowing river, or ecosystem, to mankind. They have intrinsic value, more value—to me—than another human body, or a billion of them. … Human happiness, and certainly human fecundity, are not as important as a wild and healthy planet. By this standard, virtually all human activity is antithetical to progress: Graber called humans “a cancer… a plague upon ourselves and upon the Earth.” Or for another example, one Lutheran stated that his “primary measure of the goodness of a society is the population share which is a baptized Christian and regularly attending church.” The idea of progress isn’t completely incompatible with some flavors of environmentalism or of religion (and there are both Christians and environmentalists in the progress movement!) 
but these examples show that it is possible to focus on a non-human standard, such as God or Nature, to the point where human health and happiness become irrelevant or even diametrically opposed to “progress.” Unqualified progress What are we talking about when we refer to “progress” unqualified, as in “the progress of mankind” or “the roots of progress”? “Progress” in this sense is the concept of material progress, social progress, and human progress as a unified whole. It is based on the premise that progress in capabilities really does on the whole lead to progress in outcomes. This doesn’t mean that all aspects of progress move in lockstep—they don’t. It means that all aspects of progress support each other and over the long term depend on each other; they are intertwined and ultimately inseparable. Consider, for instance, how Patrick Collison and Tyler Cowen defined the term in their article calling for “progress studies”: By “progress,” we mean the combination of economic, technological, scientific, cultural, and organizational advancement that has transformed our lives and raised standards of living over the past couple of centuries. David Deutsch, in The Beginning of Infinity, is even more explicit, saying that progress includes “improvements not only in scientific understanding, but also in technology, political institutions, moral values, art, and every aspect of human welfare.” Skepticism of this idea of progress is sometimes expressed as: “progress towards what?” The undertone of this question is: “in your focus on material progress, you have lost sight of social and/or human progress.” On the premise that different forms of progress are diverging and even coming into opposition, this is an urgent challenge; on the premise that progress is a unified whole, it is a valuable intellectual question but not a major dilemma. Historical progress “Progress” is also an interpretation of history according to which all these forms of progress have, by and large, been happening. In this sense, the study of “progress” is the intersection of axiology and history: given a standard of value, are things getting better? In Steven Pinker’s book Enlightenment Now: The Case for Reason, Science, Humanism, and Progress, the bulk of the chapters are devoted to documenting this history. Many of the charts in that book were sourced from Our World in Data, which also emphasizes the historical reality of progress. So-called “progress” Not everyone agrees with this concept of progress. It depends on an Enlightenment worldview that includes confidence in reason and science, and a humanist morality. One argument against the idea of progress claims that material progress has not actually led to human well-being. Perhaps the benefits of progress are outweighed by the costs and risks: health hazards, technological unemployment, environmental damage, existential threats, etc. Some downplay or deny the benefits themselves, arguing that material progress doesn’t increase happiness (owing to the hedonic treadmill), that it doesn’t satisfy our spiritual values, or that it degrades our moral character. Rousseau famously asserted that “the progress of the sciences and the arts has added nothing to our true happiness” and that “our souls have become corrupted to the extent that our sciences and our arts have advanced towards perfection.” Others, as mentioned above, argue for a different standard of value altogether, such as nature or God.
(Often these arguments contain some equivocation between whether these things are good in themselves, or whether we should value them because they are good for human well-being over the long term.) When people start to conclude that progress is not in fact good, they talk about this as no longer “believing in progress.” Historian Carl Becker, writing in the shadow of World War I, said that “the fact of progress is disputed and the doctrine discredited,” and asked: “May we still, in whatever different fashion, believe in the progress of mankind?” In 1991, Christopher Lasch asked: How does it happen that serious people continue to believe in progress, in the face of massive evidence that might have been expected to refute the idea of progress once and for all? Those who dispute the idea of progress often avoid the term, or quarantine it in scare quotes: so-called “progress.” When Jeremy Caradonna questioned the concept in The Atlantic, the headline was: “Is ‘Progress’ Good for Humanity?” One of the first court rulings on environmental protection law, in 1971, said that such law represented “the commitment of the Government to control, at long last, the destructive engine of material ‘progress.’” Or consider this from Guns, Germs, and Steel: … I do not assume that industrialized states are “better” than hunter-gatherer tribes, or that the abandonment of the hunter-gatherer lifestyle for iron-based statehood represents “progress,” or that it has led to an increase in human happiness. The idea of progress is inherently an idea that progress, overall, is good. If “progress” is destructive, if it does not in fact improve human well-being, then it hardly deserves the name. Contrast this with the concept of growth. “Growth,” writ large, refers to an increase in the population, the economy, and the scale of human organization and activity. It is not inherently good: everyone agrees that it is happening, but some are against it; some even define themselves by being against it (the “degrowth” movement). No one is against progress; they are only against “progress”: that is, they either believe in it or deny it. The most important question in the philosophy of progress, then, is whether the idea of progress is valid—whether “progress” is real. “Progress” in the 19th century Before the World Wars, there was an idea of progress that went even beyond what I have defined above, and which contained at least two major errors. One error was the idea that progress is inevitable. Becker, in the essay quoted above, said that according to “the doctrine of progress,” the Idea or the Dialectic or Natural Law, functioning through the conscious purposes or the unconscious activities of men, could be counted on to safeguard mankind against future hazards. … At the present moment the world seems indeed out of joint, and it is difficult to believe with any conviction that a power not ourselves … will ever set it right. (Emphasis added.) The other was the idea that moral progress was so closely connected to material progress that they would always move together. Condorcet believed that prosperity would “naturally dispose men to humanity, to benevolence and to justice,” and that “nature has connected, by a chain which cannot be broken, truth, happiness, and virtue.” The 20th century, with the outbreak of world war and the rise of totalitarianism, proved these ideas disastrously wrong. “Progress” in the 21st century and beyond To move forward, we need a wiser, more mature idea of progress.
Progress is not automatic or inevitable. It depends on choice and effort. It is up to us. Progress is not automatically good. It must be steered. Progress always creates new problems, and they don’t get solved automatically. Solving them requires active focus and effort, and this is a part of progress, too. Material progress does not automatically lead to moral progress. Technology within an evil social system can do more harm than good. We must commit to improving morality and society along with science, technology, and industry. With these lessons well learned, we can rescue the idea of progress and carry it forward into the 21st century and beyond.

a year ago 118 votes
Why you, personally, should want a larger human population

What is the ideal size of the human population? One common answer is “much smaller.” Paul Ehrlich, co-author of The Population Bomb (1968), has as recently as 2018 promoted the idea that “the world’s optimum population is less than two billion people,” a reduction of the current population by about 75%. And Ehrlich is a piker compared to Jane Goodall, who said that many of our problems would go away “if there was the size of population that there was 500 years ago”—that is, around 500 million people, a reduction of over 90%. This is a static ideal of a “sustainable” population. Regular readers of this blog can cite many objections to this view. Resources are not static. Historically, as we run out of a resource (whale oil, elephant tusks, seabird guano), we transition to a new technology based on a more abundant resource—and there are basically no major examples of catastrophic resource shortages in the industrial age. The carrying capacity of the planet is not fixed, but a function of technology; and side effects such as pollution or climate change are just more problems to be solved. As long as we can keep coming up with new ideas, growth can continue. But those are only reasons why a larger population is not a problem. Is there a positive reason to want a larger population? I’m going to argue yes—that the ideal human population is not “much smaller,” but “ever larger.” Selfish reasons to want more humans Let me get one thing out of the way up front. One argument for a larger population is based on utilitarianism, specifically the version of it that says that what is good is the sum total of happiness across all humans. If each additional life adds to the cosmic scoreboard of goodness, then it’s obviously better to have more people (unless they are so miserable that their lives are literally not worth living). I’m not going to argue from this premise, in part because I don’t need to and more importantly because I don’t buy it myself. (Among other things, it leads to paradoxes such as the idea that a population of thriving, extremely happy people is not as good as a sufficiently-larger population of people who are just barely happy.) Instead, I’m going to argue that a larger population is better for every individual—that there are selfish reasons to want more humans. First I’ll give some examples of how this is true, and then I’ll draw out some of the deeper reasons for it. More geniuses First, more people means more outliers—more super-intelligent, super-creative, or super-talented people, to produce great art, architecture, music, philosophy, science, and inventions. If genius is defined as one-in-a-million level intelligence, then every billion people means another thousand geniuses—to work on all of the problems and opportunities of humanity, to the benefit of all. More progress A larger population means faster scientific, technical, and economic progress, for several reasons: Total investment. More people means more total R&D: more researchers, and more surplus wealth to invest in it. Specialization. In the economy generally, the division of labor increases productivity, as each worker can specialize and become expert at their craft (“Smithian growth”). In R&D, each researcher can specialize in their field. Larger markets support more R&D investment, which lets companies pick off higher-hanging fruit. 
I’ve given the example of the threshing machine: it was difficult enough to manufacture that it didn’t pay for a local artisan to make them only for their town, but it was profitable to serve a regional market. Alex Tabarrok gives the example of the market for cancer drugs expanding as large countries such as India and China become wealthier. Very high production-value entertainment, such as movies, TV, and games, is possible only because it has mass audiences. More ambitious projects need a certain critical mass of resources behind them. Ancient Egyptian civilization built a large irrigation system to make the best use of the Nile floodwaters for agriculture, a feat that would not have been possible to a small tribe or chiefdom. The Apollo Program, at its peak in the 1960s, took over 4% of the US federal budget, but 4% would not have been enough if the population and the economy were half the size. If someday humanity takes on a grand project such as a space elevator or a Dyson sphere, it will require an enormous team and an enormous wealth surplus to fund it. In fact, these factors may represent not only opportunities but requirements for progress. There is evidence that simply maintaining a constant rate of exponential economic growth requires exponentially growing investment in R&D. This investment is partly financial capital, but also partly human capital—that is, we need an exponentially growing base of researchers. One way to understand this is that if each researcher can push forward a constant “surface area” of the frontier, then as the frontier expands, a larger number of researchers is needed to keep pushing all of it forward. Two hundred years ago, a small number of scientists were enough to investigate electrical and magnetic phenomena; today, millions of scientists and engineers are productively employed working out all of the details and implications of those phenomena, both in the lab and in the electrical, electronics, and computer hardware and software industries. But it’s not even clear that each researcher can push forward a constant surface area of the frontier. As that frontier moves further out, the “burden of knowledge” grows: each researcher now has to study and learn more in order to even get to the frontier. Doing so might force them to specialize even further. Newton could make major contributions to fields as diverse as gravitation and optics, because the very basics of those fields were still being figured out; today, a researcher might devote their whole career to a sub-sub-discipline such as nuclear astrophysics. But in the long run, an exponentially growing base of researchers is impossible without an exponentially growing population. In fact, in some models of economic growth, the long-run growth rate in per-capita GDP is directly proportional to the growth rate of the population. (A toy model at the end of this piece illustrates this.) More options Even setting aside growth and progress—looking at a static snapshot of a society—a world with more people is a world with more choices, among greater variety: Better matching for aesthetics, style, and taste. A bigger society has more cuisines, more architectural styles, more types of fashion, more sub-genres of entertainment. This also improves as the world gets more connected: for instance, the wide variety of ethnic restaurants in every major city is a recent phenomenon; it was only decades ago that pizza, to Americans, was an unfamiliar foreign cuisine. Better matching to careers. A bigger economy has more options for what to do with your life.
In a hunter-gatherer society, you are lucky if you get to decide whether to be a hunter or a gatherer. In an agricultural economy, you’re probably going to be a farmer, or maybe some sort of artisan. Today there’s a much wider set of choices, from pilot to spreadsheet jockey to lab technician. Better matching to other people. A bigger world gives you a greater chance to find the perfect partner for you: the best co-founder for your business, the best lyricist for your songs, the best partner in marriage. More niche communities. Whatever your quirky interest, worldview, or aesthetic—the more people you can be in touch with, the more likely you are to find others like you. Even if you’re one in a million, in a city of ten million people, there are enough of you for a small club. In a world of eight billion, there are enough of you for a thriving subreddit. More niche markets. Similarly, in a larger, more connected economy, there are more people to economically support your quirky interests. Your favorite Etsy or Patreon creator can find the “one thousand true fans” they need to make a living. Deeper patterns When I look at the above, here are some of the underlying reasons: The existence of non-rival goods. Rival goods need to be divided up; more people just create more competition for them. But non-rival goods can be shared by all. A larger population and economy, all else being equal, will produce more non-rival goods, which benefits everyone. Economies of scale. In particular, often total costs are a combination of fixed and variable costs. The more output, the more the fixed costs can be amortized, lowering average cost. Network effects and Metcalfe’s law. Value in a network is generated not by nodes but by connections, and the more nodes there are total, the more connections are possible per node. Metcalfe’s law quantifies this: the number of possible connections in a network is proportional to the square of the number of nodes. All of these create agglomeration effects: bigger societies are better for everyone. A dynamic world I assume that when Ehrlich and Goodall advocate for much smaller populations, they aren’t literally calling for genocide or hoping for a global catastrophe (although Ehrlich is happy with coercive fertility control programs, and other anti-humanists have expressed hope for “the right virus to come along”). Even so, the world they advocate is a greatly impoverished and stagnant one: a world with fewer discoveries, fewer inventions, fewer works of creative genius, fewer cures for fewer diseases, fewer choices, fewer soulmates. A world with a large and growing population is a dynamic world that can create and sustain progress. For a different angle on the same thesis, see “Forget About Overpopulation, Soon There Will Be Too Few Humans,” by Roots of Progress fellow Maarten Boudry.
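
Here is the toy model promised above: a minimal sketch (mine, not from the essay) of a Jones-style "semi-endogenous" growth model, in which ideas get harder to find over time. All parameter values are invented for illustration; the point is the qualitative behavior, not the numbers.

```python
# Idea-production function dA/dt = theta * R * A**phi, where A is the
# stock of ideas and R the number of researchers. With phi < 1 (ideas
# get harder to find), constant growth in A requires exponentially
# growing R. Parameters here are illustrative assumptions.

def growth_rates(researchers, theta=1e-4, phi=0.5, A0=1.0, years=200):
    """Yearly growth rates of the idea stock, given researchers(t)."""
    A, rates = A0, []
    for t in range(years):
        dA = theta * researchers(t) * A**phi
        rates.append(dA / A)
        A += dA
    return rates

flat = growth_rates(lambda t: 1000.0)               # fixed researcher base
growing = growth_rates(lambda t: 1000.0 * 1.02**t)  # researchers grow 2%/yr

# With a flat base, growth keeps decaying toward zero; with a growing
# base, it settles near the balanced rate n/(1-phi) = 2%/0.5 = 4%/yr.
print(f"flat:    year 10 {flat[10]:.1%}, year 200 {flat[-1]:.1%}")
print(f"growing: year 10 {growing[10]:.1%}, year 200 {growing[-1]:.1%}")
```

Under these assumptions, a constant researcher base yields growth that slowly dies out, while a researcher base growing 2% per year sustains a constant roughly 4% growth rate indefinitely, which is the "exponentially growing base of researchers" claim in miniature.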

a year ago 56 votes
Event, Feb 29: “Towards a New Philosophy of Progress” in Boston and on Zoom

On Thursday, February 29, I’ll be giving my talk “Towards a New Philosophy of Progress” to the New England Legal Foundation, for their Economic Liberty Speaker Series. The talk will be held over breakfast at NELF’s offices in Boston, and will also be livestreamed over Zoom. See details and register here. This is a talk I have given before in other venues. The description: Enlightenment thinkers were tremendously optimistic about the potential for human progress: not only in science and technology, but also in morality and society. This belief lasted through the 19th century—but in the 20th century, after the World Wars, it gave way to fear, skepticism, and distrust. Now, in the 21st century, we need a new way forward: a new philosophy of progress. What events and ideas challenged the concept of progress? How can we restore it on a sound foundation? And how can we establish a bold, ambitious vision for the future?

a year ago 91 votes
Making every researcher seek grants is a broken model

When Galileo wanted to study the heavens through his telescope, he got money from those legendary patrons of the Renaissance, the Medici. To win their favor, when he discovered the moons of Jupiter, he named them the Medicean Stars. Other scientists and inventors offered flashy gifts, such as Cornelis Drebbel’s perpetuum mobile (a sort of astronomical clock) given to King James, who made Drebbel court engineer in return. The other way to do research in those days was to be independently wealthy: the Victorian model of the gentleman scientist. Eventually we decided that requiring researchers to seek wealthy patrons or have independent means was not the best way to do science. Today, researchers, in their role as “principal investigators” (PIs), apply to science funders for grants. In the US, the NIH spends nearly $48B annually, and the NSF over $11B, mainly to give such grants. Compared to the Renaissance, it is a rational, objective, democratic system. However, I have come to believe that this principal investigator model is deeply broken and needs to be replaced. That was the thought at the top of my mind coming out of a working group on “Accelerating Science” hosted by the Santa Fe Institute a few months ago. (The thoughts in this essay were inspired by many of the participants, but I take responsibility for any opinions expressed here. My thinking on this was also influenced by a talk given by James Phillips at a previous metascience conference. My own talk at the workshop was written up here earlier.) What should we do instead of the PI model? Funding should go in a single block to a relatively large research organization of, say, hundreds of scientists. This is how some of the most effective, transformative labs in the world have been organized, from Bell Labs to the MRC Laboratory of Molecular Biology. It has been referred to as the “block funding” model. Here’s why I think this model works: Specialization A principal investigator has to play multiple roles. They have to do science (researcher), recruit and manage grad students or research assistants (manager), maintain a lab budget (administrator), and write grants (fundraiser). These are different roles, and not everyone has the skill or inclination to do them all. The university model adds teaching, a fifth role. The block organization allows for specialization: researchers can focus on research, managers can manage, and one leader can fundraise for the whole org. This allows each person to do what they are best at and enjoy, and it frees researchers from spending 30–50% of their time writing grants, as is typical for PIs. I suspect it also creates more of an opportunity for leadership in research. Research leadership involves having a vision for an area to explore that will be highly fruitful—semiconductors, molecular biology, etc.—and then recruiting talent and resources to the cause. This seems more effective when done at the block level. Side note: the distinction I’m talking about here, between block funding and PI funding, doesn’t say anything about where the funding comes from or how those decisions are made. But today, researchers are often asked to serve on committees that evaluate grants. Making funding decisions is yet another role we add to researchers, and one that also deserves to be its own specialty (especially since having researchers evaluate their own competitors sets up an inherent conflict of interest).
Research freedom and time horizons There’s nothing inherent to the PI grant model that dictates the size of the grant, the scope of activities it covers, the length of time it is for, or the degree of freedom it allows the researcher. But in practice, PI funding has evolved toward small grants for incremental work, with little freedom for the researcher to change their plans or strategy. I suspect the block funding model naturally lends itself to larger grants for longer time periods that are more at the vision level. When you’re funding a whole department, you’re funding a mission and placing trust in the leadership of the organization. Also, breakthroughs are unpredictable, but the more people you have working on things, the more regularly they will happen. A lab can justify itself more easily with regular achievements. In this way one person’s accomplishment provides cover to those who are still toiling away. Who evaluates researchers In the PI model, grant applications are evaluated by funding agencies: in effect, each researcher is evaluated by the external world. In the block model, a researcher is evaluated by their manager and their peers. James Phillips illustrates this with a diagram. A manager who knows the researcher well, who has been following their work closely, and who talks to them about it regularly, can simply make better judgments about who is doing good work and whose programs have potential. (And again, developing good judgment about researchers and their potential is a specialized role—see point 1). Further, when a researcher is evaluated impersonally by an external agency, they need to write up their work formally, which adds overhead to the process. They need to explain and justify their plans, which leads to more conservative proposals. They need to show outcomes regularly, which leads to more incremental work. And funding will disproportionately flow to people who are good at fundraising (which, again, deserves to be a specialized role). To get scientific breakthroughs, we want to allow talented, dedicated people to pursue hunches for long periods of time. This means we need to trust the process, long before we see the outcome. Several participants in the workshop echoed this theme of trust. Trust like that is much stronger when based on a working relationship, rather than simply on a grant proposal. If the block model is a superior alternative, how do we move towards it? I don’t have a blueprint. I doubt that existing labs will transform themselves into this model. But funders could signal their interest in funding labs like this, and new labs could be created or proposed on this model and seek such funding. I think the first step is spreading this idea.

a year ago 66 votes
Cellular reprogramming, pneumatic launch systems, and terraforming Mars

In December, I went to the Foresight Institute’s Vision Weekend 2023 in San Francisco. I had a lot of fun talking to a bunch of weird and ambitious geeks about the glorious abundant technological future. Here are a few things I learned about (with the caveat that this is mostly based on informal conversations with only basic fact-checking, not deep research): Cellular reprogramming Aging doesn’t only happen to your body: it happens at the level of individual cells. Over time, cells accumulate waste products and undergo epigenetic changes that are markers of aging. But wait—when a baby is born, it has young cells, even though it grew out of cells that were originally from its older parents. That is, the egg and sperm cells might be 20, 30, or 40 years old, but somehow when they turn into a baby, they get reset to biological age zero. This process is called “reprogramming,” and it happens soon after fertilization. It turns out that cell reprogramming can be induced by certain proteins, known as the Yamanaka factors after their discoverer (who won a Nobel for this in 2012). Could we use those proteins to reprogram our own cells, making them youthful again? Maybe. There is a catch: the Yamanaka factors not only clear waste out of cells, they also reset them to become stem cells. You do not want to turn every cell in your body into a stem cell. You don’t even want to turn a small number of them into stem cells: it can give you cancer (which kind of defeats the purpose of a longevity technology). But there is good news: when you expose cells to the Yamanaka factors, the waste cleanup happens first, and the stem cell transformation happens later. If we can carefully time the exposure, maybe we can get the desired effect without the damaging side effects. This is tricky: different tissues respond on different timelines, so you can’t apply the treatment uniformly over the body. There are a lot of details to be worked out here. But it’s an intriguing line of research for longevity, and it’s one of the avenues being explored at Retro Bio, among other places. Here’s a Derek Lowe article with more info and references. The BFG orbital launch system If we’re ever going to have a space economy, it has to be a lot cheaper to launch things into space. Space Shuttle launches cost over $65,000/kg, and even the Falcon Heavy costs $1500/kg. Compare to shipping costs on Earth, which are only a few dollars per kilogram. A big part of the high launch cost in traditional systems is the rocket, which is discarded with each launch. SpaceX is bringing costs down by making reusable rockets that land gently rather than crashing into the ocean, and by making very big rockets for economies of scale (Elon Musk has speculated that Starship could bring costs as low as $10/kg, although this is a ways off, since right now fuel costs alone are close to that amount). But what if we didn’t need a rocket at all? Rockets are pretty much our only option for propulsion in space, but what if we could give most of the impulse to the payload on Earth? J. Storrs Hall has proposed the “space pier,” a runway 300 km long mounted atop towers 100 km tall. The payload takes an elevator 100 km up to the top of the tower, thus exiting the atmosphere and getting a head start on Earth’s gravity well. Then a linear induction motor accelerates it into orbit along the 300 km track. You could do this with a mere 10 Gs of acceleration, which is survivable by human passengers.
Think of it like a Big Friendly Giant (BFG) picking up your payload and then throwing it into orbit. Hall estimates that this could bring launch costs down to $10/kg, if the pier could be built for a mere $10 billion. The only tiny little catch with the space pier is that there is no technology in existence that could build it, and no construction material that a 100 km tower could be made of. Hall suggests that with “mature nanotechnology” we could build the towers out of diamond. OK. So, probably not going to happen this decade. What can we do now, with today’s technology? Let’s drop the idea of using this for human passengers and just consider relatively durable freight. Now we can use much higher G-forces, which means we don’t need anything close to 300 km of distance to accelerate over. And, does it really have to be 100 km tall? Yes, it’s nice to start with an altitude advantage, and with no atmosphere, but both of those problems can be overcome with sufficient initial velocity. At this point we’re basically just talking about an enormous cannon (a very different kind of BFG). This is what Longshot Space is doing. Build a big long tube in the desert. Put the payload in it, seal the end with a thin membrane, and pump the air out to create a vacuum. Then rapidly release some compressed gasses behind the payload, which bursts through the membrane and exits the tube at Mach 25. One challenge with this is that a gas can only expand as fast as the speed of sound in that gas. In air this is, of course, a lot less than Mach 25. One thing that helps is to use a lighter gas, in which the speed of sound is higher, such as helium or (for the very brave) hydrogen. Another part of the solution is to give the payload a long, wedge-shaped tail. The expanding gasses push sideways on this tail, which through the magic of simple machines translates into a much faster push forwards. There’s a brief discussion and illustration of the pneumatics in this video. Now, if you are trying to envision “big long tube in the desert”, you might be wondering: is the tube angled upwards or something? No. It is basically lying flat on the ground. It is expensive to build a long straight thing that points up: you have to dig a deep hole and/or build a tall tower. What about putting it on the side of a mountain, which naturally points up? Building things on mountains is also hard; in addition, mountains are special and nobody wants to give you one. It’s much easier to haul lots of materials into the middle of the desert; also there is lots of room out there and the real estate is cheap. Next you might be wondering: if the tube is horizontal, isn’t it pointed in the wrong direction to get to space? I thought space was up? Well, yes. There are a few things going on here. One is that if you travel far enough in a straight line, the Earth will curve away from you and you will eventually find yourself in space. Another is that if you shape the projectile such that its center of pressure is in the right place relative to its center of mass, then it will naturally angle upward when it hits the atmosphere. Lastly, if you are trying to get into orbit, most of the velocity you need is actually horizontal anyway. In fact, if and when you reach a circular orbit, you will find that all of your velocity is horizontal. This means that there is no way to get into orbit purely ballistically, with a single impulse imparted from Earth. 
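
A bit of back-of-envelope arithmetic makes the velocity claims above concrete. These are my own numbers (standard values for Earth; Mach 25 taken at the sea-level speed of sound), not figures from the post:

```python
import math

MU_EARTH = 3.986e14   # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6     # mean Earth radius, m

def circular_orbit_speed(altitude_m):
    """Speed of a circular orbit at the given altitude: v = sqrt(mu/r)."""
    return math.sqrt(MU_EARTH / (R_EARTH + altitude_m))

print(f"circular orbit at 250 km: {circular_orbit_speed(250e3)/1000:.1f} km/s")  # ~7.8

# Space pier: 10 g along a 300 km track gives v = sqrt(2*a*s),
# which is already close to orbital speed.
print(f"10 g over 300 km: {math.sqrt(2 * 10 * 9.81 * 300e3)/1000:.1f} km/s")     # ~7.7

# Longshot: Mach 25 at ~340 m/s sound speed, leaving some margin
# for drag and gravity losses on the way up.
print(f"Mach 25: {25 * 340 / 1000:.1f} km/s")                                    # ~8.5
```

So a Mach-25 muzzle velocity is in the right ballpark for the roughly 7.8 km/s of horizontal speed that low orbit requires, which is why only a small circularization burn remains.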
Any satellite, for instance, launched via this system will need its own rocket propulsion in order to circularize the orbit once it reaches altitude (even leaving aside continual orbital adjustments during its service lifetime). But we’re now talking about a relatively small rocket with a small amount of fuel, not the big multi-stage things that you need to blast off from the surface. And presumably someday we will be delivering food, fuel, tools, etc. to space in packages that just need to be caught by whoever is receiving them. Longshot estimates that this system, like Starship or the space pier, could get launch costs down to about $10/kg. This might be cheap enough that launch prices could be zero, subsidized by contracts to buy fuel or maintenance, in a space-age version of “give away the razor and sell the blades.” Not only would this business model help grow the space economy, it would also prove wrong all the economists who have been telling us for decades that “there’s no such thing as a free launch.” Mars could be terraformed in our lifetimes Terraforming a planet sounds like a geological process, and so I had sort of thought that it would require geological timescales, or if it could really be accelerated, at least a matter of centuries or so. You drop off some algae or something on a rocky planet, and then your distant descendants return one day to find a verdant paradise. So I was surprised to learn that major changes on Mars could, in principle, be made on a schedule much shorter than a single human lifespan. Let’s back up. Mars is a real fixer-upper of a planet. Its temperature varies widely, averaging about −60º C; its atmosphere is thin and mostly carbon dioxide. This severely depresses its real estate values. Suppose we wanted to start by significantly warming the planet. How do you do that? Let’s assume Mars’s orbit cannot be changed—I mean, we’re going to get in enough trouble with the Sierra Club as it is—so the total flux of solar energy reaching the planet is constant. What we can do is to trap a bit more of that energy on the planet, and prevent it from radiating out into space. In other words, we need to enhance Mars’s greenhouse effect. And the way to do that is to give it a greenhouse gas. Wait, we just said that Mars’s atmosphere is mostly CO2, which is a notorious greenhouse gas, so why isn’t Mars warm already? It’s just not enough: the atmosphere is very thin (less than 1% of the pressure of Earth’s atmosphere), and what CO2 there is only provides about 5º of warming. We’re going to need to add more GHG. What could it be? Well, for starters, given the volumes required, it should be composed of elements that already exist on Mars. With the ingredients we have, what can we make? Could we get more CO2 in the atmosphere? There is more CO2 on/under the surface, in frozen form, but even that is not enough for the task. We need something else. What about CFCs? As a greenhouse gas, they are about four orders of magnitude more efficient than CO2, so we’d need a lot less of them. However, they require fluorine, which is very rare in the Martian soil, and we’d still need about 100 gigatons of it. This is not encouraging. One thing Mars does have a good amount of is metal, such as iron, aluminum, and magnesium. Now metals, you might be thinking, are not generally known as greenhouse gases. But small particles of conductive metal, with the right size and shape, can act as one. 
A recent paper found through simulation that “nanorods” about 9 microns long, half the wavelength of the infrared thermal radiation given off by a planet, would scatter that radiation back to the surface (Ansari, Kite, Ramirez, Steele, and Mohseni, “Warming Mars with artificial aerosol appears to be feasible”—no preprint online, but this poster seems to represent earlier work). Suppose we aim to warm the planet by about 30º C, enough to melt surface water in the polar regions during the summer, and bring Mars much closer to Earth temperatures. AKRSM’s simulation says that we would need to put about 400 mg/m³ of nanorods into the Martian sky, an efficiency (in warming per unit mass) more than 2000x greater than previously proposed methods. The particles would settle out of the atmosphere slowly, at less than 1/100 the rate of natural Mars dust, so only about 30 liters/sec of them would need to be released continuously. If we used iron, this would require mining a million cubic meters of iron per year—quite a lot, but less than 1% of what we do on Earth. And the particles, like other Martian dust, would be lifted high in the atmosphere by updrafts, so they could be conveniently released from close to the surface. Wouldn’t metal nanoparticles be potentially hazardous to breathe? Yes, but this is already a problem from Mars’s naturally dusty atmosphere, and the nanorods wouldn’t make it significantly worse. (However, this will have to be solved somehow if we’re going to make Mars habitable.) Kite told me that if we started now, given the capabilities of Starship, we could achieve the warming in a mere twenty years. Most of that time is just getting equipment to Mars, mining the iron, manufacturing the nanorods, and then waiting about a year for Martian winds to mix them throughout the atmosphere. Since Mars has no oceans to provide thermal inertia, the actual warming after that point only takes about a month. Kite is interested in talking to people about the design of the nanorod factory. He wants to get a size/weight/power estimate and an outline design for the factory, to make an initial estimate of how many Starship landings would be needed. Contact him at edwin.kite@gmail.com. I have not yet gotten Kite and Longshot together to figure out if we can shoot the equipment directly to Mars using one really enormous space cannon. Thanks to Reason, Mike Grace, and Edwin Kite for conversations and for commenting on a draft of this essay. Any errors or omissions above are entirely my own.
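
As a quick sanity check on the release-rate arithmetic above (my own arithmetic; the input figures are the ones quoted from the paper):

```python
# Does 30 liters/sec of nanorods really match "a million cubic
# meters of iron per year"?

SECONDS_PER_YEAR = 3.156e7

release_rate_l_per_s = 30
m3_per_year = release_rate_l_per_s * SECONDS_PER_YEAR / 1000  # liters -> m^3

print(f"annual nanorod volume: {m3_per_year:.2e} m^3/yr")
# ~9.5e5 m^3/yr, i.e. roughly a million cubic meters, consistent
# with the mining figure if the rods are made of iron.
```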

a year ago 30 votes

More in science

The Hidden Engineering of Liquid Dampers in Skyscrapers

[Note that this article is a transcript of the video embedded above.] There’s a new trend in high-rise building design. Maybe you’ve seen this in your city. The best lots are all taken, so developers are stretching the limits to make use of space that isn’t always ideal for skyscrapers. They’re not necessarily taller than buildings of the past, but they are a lot more slender. “Pencil tower” is the term generally used to describe buildings that have a slenderness ratio of more than around 10 to 1, height to width. A lot of popular discussion around skyscrapers is about how tall we can build them. Eventually, you can get so tall that there are no materials strong enough to support the weight. But, pencil towers are the perfect case study in why strength isn’t the only design criterion used in structural engineering. Of course, we don’t want our buildings to fall down, but there’s other stuff we don’t want them to do, too, including flex and sway in the wind. In engineering, this concept is called the serviceability limit state, and it’s an entirely separate consideration from strength. Even if moderate loads don’t cause a structure to fail, the movement they cause can lead to windows breaking, tiles cracking, accelerated fatigue of the structure, and, of course, people on the top floors losing their lunch from disorientation and discomfort. So, limiting wind-induced motions is a major part of high-rise design and, in fact, can be such a driving factor in the engineering of the building that strength is a secondary consideration. Making a building stiffer is the obvious solution. But adding stiffness requires larger columns and beams, and those subtract valuable space within the building itself. Another option is to augment a building’s aerodynamic performance, reducing the loads that winds impose. But that too can compromise the expensive floorspace within. So many engineers are relying on another creative way to limit the vibrations of tall buildings. And of course, I built a model in the garage to show you how this works. I’m Grady, and this is Practical Engineering. One of the very first topics I ever covered on this channel was tuned mass dampers. These are mechanisms that use a large, solid mass to counteract motion in all kinds of structures, dissipating the energy through friction or hydraulics, like the shock absorbers in vehicles. Probably the most famous of these is in the Taipei 101 building. At the top of the tower is a massive steel pendulum, and instead of hiding it away in a mechanical floor, they opened it to visitors, even giving the damper its own mascot. But, mass dampers have a major limitation because of those mechanical parts. The complex springs, dampers, and bearings need regular maintenance, and they are custom-built. That gets pretty expensive. So, what if we could simplify the device? This is my garage-built high-rise. It’s not going to hold many conference room meetings, but it does do a good job swaying from side to side, just like an actual skyscraper. And I built a little tank to go on top here. The technical name for this tank is a tuned liquid column damper, and I can show you how it works. Let’s try it with no water first. Using my digitally calibrated finger, I push the tower over by a prescribed distance, and you can see this would not be a very fun ride. There is some natural damping, but the oscillation goes on for quite a while before the motion stops. Now, let’s put some water in the tank. 
With the power of movie magic, I can put these side by side so you can really get a sense of the difference. By the way, nearly all of the parts for this demonstration were provided by my friends at Send-Cut-Send. I don’t have a milling machine or laser cutter, so this is a really nice option for getting customized parts made from basically any material - aluminum, steel, acrylic - that are ready to assemble. Instead of complex mechanical devices, liquid column dampers dissipate energy through the movement of water. The liquid in the tank is both the mass and the damper. This works like a pendulum where the fluid oscillates between two columns. Normally, there’s an orifice between the two columns that creates the damping through friction loss as water flows from one side to the other. To make this demo a little simpler, I just put lids on the columns with small holes. I actually bought a fancy air valve to make this adjustable, but it didn’t allow quite enough airflow. So instead, I simplified with a piece of tape. Very technical. Energy transferred to the water through the building is dissipated by the friction of the air as it moves in and out of the columns. And you can even hear this as it happens. Any supplemental damping system starts with a design criterion. This varies around the world, but in the US, this is probability-based. We generally require that peak accelerations with a 1-in-10 chance of being exceeded in a given year be limited to 15-18 milli-gs in residential buildings and 20-25 milli-gs in offices. For reference, the lateral acceleration for highway curve design is usually capped at 100 milli-gs, so the design criteria for buildings is between a fourth and a sixth of that. I think that makes intuitive sense. You don’t want to feel like you’re navigating a highway curve while you sit at your desk at work. It’s helpful to think of these systems in a simplified way. This is the most basic representation: a spring, a damper, and mass on a cart. We know the mass of the building. We can estimate its stiffness. And the building itself has some intrinsic damping, but usually not much. If we add the damping system onto the cart, it’s basically just the same thing at a smaller scale, and the design process is really just choosing the mass and damping systems for the remaining pieces of this puzzle to achieve the design goal. The mass of liquid dampers is usually somewhere between half a percent to two percent of the building’s total weight. The damping is related to the water’s ability to dissipate energy. And the spring needs to be tuned to the building. All buildings vibrate at a natural frequency related to their height and stiffness. Think of it like a big tuning fork full of offices or condos. I can estimate my model’s natural frequency by timing the number of oscillations in a given time interval. It’s about 1.3 hertz or cycles per second. In an ideal tuned damper, the oscillation of the damping system matches that of the building. So tuning the frequency of the damper is an important piece of the puzzle. For a tuned liquid column damper, the tuning mostly comes from the length of the liquid flow path. A longer path results in a lower frequency. The compression of the air above the column in my demo affects this too, and some types of dampers actually take advantage of that phenomenon. I got the best tuning when the liquid level was about halfway up the columns. 
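
For the curious, the usual linearized result for a tuned liquid column damper is that the liquid oscillates with natural frequency ω = √(2g/L), where L is the total length of the liquid path. The video doesn't state this formula, so treat the sketch below as illustrative (it also ignores the air-spring effect of the taped lids, which shifts the frequency in the demo):

```python
import math

g = 9.81  # m/s^2

def tlcd_frequency_hz(column_length_m):
    """Textbook TLCD natural frequency: omega = sqrt(2g / L)."""
    return math.sqrt(2 * g / column_length_m) / (2 * math.pi)

def column_length_for(target_hz):
    """Invert the formula: L = 2g / omega^2."""
    omega = 2 * math.pi * target_hz
    return 2 * g / omega**2

# To match a model building that sways at about 1.3 Hz, the liquid path
# (down one column, across, and up the other) would need to be roughly:
print(f"{column_length_for(1.3):.2f} m")   # ~0.29 m
```

A 30 cm or so liquid path is plausible for a tabletop model, and the inverse-square relationship shows why stiff, high-frequency buildings are hard to serve with this device: the required column gets very short.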
The orifice has less of an effect on frequency and is used mostly to balance the amount of damping versus the volume of liquid that flows through each cycle. In my model, with one of the holes completely closed off, you can see the water doesn’t move, and you get minimal damping. With the tape mostly covering the hole, you get the most frictional loss, but not all the fluid flows from one side to the other each cycle. When I covered about half of one hole, I got the full fluid flow and the best damping performance. The benefit of a tuned column damper is that it doesn’t take up a lot of space. And because the fluid movement is confined, they’re fairly predictable in behavior. So, these are used in quite a few skyscrapers, including the Random House Tower in Manhattan, One Wall Center in Vancouver (which actually has many walls), and Comcast Center in Philadelphia. But, tuned liquid column dampers have a few downsides. One is that they really only work for flexible structures, like my demo. Just like in a pendulum, the longer the flow path in a column damper, the lower the frequency of the oscillation. For stiffer buildings with higher natural frequencies, tuning requires a very short liquid column, which limits the mass and damping capability to a point where you don’t get much benefit. The other thing is that this is still kind of a complex device with intricate shapes and a custom orifice between the two columns. So, we can get even simpler. This is my model tuned sloshing damper, and it’s about as simple as a damper can get. I put a weight inside the empty tank to make a fair comparison, and we can put it side by side with water in the tank to see how it works. As you can see, sloshing dampers dissipate energy by… sloshing. Again, the water is both the mass and the damper. If you tune it just right, the sloshing happens perfectly out of phase of the motion of the building, reducing the magnitude of the movement and acceleration. And you can see why this might be a little cheaper to build - it’s basically just a swimming pool - four concrete walls, a floor, and some water. There’s just not that much to it. But the simplicity of construction hides the complexity of design. Like a column damper, the frequency of a sloshing damper can be tuned, first by the length of the tank. Just like fretting a guitar string further down the neck makes the note lower, a tank works the same way. As the tank gets longer, its sloshing frequency goes down. That makes sense - it takes longer for the wave to get from one side to the other. But you can also adjust the depth. Waves move slower in shallower water and faster in deeper water. Watch what happens when I overfill the tank. The initial wave starts on the left as the building goes right. It reaches the right side just as the building starts moving left. That’s what we want; it’s counteracting the motion. But then it makes it back to the left before the building starts moving right. It’s actually kind of amplifying the motion, like pushing a kid on a swing. Pretty soon after that, the wave and the building start moving in phase, so there’s pretty much no damping at all. Compare it to the more properly tuned example where most of the wave motion is counteracting the building motion as it sways back and forth. You can see in my demo that a lot of the energy dissipation comes from the breaking waves as they crash against the sides of the tank. That is a pretty complicated phenomenon to predict, and it’s highly dependent on how big the waves are.
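
Linear wave theory gives a formula for the first sloshing mode of a rectangular tank that captures both of these tuning knobs. The formula is standard, but the example dimensions below are my own illustration, not numbers from the video:

```python
import math

g = 9.81

def slosh_frequency_hz(length_m, depth_m):
    """First sloshing mode of a rectangular tank:
    omega^2 = (pi*g/L) * tanh(pi*h/L), L = tank length, h = water depth."""
    omega2 = (math.pi * g / length_m) * math.tanh(math.pi * depth_m / length_m)
    return math.sqrt(omega2) / (2 * math.pi)

# Longer tank -> lower frequency (the "fretting a guitar string" effect):
print(f"{slosh_frequency_hz(0.30, 0.05):.2f} Hz")   # ~1.1 Hz
print(f"{slosh_frequency_hz(0.60, 0.05):.2f} Hz")   # ~0.6 Hz

# Deeper water -> faster waves -> higher frequency (up to a limit):
print(f"{slosh_frequency_hz(0.30, 0.10):.2f} Hz")   # ~1.4 Hz
```

Note that a 30 cm tank lands in the 1.1 to 1.4 Hz range depending on depth, which brackets the model building's roughly 1.3 Hz sway, so adjusting the water level alone can tune the damper, exactly as in the demo.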
And even with the level pretty well tuned to the frequency of the building, you can see there’s a lot of complexity in the motion with multiple modes of waves, and not all of them acting against the motion of the building. So, instead of relying on breaking waves, most sloshing dampers use flow obstructions like screens, columns, or baffles. I got a few different options cut out of acrylic so we can try this out. These baffles add drag, increasing the energy dissipation with the water, usually without changing the sloshing frequency. Here’s a side-by-side comparison of the performance without a baffle and with one. You can see that the improvement is pretty dramatic. The motion is more controlled and the behavior is more linear, making this much simpler to predict during the design phase. It’s kind of the best of both worlds since you get damping from the sloshing and the drag of the water passing through the screen. Almost all the motion is stopped in this demo after only three oscillations. I was pretty impressed with this. Here’s all three of the baffle runs side by side. Actually, the one with the smallest holes worked the best in my demo, but deciding the configuration of these baffles is a big challenge in the engineering of these systems because you can’t really just test out a bunch of options at full scale. Devices like this are in service in quite a few high-rise buildings, including Princess Tower in Dubai, and the Museum Tower in Dallas. With no moving parts and very little maintenance except occasionally topping it off to keep the water at the correct level, you can see how it would be easy to choose a sloshing damper for a new high-rise project. But there are some disadvantages. One is volumetric efficiency. You can see that not all the water in the tank is mobilized, especially for smaller movements, which means not all the water is contributing to the damping. The other is non-linearity. The amount of damping changes depending on the magnitude of the movement since drag is related to velocity squared. And even the frequency of the damper isn’t constant; it can change with the wave amplitude as well because of the breaking waves. So you might get good performance at the design level, but not so much for slower winds. Dampers aren’t just used in buildings. Bridges also take advantage of these clever devices, especially on the decks of pedestrian bridges and the towers of long-span bridges. This also happens at a grand scale between the Earth and moon. Tidal bulges in the oceans created by the moon’s tug on Earth dissipate energy through friction and turbulence, which is a big part of why our planet’s rotation is slowing over time. Days used to be a lot shorter when the Earth was young, but we have a planet-scale liquid damper constantly dissipating our rotational energy. But whether it’s bridges or buildings, these dampers usually don’t work perfectly right at the start. Vibrations are complicated. They’re very hard to predict, even with modern tools like simulation software and scale physical models. So, all dampers have to go through a commissioning process. Usually this involves installing accelerometers once construction is nearing completion to measure the structure’s actual natural frequency. The tuning of tuned dampers doesn’t just happen during the design phase; you want some adjustability after construction to make sure they match the structure’s natural frequency exactly so you get the most damping possible. 
For liquid dampers, that means adjusting the levels in the tanks. And in many cases, buildings might use multiple dampers tuned to slightly different frequencies to improve the performance over a range of conditions. Even in these two basic categories, there is a huge amount of variability and a lot of ongoing research to minimize the tradeoffs these systems come with. The truth is that, relatively speaking, there aren’t that many of these systems in use around the world. Each one is highly customized, and even putting them into categories can get a little tricky. There are even actively controlled liquid dampers. My tuning for the column damper works best for a single magnitude of motion, but you can see that once the swaying gets smaller, the damper isn’t doing a lot to curb it. You can imagine if I constantly adjusted the size of the orifice, I could get better performance over a broader range of unwanted motion. You can do this electronically by having sensors feed into a control system that adjusts a valve position in real-time. Active systems and just the flexibility to tune a damper in general also help deal with changes over time. If a building’s use changes, if new skyscrapers nearby change the wind conditions, or if it gets retrofits that change its natural frequency, the damping system can easily accommodate those changes. In the end, a lot of engineering decisions come down to economics. In most cases, damping is less about safety and more about comfort, which is often harder to pin down. Engineers and building owners face a balancing act between the cost of supplemental damping and the value of the space those systems take up. Tuned mass dampers are kind of household names when it comes to damping. A few buildings like Shanghai Center and Taipei 101 have made them famous. They’re usually the most space-efficient (since steel and concrete are more dense than water). But they’re often more costly to install and maintain. Liquid dampers are the unsung heroes. They take up more space, but they’re simple and cost-effective, especially if the fire codes already require you to have a big tank of water at the top of your building anyway. Maybe someday, an architect will build one out of glass or acrylic, add some blue dye and mica powder, and put it on display as a public showcase. Until then, we’ll just have to know it’s there by feel.
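
To see how much a small, tuned mass can do, here is a toy simulation of the "spring, damper, and mass on a cart" picture from earlier in this piece. All parameters are invented for illustration; real damper design involves wind-tunnel data and much more careful modeling:

```python
import math

# Building (m1, k1, light damping c1) with a small tuned damper
# (m2, k2, c2) riding on top. Numbers chosen so the building sways
# at about 1.3 Hz, like the garage model.
m1, k1, c1 = 1000.0, 66_000.0, 100.0
m2 = 0.02 * m1                      # damper mass: 2% of the building
k2 = m2 * k1 / m1                   # tuned to (roughly) the building
c2 = 2 * 0.1 * math.sqrt(k2 * m2)   # 10% damping ratio in the damper

def simulate(with_damper, dt=0.001, seconds=20.0):
    """Semi-implicit Euler integration; returns the peak |x1|
    (building sway) over the final 5 seconds."""
    x1, v1, x2, v2 = 0.1, 0.0, 0.1, 0.0   # initial 10 cm push
    peak = 0.0
    for i in range(int(seconds / dt)):
        link = (k2 * (x2 - x1) + c2 * (v2 - v1)) if with_damper else 0.0
        a1 = (-k1 * x1 - c1 * v1 + link) / m1
        a2 = -link / m2 if with_damper else 0.0
        v1 += a1 * dt; x1 += v1 * dt
        v2 += a2 * dt; x2 += v2 * dt
        if i * dt > seconds - 5.0:
            peak = max(peak, abs(x1))
    return peak

print(f"no damper, residual sway:    {simulate(False)*100:.2f} cm")
print(f"tuned damper, residual sway: {simulate(True)*100:.2f} cm")
```

Even with only 2% of the building's mass, the tuned absorber knocks the lingering sway down by well over an order of magnitude in this toy setup, which is the basic economic argument for all of these devices.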

3 hours ago 1 vote
London Inches Closer to Running Transit System Entirely on Renewable Power

Under a new agreement, London will source enough solar power to run its light railway and tram networks entirely on renewable energy. Read more on E360 →

11 hours ago 1 vote
Science slowdown - not a simple question

I participated in a program about 15 years ago that looked at science and technology challenges faced by a subset of the US government. I came away thinking that such problems fall into three broad categories:

1. Actual science and engineering challenges, which require foundational research and creativity to solve.
2. Technology that may be fervently desired but is incompatible with the laws of nature, economic reality, or both.
3. Alleged science and engineering problems that are really human/sociology issues.

Part of science and engineering education and training is giving people the skills to recognize which problems belong to which categories. Confusing them can strongly shape the perception of whether science and engineering research is making progress.

There has been a lot of discussion in the last few years about whether scientific progress (however that is measured) has slowed down or stagnated. For example, see:

https://www.theatlantic.com/science/archive/2018/11/diminishing-returns-science/575665/
https://news.uchicago.edu/scientific-progress-slowing-james-evans
https://www.forbes.com/sites/roberthart/2023/01/04/where-are-all-the-scientific-breakthroughs-forget-ai-nuclear-fusion-and-mrna-vaccines-advances-in-science-and-tech-have-slowed-major-study-says/
https://theweek.com/science/world-losing-scientific-innovation-research

A lot of the recent talk is prompted by this 2023 study, which argues that despite the world having many more researchers than ever before (behold population growth) and more global investment in research, "disruptive" innovations are somehow coming less often, or are fewer and farther between these days. (Whether this is an accurate assessment is not a simple matter to resolve; more on that below.) There is a whole tech-bro culture that buys into this, however. For example, see this interview from last week in the New York Times with Peter Thiel, which points out that Thiel has been complaining about this for a decade and a half.

On some level, I get it emotionally. The unbounded future spun in a lot of science fiction seems very far away. Where is my flying car? Where is my jet pack? Where is my moon base? Where are my fusion power plants, my antigravity machine, my tractor beams, my faster-than-light drive? Why does the world today somehow not seem that different from the world of 1985, while the world of 1985 seems very different from that of 1945?

Some of the folks who buy into this think that science is deeply broken somehow, that we've screwed something up, because we are not getting the future they think we were "promised". Some of them hold this as an internal justification for dismantling the NSF, the NIH, and a huge swath of the rest of the US research ecosystem. These same people would likely say that I am part of the problem, and that I can't be objective about this because the whole research ecosystem as it currently exists is a groupthink, self-reinforcing spiral of mediocrity.

Science and engineering are inherently human ventures, and I think a lot of these concerns have an emotional component. My take at the moment is this:

1. Genuinely transformational breakthroughs are rare. They often require a combination of novel insights, previously unavailable technological capabilities, and luck. They don't come on a schedule.
2. There is no hard and fast rule that guarantees continuous exponential technological progress. Indeed, in real life, exponential growth regimes never last. The 19th and 20th centuries were special.
3. If we think of research as a quest for understanding, it's inherently hierarchical. Civilizational collapses aside, you can only discover how electricity works once. You can only discover the germ theory of disease, the nature of the immune system, and vaccination once (though in the US we appear to be trying really hard to test that by forgetting everything). You can only discover quantum mechanics once, and doing so doesn't imply an ongoing (infinite?) chain of discoveries of similar magnitude.
4. People are bad at accurately perceiving rare events and their consequences, just as they have serious trouble evaluating risk or telling the difference between correlation and causation. We can't always recognize breakthroughs when they happen. Sure, I don't have a flying car. I do have a device in my pocket that weighs only a few ounces, gives me near-instantaneous access to the sum total of human knowledge, lets me video call people around the world, can monitor aspects of my fitness, and makes it possible for me to watch sweet videos about dogs. The argument that we don't have transformative, enormously disruptive breakthroughs as often as we used to, or as often as we "should", is in my view based quite a bit on perception.

Personally, I think we still have a lot more to learn about the natural world. AI tools will undoubtedly be helpful in making progress in many areas, but I think it is definitely premature to argue that the vast majority of future advances will come from artificial superintelligences, and that we can therefore abandon the strategies that got us the remarkable achievements of the last few decades.

I think some of the loudest complainers about perceived slowing advancement (Thiel, for example) are software people. People who come from the software development world don't always appreciate that physical infrastructure and understanding are hard, and that there are not always clever or even brute-force ways to get to an end goal. Solving foundational problems in molecular biology, quantum information hardware, photonics, or materials is not the same as software development. (The tech folks generally know this on an intellectual level, but I don't think all of them really understand it in their guts. That's why so many of them seem to ignore real-world physical constraints when talking about AI.) Trying to apply software-development-inspired approaches to science and engineering research isn't bad as one component of a many-pronged strategy, but alone it may not give the desired results, as warned in part by this piece in Science this week.

More frequent breakthroughs in our understanding and capabilities would be wonderful. I don't think dynamiting the US research ecosystem is the way to get there, and hoping that we can dismantle everything because AI will somehow herald a new golden age seems premature at best.

yesterday 2 votes
Researchers Uncover Hidden Ingredients Behind AI Creativity

Image generators are designed to mimic their training data, so where does their apparent creativity come from? A recent study suggests that it’s an inevitable by-product of their architecture. The post Researchers Uncover Hidden Ingredients Behind AI Creativity first appeared on Quanta Magazine

yesterday 2 votes
Animals Adapting to Cities

Humans are dramatically changing the environment of the Earth in many ways. Only about 23% of the land surface (excluding Antarctica) is considered to be "wilderness", and this share is rapidly decreasing. What wilderness is left is also mostly in managed conservation areas. Meanwhile, about 3% of the surface is considered urban. I could not find a […] The post Animals Adapting to Cities first appeared on NeuroLogica Blog.

yesterday 2 votes