In one sense, the concept of progress is simple, straightforward, and uncontroversial. In another sense, it contains an entire worldview.

The most basic meaning of “progress” is simply advancement along a path, or more generally from one state to another that is considered more advanced by some standard. (In this sense, progress can be good, neutral, or even bad—e.g., the progress of a disease.) The question is always: advancement along what path, in what direction, by what standard?

Types of progress

“Scientific progress,” “technological progress,” and “economic progress” are relatively straightforward. They are hard to measure, they are multi-dimensional, and we might argue about specific examples—but in general, scientific progress consists of more knowledge, better theories and explanations, a deeper understanding of the universe; technological progress consists of more inventions that work better (more powerfully or reliably or efficiently) and enable us to do more things; economic progress consists of more production, infrastructure, and wealth.

Together, we can call these “material progress”: improvements in our ability to comprehend and to command the material world. Combined with more intangible advances in the level of social organization—institutions, corporations, bureaucracy—these constitute “progress in capabilities”: that is, our ability to do whatever it is we decide on.

True progress

But this form of progress is not an end in itself. True progress is advancement toward the good, toward ultimate values—call this “ultimate progress,” or “progress in outcomes.” Defining this depends on axiology; that is, on our theory of value.

To a humanist, ultimate progress means progress in human well-being: “human progress.” Not everyone agrees on what constitutes well-being, but it certainly includes health, happiness, and life satisfaction. In my opinion, human well-being is not purely material, and not purely hedonic: it also includes “spiritual” values such as knowledge, beauty, love, adventure, and purpose.

The humanist also sees other kinds of progress contributing to human well-being: “moral progress,” such as the decline of violence, the elimination of slavery, and the spread of equal rights for all races and sexes; and more broadly “social progress,” such as the evolution from monarchy to representative democracy, or the spread of education and especially literacy.

Others have different standards. Biologist David Graber called himself a “biocentrist,” by which he meant

… those of us who value wildness for its own sake, not for what value it confers upon mankind. … We are not interested in the utility of a particular species, or free-flowing river, or ecosystem, to mankind. They have intrinsic value, more value—to me—than another human body, or a billion of them. … Human happiness, and certainly human fecundity, are not as important as a wild and healthy planet.

By this standard, virtually all human activity is antithetical to progress: Graber called humans “a cancer… a plague upon ourselves and upon the Earth.” Or for another example, one Lutheran stated that his “primary measure of the goodness of a society is the population share which is a baptized Christian and regularly attending church.”
The idea of progress isn’t completely incompatible with some flavors of environmentalism or of religion (and there are both Christians and environmentalists in the progress movement!), but these examples show that it is possible to focus on a non-human standard, such as God or Nature, to the point where human health and happiness become irrelevant or even diametrically opposed to “progress.”

Unqualified progress

What are we talking about when we refer to “progress” unqualified, as in “the progress of mankind” or “the roots of progress”? “Progress” in this sense is the concept of material progress, social progress, and human progress as a unified whole. It is based on the premise that progress in capabilities really does on the whole lead to progress in outcomes.

This doesn’t mean that all aspects of progress move in lockstep—they don’t. It means that all aspects of progress support each other and over the long term depend on each other; they are intertwined and ultimately inseparable.

Consider, for instance, how Patrick Collison and Tyler Cowen defined the term in their article calling for “progress studies”:

By “progress,” we mean the combination of economic, technological, scientific, cultural, and organizational advancement that has transformed our lives and raised standards of living over the past couple of centuries.

David Deutsch, in The Beginning of Infinity, is even more explicit, saying that progress includes “improvements not only in scientific understanding, but also in technology, political institutions, moral values, art, and every aspect of human welfare.”

Skepticism of this idea of progress is sometimes expressed as: “progress towards what?” The undertone of this question is: “in your focus on material progress, you have lost sight of social and/or human progress.” On the premise that different forms of progress are diverging and even coming into opposition, this is an urgent challenge; on the premise that progress is a unified whole, it is a valuable intellectual question but not a major dilemma.

Historical progress

“Progress” is also an interpretation of history according to which all these forms of progress have, by and large, been happening. In this sense, the study of “progress” is the intersection of axiology and history: given a standard of value, are things getting better?

In Steven Pinker’s book Enlightenment Now: The Case for Reason, Science, Humanism, and Progress, the bulk of the chapters are devoted to documenting this history. Many of the charts in that book were sourced from Our World in Data, which also emphasizes the historical reality of progress.

So-called “progress”

Not everyone agrees with this concept of progress. It depends on an Enlightenment worldview that includes confidence in reason and science, and a humanist morality.

One argument against the idea of progress claims that material progress has not actually led to human well-being. Perhaps the benefits of progress are outweighed by the costs and risks: health hazards, technological unemployment, environmental damage, existential threats, etc. Some downplay or deny the benefits themselves, arguing that material progress doesn’t increase happiness (owing to the hedonic treadmill), that it doesn’t satisfy our spiritual values, or that it degrades our moral character. Rousseau famously asserted that “the progress of the sciences and the arts has added nothing to our true happiness” and that “our souls have become corrupted to the extent that our sciences and our arts have advanced towards perfection.” Others, as mentioned above, argue for a different standard of value altogether, such as nature or God.
(Often these arguments contain some equivocation between whether these things are good in themselves, or whether we should value them because they are good for human well-being over the long term.)

When people start to conclude that progress is not in fact good, they talk about this as no longer “believing in progress.” Historian Carl Becker, writing in the shadow of World War I, said that “the fact of progress is disputed and the doctrine discredited,” and asked: “May we still, in whatever different fashion, believe in the progress of mankind?” In 1991, Christopher Lasch asked:

How does it happen that serious people continue to believe in progress, in the face of massive evidence that might have been expected to refute the idea of progress once and for all?

Those who dispute the idea of progress often avoid the term, or quarantine it in scare quotes: so-called “progress.” When Jeremy Caradonna questioned the concept in The Atlantic, the headline was: “Is ‘Progress’ Good for Humanity?” One of the first court rulings on environmental protection law, in 1971, said that such law represented “the commitment of the Government to control, at long last, the destructive engine of material ‘progress.’” Or consider this from Guns, Germs, and Steel:

… I do not assume that industrialized states are “better” than hunter-gatherer tribes, or that the abandonment of the hunter-gatherer lifestyle for iron-based statehood represents “progress,” or that it has led to an increase in human happiness.

The idea of progress is inherently an idea that progress, overall, is good. If “progress” is destructive, if it does not in fact improve human well-being, then it hardly deserves the name.

Contrast this with the concept of growth. “Growth,” writ large, refers to an increase in the population, the economy, and the scale of human organization and activity. It is not inherently good: everyone agrees that it is happening, but some are against it; some even define themselves by being against it (the “degrowth” movement). No one is against progress; they are only against “progress”: that is, they either believe in it, or deny it.

The most important question in the philosophy of progress, then, is whether the idea of progress is valid—whether “progress” is real.

“Progress” in the 19th century

Before the World Wars, there was an idea of progress that went even beyond what I have defined above, and which contained at least two major errors.

One error was the idea that progress is inevitable. Becker, in the essay quoted above, said that according to “the doctrine of progress,”

the Idea or the Dialectic or Natural Law, functioning through the conscious purposes or the unconscious activities of men, could be counted on to safeguard mankind against future hazards. … At the present moment the world seems indeed out of joint, and it is difficult to believe with any conviction that a power not ourselves … will ever set it right. (Emphasis added.)

The other was the idea that moral progress was so closely connected to material progress that they would always move together. Condorcet believed that prosperity would “naturally dispose men to humanity, to benevolence and to justice,” and that “nature has connected, by a chain which cannot be broken, truth, happiness, and virtue.” The 20th century, with the outbreak of world war and the rise of totalitarianism, proved these ideas disastrously wrong.

“Progress” in the 21st century and beyond

To move forward, we need a wiser, more mature idea of progress.
Progress is not automatic or inevitable. It depends on choice and effort. It is up to us.

Progress is not automatically good. It must be steered. Progress always creates new problems, and they don’t get solved automatically. Solving them requires active focus and effort, and this is a part of progress, too.

Material progress does not automatically lead to moral progress. Technology within an evil social system can do more harm than good. We must commit to improving morality and society along with science, technology, and industry.

With these lessons well learned, we can rescue the idea of progress and carry it forward into the 21st century and beyond.
What is the ideal size of the human population? One common answer is “much smaller.” Paul Ehrlich, co-author of The Population Bomb (1968), has as recently as 2018 promoted the idea that “the world’s optimum population is less than two billion people,” a reduction of the current population by about 75%. And Ehrlich is a piker compared to Jane Goodall, who said that many of our problems would go away “if there was the size of population that there was 500 years ago”—that is, around 500 million people, a reduction of over 90%. This is a static ideal of a “sustainable” population.

Regular readers of this blog can cite many objections to this view. Resources are not static. Historically, as we run out of a resource (whale oil, elephant tusks, seabird guano), we transition to a new technology based on a more abundant resource—and there are basically no major examples of catastrophic resource shortages in the industrial age. The carrying capacity of the planet is not fixed, but a function of technology; and side effects such as pollution or climate change are just more problems to be solved. As long as we can keep coming up with new ideas, growth can continue.

But those are only reasons why a larger population is not a problem. Is there a positive reason to want a larger population? I’m going to argue yes—that the ideal human population is not “much smaller,” but “ever larger.”

Selfish reasons to want more humans

Let me get one thing out of the way up front. One argument for a larger population is based on utilitarianism, specifically the version of it that says that what is good is the sum total of happiness across all humans. If each additional life adds to the cosmic scoreboard of goodness, then it’s obviously better to have more people (unless they are so miserable that their lives are literally not worth living). I’m not going to argue from this premise, in part because I don’t need to and more importantly because I don’t buy it myself. (Among other things, it leads to paradoxes such as the idea that a population of thriving, extremely happy people is not as good as a sufficiently-larger population of people who are just barely happy.)

Instead, I’m going to argue that a larger population is better for every individual—that there are selfish reasons to want more humans. First I’ll give some examples of how this is true, and then I’ll draw out some of the deeper reasons for it.

More geniuses

First, more people means more outliers—more super-intelligent, super-creative, or super-talented people, to produce great art, architecture, music, philosophy, science, and inventions. If genius is defined as one-in-a-million level intelligence, then every billion people means another thousand geniuses—to work on all of the problems and opportunities of humanity, to the benefit of all.

More progress

A larger population means faster scientific, technical, and economic progress, for several reasons:

Total investment. More people means more total R&D: more researchers, and more surplus wealth to invest in it.

Specialization. In the economy generally, the division of labor increases productivity, as each worker can specialize and become expert at their craft (“Smithian growth”). In R&D, each researcher can specialize in their field.
Larger markets support more R&D investment, which lets companies pick off higher-hanging fruit. I’ve given the example of the threshing machine: it was difficult enough to manufacture that it didn’t pay for a local artisan to make them only for their town, but it was profitable to serve a regional market. Alex Tabarrok gives the example of the market for cancer drugs expanding as large countries such as India and China become wealthier. Very high production-value entertainment, such as movies, TV, and games, is possible only because it has mass audiences.

More ambitious projects need a certain critical mass of resources behind them. Ancient Egyptian civilization built a large irrigation system to make the best use of the Nile floodwaters for agriculture, a feat that would not have been possible for a small tribe or chiefdom. The Apollo Program, at its peak in the 1960s, took over 4% of the US federal budget, but 4% would not have been enough if the population and the economy were half the size. If someday humanity takes on a grand project such as a space elevator or a Dyson sphere, it will require an enormous team and an enormous wealth surplus to fund it.

In fact, these factors may represent not only opportunities but requirements for progress. There is evidence that simply to maintain a constant rate of exponential economic growth requires exponentially growing investment in R&D. This investment is partly financial capital, but also partly human capital—that is, we need an exponentially growing base of researchers.

One way to understand this is that if each researcher can push forward a constant “surface area” of the frontier, then as the frontier expands, a larger number of researchers is needed to keep pushing all of it forward. Two hundred years ago, a small number of scientists were enough to investigate electrical and magnetic phenomena; today, millions of scientists and engineers are productively employed working out all of the details and implications of those phenomena, both in the lab and in the electrical, electronics, and computer hardware and software industries.

But it’s not even clear that each researcher can push forward a constant surface area of the frontier. As that frontier moves further out, the “burden of knowledge” grows: each researcher now has to study and learn more in order to even get to the frontier. Doing so might force them to specialize even further. Newton could make major contributions to fields as diverse as gravitation and optics, because the very basics of those fields were still being figured out; today, a researcher might devote their whole career to a sub-sub-discipline such as nuclear astrophysics.

But in the long run, an exponentially growing base of researchers is impossible without an exponentially growing population. In fact, in some models of economic growth, the long-run growth rate in per-capita GDP is directly proportional to the growth rate of the population.

More options

Even setting aside growth and progress—looking at a static snapshot of a society—a world with more people is a world with more choices, among greater variety:

Better matching for aesthetics, style, and taste. A bigger society has more cuisines, more architectural styles, more types of fashion, more sub-genres of entertainment. This also improves as the world gets more connected: for instance, the wide variety of ethnic restaurants in every major city is a recent phenomenon; it was only decades ago that pizza, to Americans, was an unfamiliar foreign cuisine.
Better matching to careers. A bigger economy has more options for what to do with your life. In a hunter-gatherer society, you are lucky if you get to decide whether to be a hunter or a gatherer. In an agricultural economy, you’re probably going to be a farmer, or maybe some sort of artisan. Today there’s a much wider set of choices, from pilot to spreadsheet jockey to lab technician.

Better matching to other people. A bigger world gives you a greater chance to find the perfect partner for you: the best co-founder for your business, the best lyricist for your songs, the best partner in marriage.

More niche communities. Whatever your quirky interest, worldview, or aesthetic—the more people you can be in touch with, the more likely you are to find others like you. Even if you’re one in a million, in a city of ten million people, there are enough of you for a small club. In a world of eight billion, there are enough of you for a thriving subreddit.

More niche markets. Similarly, in a larger, more connected economy, there are more people to economically support your quirky interests. Your favorite Etsy or Patreon creator can find the “one thousand true fans” they need to make a living.

Deeper patterns

When I look at the above, here are some of the underlying reasons:

The existence of non-rival goods. Rival goods need to be divided up; more people just create more competition for them. But non-rival goods can be shared by all. A larger population and economy, all else being equal, will produce more non-rival goods, which benefits everyone.

Economies of scale. In particular, often total costs are a combination of fixed and variable costs. The more output, the more the fixed costs can be amortized, lowering average cost.

Network effects and Metcalfe’s law. Value in a network is generated not by nodes but by connections, and the more nodes there are total, the more connections are possible per node. Metcalfe’s law quantifies this: the number of possible connections in a network is proportional to the square of the number of nodes.

All of these create agglomeration effects: bigger societies are better for everyone.

A dynamic world

I assume that when Ehrlich and Goodall advocate for much smaller populations, they aren’t literally calling for genocide or hoping for a global catastrophe (although Ehrlich is happy with coercive fertility control programs, and other anti-humanists have expressed hope for “the right virus to come along”). Even so, the world they advocate is a greatly impoverished and stagnant one: a world with fewer discoveries, fewer inventions, fewer works of creative genius, fewer cures for fewer diseases, fewer choices, fewer soulmates.

A world with a large and growing population is a dynamic world that can create and sustain progress.

For a different angle on the same thesis, see “Forget About Overpopulation, Soon There Will Be Too Few Humans,” by Roots of Progress fellow Maarten Boudry.
On Thursday, February 29, I’ll be giving my talk “Towards a New Philosophy of Progress” to the New England Legal Foundation, for their Economic Liberty Speaker Series. The talk will be held over breakfast at NELF’s offices in Boston, and will also be livestreamed over Zoom. See details and register here.

This is a talk I have given before in other venues. The description:

Enlightenment thinkers were tremendously optimistic about the potential for human progress: not only in science and technology, but also in morality and society. This belief lasted through the 19th century—but in the 20th century, after the World Wars, it gave way to fear, skepticism, and distrust. Now, in the 21st century, we need a new way forward: a new philosophy of progress. What events and ideas challenged the concept of progress? How can we restore it on a sound foundation? And how can we establish a bold, ambitious vision for the future?
When Galileo wanted to study the heavens through his telescope, he got money from those legendary patrons of the Renaissance, the Medici. To win their favor, when he discovered the moons of Jupiter, he named them the Medicean Stars. Other scientists and inventors offered flashy gifts, such as Cornelis Drebbel’s perpetuum mobile (a sort of astronomical clock) given to King James, who made Drebbel court engineer in return. The other way to do research in those days was to be independently wealthy: the Victorian model of the gentleman scientist.

Eventually we decided that requiring researchers to seek wealthy patrons or have independent means was not the best way to do science. Today, researchers, in their role as “principal investigators” (PIs), apply to science funders for grants. In the US, the NIH spends nearly $48B annually, and the NSF over $11B, mainly to give such grants. Compared to the Renaissance, it is a rational, objective, democratic system.

However, I have come to believe that this principal investigator model is deeply broken and needs to be replaced.

That was the thought at the top of my mind coming out of a working group on “Accelerating Science” hosted by the Santa Fe Institute a few months ago. (The thoughts in this essay were inspired by many of the participants, but I take responsibility for any opinions expressed here. My thinking on this was also influenced by a talk given by James Phillips at a previous metascience conference. My own talk at the workshop was written up here earlier.)

What should we do instead of the PI model? Funding should go in a single block to a relatively large research organization of, say, hundreds of scientists. This is how some of the most effective, transformative labs in the world have been organized, from Bell Labs to the MRC Laboratory of Molecular Biology. It has been referred to as the “block funding” model.

Here’s why I think this model works:

Specialization

A principal investigator has to play multiple roles. They have to do science (researcher), recruit and manage grad students or research assistants (manager), maintain a lab budget (administrator), and write grants (fundraiser). These are different roles, and not everyone has the skill or inclination to do them all. The university model adds teaching, a fifth role.

The block organization allows for specialization: researchers can focus on research, managers can manage, and one leader can fundraise for the whole org. This allows each person to do what they are best at and enjoy, and it frees researchers from spending 30–50% of their time writing grants, as is typical for PIs.

I suspect it also creates more of an opportunity for leadership in research. Research leadership involves having a vision for an area to explore that will be highly fruitful—semiconductors, molecular biology, etc.—and then recruiting talent and resources to the cause. This seems more effective when done at the block level.

Side note: the distinction I’m talking about here, between block funding and PI funding, doesn’t say anything about where the funding comes from or how those decisions are made. But today, researchers are often asked to serve on committees that evaluate grants. Making funding decisions is yet another role we add to researchers, and one that also deserves to be its own specialty (especially since having researchers evaluate their own competitors sets up an inherent conflict of interest).
Research freedom and time horizons

There’s nothing inherent to the PI grant model that dictates the size of the grant, the scope of activities it covers, the length of time it is for, or the degree of freedom it allows the researcher. But in practice, PI funding has evolved toward small grants for incremental work, with little freedom for the researcher to change their plans or strategy.

I suspect the block funding model naturally lends itself to larger grants for longer time periods that are more at the vision level. When you’re funding a whole department, you’re funding a mission and placing trust in the leadership of the organization. Also, breakthroughs are unpredictable, but the more people you have working on things, the more regularly they will happen. A lab can justify itself more easily with regular achievements. In this way one person’s accomplishment provides cover to those who are still toiling away.

Who evaluates researchers

In the PI model, grant applications are evaluated by funding agencies: in effect, each researcher is evaluated by the external world. In the block model, a researcher is evaluated by their manager and their peers. (James Phillips illustrates this with a diagram.)

A manager who knows the researcher well, who has been following their work closely, and who talks to them about it regularly, can simply make better judgments about who is doing good work and whose programs have potential. (And again, developing good judgment about researchers and their potential is a specialized role—see point 1.)

Further, when a researcher is evaluated impersonally by an external agency, they need to write up their work formally, which adds overhead to the process. They need to explain and justify their plans, which leads to more conservative proposals. They need to show outcomes regularly, which leads to more incremental work. And funding will disproportionately flow to people who are good at fundraising (which, again, deserves to be a specialized role).

To get scientific breakthroughs, we want to allow talented, dedicated people to pursue hunches for long periods of time. This means we need to trust the process, long before we see the outcome. Several participants in the workshop echoed this theme of trust. Trust like that is much stronger when based on a working relationship, rather than simply on a grant proposal.

If the block model is a superior alternative, how do we move towards it? I don’t have a blueprint. I doubt that existing labs will transform themselves into this model. But funders could signal their interest in funding labs like this, and new labs could be created or proposed on this model and seek such funding. I think the first step is spreading this idea.
In December, I went to the Foresight Institute’s Vision Weekend 2023 in San Francisco. I had a lot of fun talking to a bunch of weird and ambitious geeks about the glorious abundant technological future. Here are a few things I learned about (with the caveat that this is mostly based on informal conversations with only basic fact-checking, not deep research):

Cellular reprogramming

Aging doesn’t only happen to your body: it happens at the level of individual cells. Over time, cells accumulate waste products and undergo epigenetic changes that are markers of aging. But wait—when a baby is born, it has young cells, even though it grew out of cells that were originally from its older parents. That is, the egg and sperm cells might be 20, 30, or 40 years old, but somehow when they turn into a baby, they get reset to biological age zero. This process is called “reprogramming,” and it happens soon after fertilization.

It turns out that cell reprogramming can be induced by certain proteins, known as the Yamanaka factors, after their discoverer (who won a Nobel for this in 2012). Could we use those proteins to reprogram our own cells, making them youthful again?

Maybe. There is a catch: the Yamanaka factors not only clear waste out of cells, they also reset them to become stem cells. You do not want to turn every cell in your body into a stem cell. You don’t even want to turn a small number of them into stem cells: it can give you cancer (which kind of defeats the purpose of a longevity technology).

But there is good news: when you expose cells to the Yamanaka factors, the waste cleanup happens first, and the stem cell transformation happens later. If we can carefully time the exposure, maybe we can get the target effect without the damaging side effects. This is tricky: different tissues respond on different timelines, so you can’t apply the treatment uniformly over the body. There are a lot of details to be worked out here. But it’s an intriguing line of research for longevity, and it’s one of the avenues being explored at Retro Bio, among other places. Here’s a Derek Lowe article with more info and references.

The BFG orbital launch system

If we’re ever going to have a space economy, it has to be a lot cheaper to launch things into space. Space Shuttle launches cost over $65,000/kg, and even the Falcon Heavy costs $1500/kg. Compare to shipping costs on Earth, which are only a few dollars per kilogram.

A big part of the high launch cost in traditional systems is the rocket, which is discarded with each launch. SpaceX is bringing costs down by making reusable rockets that land gently rather than crashing into the ocean, and by making very big rockets for economies of scale (Elon Musk has speculated that Starship could bring costs as low as $10/kg, although this is a ways off, since right now fuel costs alone are close to that amount). But what if we didn’t need a rocket at all? Rockets are pretty much our only option for propulsion in space, but what if we could give most of the impulse to the payload on Earth?

J. Storrs Hall has proposed the “space pier,” a runway 300 km long mounted atop towers 100 km tall. The payload takes an elevator 100 km up to the top of the tower, thus exiting essentially all of the atmosphere. Then a linear induction motor accelerates it into orbit along the 300 km track. You could do this with a mere 10 Gs of acceleration, which is survivable by human passengers.
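As a quick sanity check on those numbers (my own back-of-envelope arithmetic, not figures from Hall's proposal), constant-acceleration kinematics already explains why the track needs to be that long:

```python
import math

# Back-of-envelope check of the space pier parameters:
# constant 10 g acceleration along a 300 km track.
g = 9.81              # m/s^2
a = 10 * g            # 10 g
track = 300_000       # track length in meters

v_exit = math.sqrt(2 * a * track)   # v^2 = 2*a*d for constant acceleration
t_on_track = v_exit / a             # time spent accelerating

print(f"exit speed ≈ {v_exit / 1000:.1f} km/s")   # ~7.7 km/s, roughly orbital speed
print(f"time on track ≈ {t_on_track:.0f} s")      # ~78 seconds at 10 g
```

Roughly 7.7 km/s is about low-Earth-orbit speed, which is why the pairing of 10 g with 300 km is not arbitrary: shorten the track and either the acceleration or the velocity shortfall has to grow.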
Think of it like a Big Friendly Giant (BFG) picking up your payload and then throwing it into orbit. Hall estimates that this could bring launch costs down to $10/kg, if the pier could be built for a mere $10 billion. The only tiny little catch with the space pier is that there is no technology in existence that could build it, and no construction material that a 100 km tower could be made of. Hall suggests that with “mature nanotechnology” we could build the towers out of diamond. OK. So, probably not going to happen this decade.

What can we do now, with today’s technology? Let’s drop the idea of using this for human passengers and just consider relatively durable freight. Now we can use much higher G-forces, which means we don’t need anything close to 300 km of distance to accelerate over. And, does it really have to be 100 km tall? Yes, it’s nice to start with an altitude advantage, and with no atmosphere, but both of those problems can be overcome with sufficient initial velocity. At this point we’re basically just talking about an enormous cannon (a very different kind of BFG).

This is what Longshot Space is doing. Build a big long tube in the desert. Put the payload in it, seal the end with a thin membrane, and pump the air out to create a vacuum. Then rapidly release some compressed gasses behind the payload, which bursts through the membrane and exits the tube at Mach 25.

One challenge with this is that a gas can only expand as fast as the speed of sound in that gas. In air this is, of course, a lot less than Mach 25. One thing that helps is to use a lighter gas, in which the speed of sound is higher, such as helium or (for the very brave) hydrogen. Another part of the solution is to give the payload a long, wedge-shaped tail. The expanding gasses push sideways on this tail, which through the magic of simple machines translates into a much faster push forwards. There’s a brief discussion and illustration of the pneumatics in this video.

Now, if you are trying to envision “big long tube in the desert”, you might be wondering: is the tube angled upwards or something? No. It is basically lying flat on the ground. It is expensive to build a long straight thing that points up: you have to dig a deep hole and/or build a tall tower. What about putting it on the side of a mountain, which naturally points up? Building things on mountains is also hard; in addition, mountains are special and nobody wants to give you one. It’s much easier to haul lots of materials into the middle of the desert; also there is lots of room out there and the real estate is cheap.

Next you might be wondering: if the tube is horizontal, isn’t it pointed in the wrong direction to get to space? I thought space was up? Well, yes. There are a few things going on here. One is that if you travel far enough in a straight line, the Earth will curve away from you and you will eventually find yourself in space. Another is that if you shape the projectile such that its center of pressure is in the right place relative to its center of mass, then it will naturally angle upward when it hits the atmosphere. Lastly, if you are trying to get into orbit, most of the velocity you need is actually horizontal anyway. In fact, if and when you reach a circular orbit, you will find that all of your velocity is horizontal. This means that there is no way to get into orbit purely ballistically, with a single impulse imparted from Earth.
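To put a rough number on "most of the velocity you need is horizontal" (standard textbook values here, not figures from Longshot), the speed of a circular orbit follows directly from Earth's gravity:

```python
import math

# Circular orbital speed: v = sqrt(mu / r), with mu = G * M_earth.
mu = 3.986e14            # Earth's gravitational parameter, m^3/s^2
r_earth = 6.371e6        # mean Earth radius, m
altitude = 200e3         # a low parking orbit, m

v_circular = math.sqrt(mu / (r_earth + altitude))
print(f"circular orbit speed at 200 km ≈ {v_circular / 1000:.1f} km/s")  # ~7.8 km/s
```

All of that roughly 7.8 km/s is parallel to the ground, which is why a projectile launched from the surface always needs at least one more burn once it gets up there, as described next.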
Any satellite, for instance, launched via this system will need its own rocket propulsion in order to circularize the orbit once it reaches altitude (even leaving aside continual orbital adjustments during its service lifetime). But we’re now talking about a relatively small rocket with a small amount of fuel, not the big multi-stage things that you need to blast off from the surface. And presumably someday we will be delivering food, fuel, tools, etc. to space in packages that just need to be caught by whoever is receiving them.

Longshot estimates that this system, like Starship or the space pier, could get launch costs down to about $10/kg. This might be cheap enough that launch prices could be zero, subsidized by contracts to buy fuel or maintenance, in a space-age version of “give away the razor and sell the blades.” Not only would this business model help grow the space economy, it would also prove wrong all the economists who have been telling us for decades that “there’s no such thing as a free launch.”

Mars could be terraformed in our lifetimes

Terraforming a planet sounds like a geological process, and so I had sort of thought that it would require geological timescales, or if it could really be accelerated, at least a matter of centuries or so. You drop off some algae or something on a rocky planet, and then your distant descendants return one day to find a verdant paradise. So I was surprised to learn that major changes on Mars could, in principle, be made on a schedule much shorter than a single human lifespan.

Let’s back up. Mars is a real fixer-upper of a planet. Its temperature varies widely, averaging about −60º C; its atmosphere is thin and mostly carbon dioxide. This severely depresses its real estate values. Suppose we wanted to start by significantly warming the planet. How do you do that?

Let’s assume Mars’s orbit cannot be changed—I mean, we’re going to get in enough trouble with the Sierra Club as it is—so the total flux of solar energy reaching the planet is constant. What we can do is to trap a bit more of that energy on the planet, and prevent it from radiating out into space. In other words, we need to enhance Mars’s greenhouse effect. And the way to do that is to give it a greenhouse gas.

Wait, we just said that Mars’s atmosphere is mostly CO2, which is a notorious greenhouse gas, so why isn’t Mars warm already? It’s just not enough: the atmosphere is very thin (less than 1% of the pressure of Earth’s atmosphere), and what CO2 there is only provides about 5º of warming. We’re going to need to add more GHG. What could it be? Well, for starters, given the volumes required, it should be composed of elements that already exist on Mars.

With the ingredients we have, what can we make? Could we get more CO2 in the atmosphere? There is more CO2 on/under the surface, in frozen form, but even that is not enough for the task. We need something else. What about CFCs? As a greenhouse gas, they are about four orders of magnitude more efficient than CO2, so we’d need a lot less of them. However, they require fluorine, which is very rare in the Martian soil, and we’d still need about 100 gigatons of it. This is not encouraging.

One thing Mars does have a good amount of is metal, such as iron, aluminum, and magnesium. Now metals, you might be thinking, are not generally known as greenhouse gases. But small particles of conductive metal, with the right size and shape, can act as one.
A recent paper found through simulation that “nanorods” about 9 microns long, half the wavelength of the infrared thermal radiation given off by a planet, would scatter that radiation back to the surface (Ansari, Kite, Ramirez, Steele, and Mohseni, “Warming Mars with artificial aerosol appears to be feasible”—no preprint online, but this poster seems to represent earlier work).

Suppose we aim to warm the planet by about 30º C, enough to melt surface water in the polar regions during the summer, and bring Mars much closer to Earth temperatures. AKRSM’s simulation says that we would need to put about 400 mg/m³ of nanorods into the Martian sky, an efficiency (in warming per unit mass) more than 2000x greater than previously proposed methods. The particles would settle out of the atmosphere slowly, at less than 1/100 the rate of natural Mars dust, so only about 30 liters/sec of them would need to be released continuously. If we used iron, this would require mining a million cubic meters of iron per year—quite a lot, but less than 1% of what we do on Earth. And the particles, like other Martian dust, would be lifted high in the atmosphere by updrafts, so they could be conveniently released from close to the surface.

Wouldn’t metal nanoparticles be potentially hazardous to breathe? Yes, but this is already a problem from Mars’s naturally dusty atmosphere, and the nanorods wouldn’t make it significantly worse. (However, this will have to be solved somehow if we’re going to make Mars habitable.)

Kite told me that if we started now, given the capabilities of Starship, we could achieve the warming in a mere twenty years. Most of that time is just getting equipment to Mars, mining the iron, manufacturing the nanorods, and then waiting about a year for Martian winds to mix them throughout the atmosphere. Since Mars has no oceans to provide thermal inertia, the actual warming after that point only takes about a month.

Kite is interested in talking to people about the design of the nanorod factory. He wants to get a size/weight/power estimate and an outline design for the factory, to make an initial estimate of how many Starship landings would be needed. Contact him at edwin.kite@gmail.com.

I have not yet gotten Kite and Longshot together to figure out if we can shoot the equipment directly to Mars using one really enormous space cannon.

Thanks to Reason, Mike Grace, and Edwin Kite for conversations and for commenting on a draft of this essay. Any errors or omissions above are entirely my own.
[Note that this article is a transcript of the video embedded above.]

Wichita Falls, Texas, went through the worst drought in its history in 2011 and 2012. For two years in a row, the area saw its average annual rainfall roughly cut in half, decimating the levels in the three reservoirs used for the city’s water supply. Looking ahead, the city realized that if the hot, dry weather continued, they would be completely out of water by 2015. Three years sounds like a long runway, but when it comes to major public infrastructure projects, it might as well be overnight. Between permitting, funding, design, and construction, three years barely gets you to the starting line. So the city started looking for other options. And they realized there was one source of water nearby that was just being wasted - millions of gallons per day just being flushed down the Wichita River. I’m sure you can guess where I’m going with this. It was the effluent from their sewage treatment plant.

The city asked the state regulators if they could try something that had never been done before at such a scale: take the discharge pipe from the wastewater treatment plant and run it directly into the purification plant that produces most of the city’s drinking water. And the state said no. So they did some more research and testing and asked again. By then, the situation had become an emergency. This time, the state said yes. And what happened next would completely change the way cities think about water. I’m Grady and this is Practical Engineering.

You know what they say, wastewater happens. It wasn’t that long ago that raw sewage was simply routed into rivers, streams, or the ocean to be carried away. Thankfully, environmental regulations put a stop to that, or at least significantly curbed the amount of wastewater being set loose without treatment. Wastewater plants across the world do a pretty good job of removing pollutants these days. In fact, I have a series of videos that go through some of the major processes if you want to dive deeper after this. In most places, the permits that allow these plants to discharge set strict limits on contaminants like organics, suspended solids, nutrients, and bacteria. And in most cases, they’re individualized. The permit limits are based on where the effluent will go, how that water body is used, and how well it can tolerate added nutrients or pollutants.

And here’s where you start to see the issue with reusing that water: “clean enough” is a sliding scale. Depending on how water is going to be used or what or who it’s going to interact with, our standards for cleanliness vary. If you have a dog, you probably know this. They should drink clean water, but a few sips of a mud puddle in a dirty street, and they’re usually just fine. For you, that might be a trip to the hospital. Natural systems can tolerate a pretty wide range of water quality, but when it comes to drinking water for humans, it should be VERY clean.

So the easiest way to recycle treated wastewater is to use it in ways that don’t involve people. That idea’s been around for a while. A lot of wastewater treatment plants apply effluent to land as a disposal method, avoiding the need for discharge to a natural water body. Water soaks into the ground, kind of like a giant septic system. But that comes with some challenges. It only works if you’ve got a lot of land with no public access, and a way to keep the spray from drifting into neighboring properties.
Easy at a small scale, but for larger plants, it just isn’t practical engineering. Plus, the only benefits a utility gets from the effluent are some groundwater recharge and maybe a few hay harvests per season. So, why not send the effluent to someone else who can actually put it to beneficial use? If only it were that simple.

As soon as a utility starts supplying water to someone else, things get complicated because you lose a lot of control over how the effluent is used. Once it's out of your hands, so to speak, it’s a lot harder to make sure it doesn’t end up somewhere it shouldn’t, like someone’s mouth. So, naturally, the permitting requirements become stricter. Treatment processes get more complicated and expensive. You need regular monitoring, sampling, and laboratory testing. In many places in the world, reclaimed water runs in purple pipes so that someone doesn’t inadvertently connect to the lines thinking they’re potable water. In many cases, you need an agreement in place with the end user, making sure they’re putting up signs, fences, and other means of keeping people from drinking the water. And then you need to plan for emergencies - what to do if a pipe breaks, if the effluent quality falls below the standards, or if a cross-connection is made accidentally. It’s a lot of work - time, effort, and cost - to do it safely and follow the rules.

And those costs have to be weighed against the savings that reusing water creates. In places that get a lot of rain or snow, it’s usually not worth it. But in many US states, particularly those in the southwest, this is a major strategy to reduce the demand on fresh water supplies. Think about all the things we use water for where its cleanliness isn’t that important. Irrigation is a big one - crops, pastures, parks, highway landscaping, cemeteries - but that’s not all. Power plants use huge amounts of water for cooling. Street sweeping, dust control. In nearly the entire developed world, we use drinking-quality water to flush toilets!

You can see where there might be cases where it makes good sense to reclaim wastewater, and despite all the extra challenges, its use is fairly widespread. One of the first plants was built in 1926 at Grand Canyon Village, which supplied reclaimed water to a power plant and for use in steam locomotives. Today, these systems can be massive, with miles and miles of purple pipes run entirely separate from the freshwater piping. I’ve talked about this a bit on the channel before. I used to live near a pair of water towers in San Antonio that were at two different heights above ground. That just didn’t make any sense until I realized they weren’t connected; one of them was for the reclaimed water system that didn’t need as much pressure in the lines. Places like Phoenix, Austin, San Antonio, Orange County, Irvine, and Tampa all have major water reclamation programs. And it’s not just a US thing. Abu Dhabi, Beijing, and Tel Aviv all have infrastructure to make beneficial use of treated municipal wastewater, just to name a few.

Because of the extra treatment and requirements, many places put reclaimed water in categories based on how it gets used. The higher the risk of human contact, the tighter the pollutant limits get. For example, if a utility is just selling effluent to farmers, ranchers, or for use in construction, exposure to the public is minimal. Disinfecting the effluent with UV or chlorine may be enough to meet requirements. And often that’s something that can be added pretty simply to an existing plant.
But many reclaimed water users are things like golf courses, schoolyards, sports fields, and industrial cooling towers, where people are more likely to be exposed. In those cases, you often need a sewage plant specifically designed for the purpose or at least major upgrades to include what the pros call tertiary treatment processes - ways to target pollutants we usually don’t worry about and improve the removal rates of the ones we do. These can include filters to remove suspended solids, chemicals that bind to nutrients, and stronger disinfection to more effectively kill pathogens.

This creates a conundrum, though. In many cases, we treat wastewater effluent to higher standards than we normally would in order to reclaim it, but only for nonpotable uses, with strict regulations about human contact. But if it’s not being reclaimed, the quality standards are lower, and we send it downstream. If you know how rivers work, you probably see the inconsistency here. Because in many places, down the river, is the next city with its water purification plant whose intakes, in effect, reclaim that treated sewage from the people upstream.

This isn’t theoretical - it’s just the reality of how humans interact with the water cycle. We’ve struggled with the problems it causes for ages. In 1906, Missouri sued Illinois in the Supreme Court when Chicago reversed their river, redirecting its water (and all the city’s sewage) toward the Mississippi River. If you live in Houston, I hate to break it to you, but a big portion of your drinking water comes from the flushes and showers in Dallas. There have been times when wastewater effluent makes up half of the flow in the Trinity River.

But the question is: if they can do it, why can’t we? If our wastewater effluent is already being reused by the city downstream to purify into drinking water, why can’t we just keep the effluent for ourselves and do the same thing? And the answer again is complicated.

It starts with what’s called an environmental buffer. Natural systems offer time to detect failures, dilute contaminants, and even clean the water a bit—sunlight disinfects, bacteria consume organic matter. That’s the big difference in one city, in effect, reclaiming water from another upstream. There’s nature in between. So a lot of water reclamation systems, called indirect potable reuse, do the same thing: you discharge the effluent into a river, lake, or aquifer, then pull it out again later for purification into drinking water. By then, it’s been diluted and treated somewhat by the natural systems.

Direct potable reuse projects skip the buffer and pipe straight from one treatment plant to the next. There’s no margin for error provided by the environmental buffer. So, you have to engineer those same protections into the system: real-time monitoring, alarms, automatic shutdowns, and redundant treatment processes.

Then there’s the issue of contaminants of emerging concern: pharmaceuticals, PFAS [P-FAS], personal care products - things that pass through people or households and end up in wastewater in tiny amounts. Individually, they’re in parts per billion or trillion. But when you close the loop and reuse water over and over, those trace compounds can accumulate. Many of these aren’t regulated because they’ve never reached concentrations high enough to cause concern, or there just isn’t enough knowledge about their effects yet. That’s slowly changing, and it presents a big challenge for reuse projects.
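The accumulation concern is easy to see with a toy mass-balance model (illustrative numbers of my own, not from the video): suppose each pass through the loop adds a small dose of a persistent compound, and treatment removes only a fraction of whatever is present.

```python
# Toy model of a persistent trace contaminant in a closed water loop.
# All numbers are hypothetical, for illustration only.
dose_per_pass = 10.0   # ng/L added by users each time the water cycles
removal = 0.10         # fraction removed by treatment on each pass

conc = 0.0
for cycle in range(20):
    conc = (conc + dose_per_pass) * (1 - removal)

# Steady state of c = (c + d) * (1 - r)  =>  c = d * (1 - r) / r
steady_state = dose_per_pass * (1 - removal) / removal
print(f"after 20 cycles: {conc:.0f} ng/L (steady state ≈ {steady_state:.0f} ng/L)")
```

With 90 percent removal the loop settles near the single-pass level, but with only 10 percent removal the same small dose builds up to roughly ten times it - which is part of why these compounds get so much more attention in reuse projects than in once-through systems.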
They can be dealt with at the source by regulating consumer products, encouraging proper disposal of pharmaceuticals (instead of flushing them), and imposing pretreatment requirements for industries. It can also happen at the treatment plant with advanced technologies like reverse osmosis, activated carbon, advanced oxidation, and bio-reactors that break down micro-contaminants. Either way, it adds cost and complexity to a reuse program.

But really, the biggest problem with wastewater reuse isn’t technical - it’s psychological. The so-called “yuck factor” is real. People don’t want to drink sewage. Indirect reuse projects have a big benefit here. With some nature in between, it’s not just treated wastewater; it’s a natural source of water with treated wastewater in it. It’s kind of a story we tell ourselves, but we lose the benefit of that with direct reuse: Knowing your water came from a toilet—even if it’s been purified beyond drinking water standards—makes people uneasy.

You might not think about it, but turning the tap on, putting that water in a glass, and taking a drink is an enormous act of trust. Most of us don’t understand water treatment and how it happens at a city scale. So that trust that it’s safe to drink largely comes from seeing other people do it and past experience of doing it over and over and not getting sick. The issue is that, when you add one bit of knowledge to that relative void of understanding - this water came directly from sewage - it throws that trust off balance. It forces you to rely not on past experience but on the people and processes in place, most of which you don’t understand deeply, and generally none of which you can actually see. It’s not as simple as just revulsion. It shakes up your entire belief system. And there’s no engineering fix for that.

Especially for direct potable reuse, public trust is critical. So on top of the infrastructure, these programs also involve major public awareness campaigns. Utilities have to put themselves out there, gather feedback, respond to questions, be empathetic to a community’s values, and try to help people understand how we ensure water quality, no matter what the source is. But also, like I said, a lot of that trust comes from past experience. Not everyone can be an environmental engineer or licensed treatment plant operator. And let’s be honest - utilities can’t reach everyone. How many public meetings about water treatment have you ever attended? So, in many places, that trust is just going to have to be built by doing it right, doing it well, and doing it for a long time.

But, someone has to be first. In the U.S., at least on the city scale, that drinking water guinea pig was Wichita Falls. They launched a massive outreach campaign, invited experts for tours, and worked to build public support. But at the end of the day, they didn’t really have a choice. The drought really was that severe. They spent nearly four years under intense water restrictions. Usage dropped to a third of normal demand, but it still wasn’t enough. So, in collaboration with state regulators, they designed an emergency direct potable reuse system. They literally helped write the rules as they went, since no one had ever done it before. After two months of testing and verification, they turned on the system in July 2014. It made national headlines. The project ran for exactly one year. Then, in 2015, a massive flood ended the drought and filled the reservoirs in just three weeks.
The emergency system was always meant to be temporary. Water essentially went through three treatment plants: the wastewater plant, a reverse osmosis plant, and then the regular water purification plant. That’s a lot of treatment, which is a lot of expense, but they needed to have the failsafe and redundancy to get the state on board with the project. The pipe connecting the two plants was above ground and later repurposed for the city’s indirect potable reuse system, which is still in use today.

In the end, they reclaimed nearly two billion gallons of wastewater as drinking water. And they did it with 100% compliance with the standards. But more importantly, they showed that it could be done, essentially unlocking a new branch on the skill tree of engineering that other cities can emulate and build on.
A version of this post originally appeared on Tedium, Ernie Smith’s newsletter, which hunts for the end of the long tail.

For roughly three decades, the JPEG has been the World Wide Web’s primary image format. But it wasn’t the one the Web started with. In fact, the first mainstream graphical browser, NCSA Mosaic, didn’t initially support inline JPEG files—just inline GIFs, along with a couple of other formats forgotten to history. However, the JPEG had many advantages over the format it quickly usurped.

Despite not appearing together right away—inline JPEG support first appeared in Netscape in 1995, three years after the image standard was officially published—the JPEG and the web browser fit together naturally. JPEG files degraded more gracefully than GIFs, retaining more of the picture’s initial form—and that allowed the format to scale to greater levels of success. While it wasn’t capable of animation, it progressively expanded from something a modem could pokily render to a format that was good enough for high-end professional photography.

For the internet’s purposes, the degradation was the important part. But it wasn’t the only thing that made the JPEG immensely valuable to the digital world. An essential part was that it was a documented standard built by numerous stakeholders.

The GIF was a de facto standard. The JPEG was an actual one

How important is it that JPEG was a standard? Let me tell you a story. During a 2013 New York Times interview conducted just before he received an award honoring his creation, GIF creator Steve Wilhite stepped into a debate he unwittingly created. Simply put, nobody knew how to pronounce the acronym for the image format he had fostered, the Graphics Interchange Format. He used the moment to attempt to set the record straight—it was pronounced like the peanut butter brand: “It is a soft ‘G,’ pronounced ‘jif.’ End of story,” he said.

I posted a quote from Wilhite on my popular Tumblr around that time, a period when the social media site was the center of the GIF universe. And soon afterward, my post got thousands of reblogs—nearly all of them disagreeing with Wilhite. Soon, Wilhite’s quote became a meme.

The situation illustrates how Wilhite, who died in 2022, did not develop his format by committee. He could say it sounded like “JIF” because he built it himself. He was handed the project as a CompuServe employee in 1987; he produced the object, and that was that. The initial document describing how it works? Dead simple. 38 years later, we’re still using the GIF—but it never rose to the same prevalence as the JPEG.

The JPEG, which formally emerged about five years later, was very much not that situation. Far from it, in fact—it’s the difference between a de facto standard and an actual one. And that proved essential to its eventual ubiquity.

We’re going to degrade the quality of this image throughout this article. At its full image size, it’s 13.7 megabytes. Image: Irina Iriser

How the JPEG format came to life

Built with input from dozens of stakeholders, the Joint Photographic Experts Group ultimately aimed to create a format that fit everyone’s needs. (Reflecting its committee-led roots, there would be no confusion about the format’s name—an acronym of the organization that designed it.) And when the format was finally unleashed on the world, it was the subject of a more than 600-page book.
How the JPEG format came to life

Built with input from dozens of stakeholders, the Joint Photographic Experts Group ultimately aimed to create a format that fit everyone's needs. (Reflecting its committee-led roots, there would be no confusion about the format's name—an acronym of the organization that designed it.) And when the format was finally unleashed on the world, it was the subject of a more than 600-page book. JPEG: Still Image Data Compression Standard, written by IBM employees and JPEG organization stakeholders William B. Pennebaker and Joan L. Mitchell, describes a multimedia landscape held back by the lack of a way to balance photorealistic images with immediacy. Standardization, they believed, could fix this.

"The problem was not so much the lack of algorithms for image compression (as there is a long history of technical work in this area)," the authors wrote, "but, rather, the lack of a standard algorithm—one which would allow an interchange of images between diverse applications."

And they were absolutely right. For more than 30 years, JPEG has made high-quality, high-resolution photography accessible in operating systems far and wide. Although we no longer need to compress JPEGs to within an inch of their life, having that capability helped enable the modern internet.

As the book notes, Mitchell and Pennebaker were given IBM's support to pursue this research and work with the JPEG committee, and that support led them to develop many of the JPEG format's foundational patents. As described in patents Mitchell and Pennebaker filed in 1988, IBM and other members of the JPEG standards committee, such as AT&T and Canon, were developing ways to use compression to make high-quality images easier to deliver in constrained settings. Each member brought its own needs to the process: Canon, obviously, was more focused on printers and photography, while AT&T's interests were tied to data transmission. Together, the companies left behind a standard that has stood the test of time.

All this means, funnily enough, that the first place a program capable of using JPEG compression appeared was not MacOS or Windows, but OS/2—a fascinating-but-failed graphical operating system created by Pennebaker and Mitchell's employer, IBM. As early as 1990, OS/2 supported the format through the OS/2 Image Support application.

At 50 percent of its initial quality, the image is down to about 2.6 MB. By dropping half of the image's quality, we brought it down to one-fifth of the original file size. Original image: Irina Iriser

What a JPEG does when you heavily compress it

The thing that differentiates a JPEG file from a PNG or a GIF is how the data degrades as you compress it. The goal for a JPEG image is to still look like a photo when all is said and done, even if some compression is necessary to make it all work at a reasonable size. That way, you can display something that looks close to the original image in fewer bytes. Or, as Pennebaker and Mitchell put it, "the most effective compression is achieved by approximating the original image (rather than reproducing it exactly)."

Central to this is a compression process called the discrete cosine transform (DCT), a lossy form of compression encoding heavily used in all sorts of compressed formats, most notably in digital audio and signal processing. Essentially, it delivers a lower-quality product by removing details, while still keeping the heart of the original product through approximation. The more aggressively the transformed data is quantized, the more compressed the final result.

The algorithm, developed by researchers in the 1970s, essentially takes a grid of data and treats it as if you're controlling its frequency with a knob. The data rate is controlled like water from a faucet: the more data you want, the higher the setting. DCT allows a trickle of data to still come out in highly compressed situations, even if it means a slightly compromised result. In other words, you may not keep all the data when you compress it, but DCT allows you to keep the heart of it.
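To make the faucet analogy concrete, here is a minimal sketch, not drawn from the article, of DCT-based lossy compression applied to a single 8×8 block of pixel values using NumPy and SciPy. The uniform quantization step is an arbitrary illustrative parameter; a real JPEG encoder level-shifts the pixels, uses per-frequency quantization tables, and entropy-codes the surviving coefficients.

```python
import numpy as np
from scipy.fft import dctn, idctn

# A smooth 8x8 block of grayscale values (a simple ramp standing in for a
# patch of sky or skin, the kind of region DCT compresses very well).
block = np.add.outer(np.arange(8.0), np.arange(8.0)) * 16.0

# Forward 2-D DCT: describe the block as a weighted sum of cosine patterns,
# from a flat average up to fine checkerboard-like detail.
coeffs = dctn(block, norm="ortho")

# The lossy step: quantize the coefficients. A bigger step discards more
# detail -- this is the "knob" that controls how much data trickles through.
step = 50.0
quantized = np.round(coeffs / step) * step

# The inverse DCT rebuilds an approximation of the block from what survives.
approx = idctn(quantized, norm="ortho")

print("nonzero coefficients kept:", int(np.count_nonzero(quantized)), "of 64")
print("mean absolute error per pixel:", round(float(np.abs(block - approx).mean()), 2))
```

A real encoder repeats this for every 8×8 block of every channel, which is why very heavy compression produces the familiar blocky artifacts.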
(See this video for a more technical but still somewhat easy-to-follow description of DCT.)

DCT is everywhere. If you have ever seen a streaming video or an online radio stream degrade in quality because your bandwidth suddenly declined, you've witnessed DCT-style compression at work in real time.

A JPEG file doesn't have to leverage the DCT in just one way, as JPEG: Still Image Data Compression Standard explains:

The JPEG standard describes a family of large image compression techniques, rather than a single compression technique. It provides a "tool kit" of compression techniques from which applications can select elements that satisfy their particular requirements.

The toolkit has four modes:

- Sequential DCT, which displays the compressed image in order, like a window shade slowly being rolled down
- Progressive DCT, which first displays the full image at its lowest level of detail, then sharpens it as more information rolls in
- Sequential lossless, which loads like the window shade but compresses the image without discarding any data
- Hierarchical mode, which combines the prior three—so an image might start with a progressive scan, add DCT detail as it loads, and end in a lossless final state

At the time the JPEG was being created, modems were extremely common. That meant images loaded slowly, making progressive DCT the most fitting mode for the early internet. Over time, the progressive mode has become less common, as many computers can simply load the sequential DCT in one fell swoop.
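To show how little separates the two most common modes in practice, here is a hedged sketch using the Pillow library (not something the article itself uses); the input filename is again a hypothetical stand-in.

```python
from PIL import Image  # requires the Pillow library

img = Image.open("forest.jpg")  # hypothetical input file

# Baseline ("sequential DCT") JPEG: scans are stored top to bottom, so on a
# slow connection the picture appears like a window shade being rolled down.
img.save("forest_baseline.jpg", "JPEG", quality=75)

# Progressive JPEG: the file stores a series of coarse-to-fine scans, so the
# whole picture appears blurry at first and then sharpens as data arrives.
img.save("forest_progressive.jpg", "JPEG", quality=75, progressive=True)
```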
That same forest, saved at 5 percent quality. Down to about 419 kilobytes. Original image: Irina Iriser

When an image is compressed with DCT, the change tends to be less noticeable in busier, more textured areas of the picture, like hair or foliage. Those areas are harder to compress, which means they keep their integrity longer. It tends to be more noticeable, however, with solid colors or in areas where the image sharply changes from one color to another—like text on a page. Ever screenshot a social media post, only for it to look noisy? Congratulations, you just made a JPEG file. Other formats, like PNG, do better with text, because their compression is lossless. (Side note: PNG's compression format, DEFLATE, was designed by Phil Katz, who also created the ZIP format. The PNG format uses it in part because it was a license-free compression format. So it turns out the brilliant coder with the sad life story improved the internet in multiple ways before his untimely passing.)

In many ways, the JPEG is one tool in our image-making toolkit. Despite its age and maturity, it remains one of our best options for sharing photos on the internet. But it is not a tool for every setting—despite the fact that, like a wrench sometimes used as a hammer, we often leverage it that way.

Forgent Networks claimed to own the JPEG's defining algorithm

The JPEG format gained popularity in the '90s for reasons beyond the quality of the format itself. Patents also played a role: Starting in 1994, the tech company Unisys began demanding licensing fees from users of GIF files, which relied on a compression patent the company owned. This made the free-to-use JPEG more popular. (This situation also led to the creation of the patent-free PNG format.) But even though the JPEG was standards-based, it could still have faced the same fate as the GIF, thanks to the quirks of the patent system.

A few years before the file format came to life, a pair of Compression Labs employees filed a patent application that dealt with the compression of motion graphics. By the time anyone noticed its similarity to JPEG compression, the format was ubiquitous.

Our forest, saved at 1 percent quality. This image is only about 239 KB in size, yet it's still easily recognizable as the same photo. That's the power of the JPEG. Original image: Irina Iriser

Then in 1997, a company named Forgent Networks acquired Compression Labs. Forgent eventually spotted the patent and began filing lawsuits over it, a series of events it saw as a stroke of good luck. "The patent, in some respects, is a lottery ticket," Forgent Chief Financial Officer Jay Peterson told CNET in 2005. "If you told me five years ago that 'You have the patent for JPEG,' I wouldn't have believed it."

While Forgent's claim to own the JPEG compression algorithm was tenuous, the company ultimately saw more success in its legal battles than Unisys did. It earned more than $100 million from digital camera makers before the patent finally ran out of steam around 2007. The company also attempted to extract licensing fees from the PC industry; eventually, Forgent agreed to a modest $8 million settlement.

As the company took an increasingly aggressive approach to its acquired patent, it began to lose battles both in the court of public opinion and in actual courtrooms. Critics pounced on examples of prior art, while courts limited the patent's scope to motion-based uses like video. By 2007, Forgent's compression patent had expired—and its litigation-heavy approach to business went away with it. That year, the company became Asure Software, which now specializes in payroll and HR solutions. Talk about a reboot.

Why the JPEG won't die

The JPEG file format has served us well, and it has been difficult to remove from its perch. The JPEG 2000 format, for example, was intended to supplant it by offering more lossless options and better performance. JPEG 2000 is widely used by the Library of Congress and specialized sites like the Internet Archive; however, it is far less popular as an end-user format.

See the forest JPEG degrade from its full resolution to 1 percent quality in this GIF. Original image: Irina Iriser

Other image technologies have had somewhat more luck getting past the JPEG. The Google-supported WebP is popular with website developers (and controversial with end users), while the AVIF and HEIC formats, each developed by standards bodies, have largely outpaced both JPEG and JPEG 2000.

Still, the JPEG will be difficult to kill at this juncture. These days, the format is similar to MP3 or ZIP files—two legacy formats too popular and widely used to kill. Formats that compress images more efficiently are out there, but it's difficult to topple a format with a 30-year head start. Shaking off the JPEG is easier said than done, and I think most people will be fine keeping it around.

Ernie Smith is the editor of Tedium, a long-running newsletter that hunts for the end of the long tail.
This is the first touchpoint for science; we should make it more enticing
By weaving together existing railway lines, some cities can get the best transit in the world