More from Damn Interesting
Edmund Lawall must have felt cursed. He’d brought his family to New York in the late 1800s to carry on his father’s business as a pharmacist, but fate—or perhaps the city itself—seemed determined to drive him back out again. Lawall’s health had been in decline since their arrival, and his wife’s kidney disease had worsened, despite all of the tinctures and patent medicines available to his turn-of-the-century expertise. Not long after that, his business partner had been revealed as a crook, sending Lawall scrambling into bankruptcy court to convince the judge that his pharmacy had nothing to do with shady real estate dealings. Then, in the midst of the bankruptcy proceedings, an anonymous woman had staggered into Lawall’s drug store, collapsed on the floor, and died of unknown causes. Likely no one could have saved her, but it wasn’t exactly a ringing endorsement of the pharmaceutical services available at the corner of Eighth Street and Avenue C. None of that compared, however, to the morning of 27 June 1906, when a disheveled man in a medical coat burst through the narrow glass doors of the pharmacy, begging for protection. He was immediately followed by a young man with a revolver, and an angry crowd screaming in Yiddish. Lawall didn’t speak the language, but there was no mistaking the young man’s intent as he strode purposefully forward and raised his gun to the doctor’s head. It was a grim but recognizable tableau: the young man’s stance and grip were confident, clearly marking him as a budding gangster. The behavior of the crowd, on the other hand, made no sense at all. Innocent bystanders tended to run away from gang violence, yet the pushcart vendors and housewives surrounding the apparent holdup were not frightened, nor were they appealing for mercy. They were shouting, quite insistently, for the doctor’s execution. And everything seemed to indicate they were going to get what they wanted. In the weeks that followed, blame would be pointed in nearly every direction—because at that moment, unbeknownst to Lawall, similar scenes were playing out all over the neighborhood, with other doctors, teachers, reporters, and even utility workers being assaulted by hordes of people howling at them in Yiddish. By the time it was over, the incident would be measured as one of the largest riots ever in New York City, and the confrontation at Lawall’s Pharmacy would be mentioned only in passing, if at all. Another name, however, would be repeated over and over again: Adeline E. Simpson, the principal of Public School No. 110. Continue reading ▶
Iceland is known to the rest of the world as the land of Vikings and volcanoes, an island caught between continents at the extremities of the map. Remote and comparatively inhospitable, it was settled only as long ago as the 9th century, and has seen little additional in-migration since. Even today, more than 90 percent of Iceland’s 390,000 residents can trace their ancestry back to the earliest permanent inhabitants, a Nordic-Celtic mix. The tradition of the Norse sagas lives on in the form of careful record-keeping about ancestry—and a national passion for genealogy. In other words, it is not the place to stumble upon old family mysteries. But growing up in the capital city of Reykjavík in the 1950s, neurologist Dr. Kári Stefánsson heard stories that left him curious. Stefánsson’s father had come from Djúpivogur, an eastern coastal town where everyone still spoke of a Black man who had moved there early in the 19th century. “Hans Jónatan”, they called him—a well-liked shopkeeper who had arrived on a ship, married a spirited woman from a local farm, and become a revered member of the community. The local census did record a man by the name of Hans Jónatan, born in the Caribbean, who was working at the general store in Djúpivogur in the 19th century—but that was all. No images of the man had survived, and his time in Iceland was well before any other humans with African ancestry are known to have visited the island. If tiny, remote Djúpivogur did have a Black man arrive in the 19th century, the circumstances must have been unusual indeed. It was an intriguing puzzle—and solid grounds for a scientific investigation. Given the homogeneity of the baseline Icelandic population, the genetic signature of one relative newcomer with distinct ancestry might still stand out across a large sample of his descendants. Geneticists thus joined locals and history scholars, and they pieced together a story that bridged three continents. Continue reading ▶
It’s been a busy summer, and the large shortfall in donations last month has been demoralizing, so we’re taking a week off to rest and recuperate. The curated links section will be (mostly) silent, and behind the scenes we’ll be taking a brief break from our usual researching, writing, editing, illustrating, narrating, sound designing, coding, et cetera. We plan to return to normalcy on the 11th of September. (The word “normalcy” was not considered an acceptable alternative to “normality” until 14 May 1920, when then-presidential-candidate Warren G. Harding misused the mathematical term in a campaign speech, stating that America needed, “not nostrums, but normalcy.” He then integrated this error into his campaign slogan, “Return to Normalcy.” Also, the G in Warren G. Harding stood for “Gamaliel.”) While we are away, on 06 September 2023, Damn Interesting will be turning 18 years old. To celebrate, here are the first emojis to ever appear in the body of a Damn Interesting post: 🎂🎉🎁 If you become bored while we are away, you might try a little mobile game we’ve been working on called Wordwhile. It can be played alone, or with a friend. If you enjoy games like Scrabble and Wordle, you may find this one ENJOYABLE (75 points). Launch Wordwhile → And, as always, there are lots of ways to explore our back-catalog. View this post ▶
We’re not going to post things on Twitter/X anymore. The new owner keeps doing awful stuff. If you have enjoyed our mostly-daily curated links via the aforementioned collapsing service, we invite you to bookmark our curated links page, or follow us a number of other ways. Rather than linger any longer on this tedious topic, here are some home-grown dad jokes. If there is any order in this universe, the comments section will fill with more of the same.

Q: What is the flavor of a chair?
Do you even know the meaning of the word ‘rhetorical?’ Don’t answer that!
My friend bought an alarm clock that makes loud farting sounds in the morning. He’s in for a rude awakening.
You’re right, these ARE my orthopedic shoes. I stand corrected.
I want a good game of hide and seek, but skilled players are hard to find.
Like tight sweaters, corporate acquisitions are hard to pull off.
I was offered a job at the mirror factory. I could see myself working there.
Did you hear about the farmer in Colorado raising cannabis-fed cattle? The steaks are high.
Q: What is the best stocking stuffer?
I used to be addicted to soap, but I’ve gotten clean.
I finally worked up the courage to tell my hot female coworker how I felt. She felt the same. So we turned down the thermostat.
The universal remote: This changes everything.
Q: How fast are donkey trucks?
It smells like death in there, and not in a good way.
My dad demanded that I go fetch some water from that deep hole in the ground. He means well.
Calendar makers: Your days are numbered.
A: I enjoy cooking with ghee, but I don’t buy it, I make my own.
I will not rest until I find a cure for my insomnia.
I bought my wife a new refrigerator. I can’t wait to see her face light up when she opens it.
Did you hear about the hilarious thing that happened at the mandatory meeting? I guess you had to be there.
Remember that sweet grandmother on Twitter who thought that ‘lol’ meant ‘lots of love’? “Sorry to hear about your uncle passing. lol.”
Yesterday, we were standing at the edge of a cliff. Since then we have taken a huge step forward.
We had to cancel the big game of tag because somebody got hurt. It was touch and go there for a while.
“Of course you can count on me,” said the abacus.
IBS is genetic, you know. Runs in the family.
My grandfather once told me, “It’s worth investing in good speakers.” That was some sound advice.
Extreme camping is in tents.
The solar panel company wouldn’t let me pay for the installation. They said it was all on the house.
I was chopping herbs all day, and now my hands are quite fragrant. I’ve got too much thyme on my hands.
A weather balloon measures about 4 feet in diameter (adjusting for inflation).
A: Have you ever had a flatulence-based tea?
Like a German dietitian, I tend to see the wurst in people.
I don’t care for rulers. That’s where I draw the line.
Why did the farmer propose to his horse? He wanted a stable relationship.
I still think whiteboards are one of mankind’s most remarkable inventions.
The Earth has successfully rotated around its axis. Let’s call it a day.
My daughter dropped a brand new tube of toothpaste and it made a big mess. She was crestfallen.
You’ve got to hand it to customs agents: Your passport.
My friend tried to steal a box of lipstick for us, but she accidentally grabbed a box of glue sticks. My lips are sealed.
Elevators: They take things to a whole other level.
A friend gave me an expired pack of batteries. They were free of charge.
Comedy: To taste a bit like a comet.
A: How many times do I have to apologize?
My wife said that the battery in my hearing aid needed to be replaced. That was difficult to hear.
I asked the ski lift operator if I could get a free ride to the top of the mountain. He didn’t take me up on it.
What makes a sentence a tongue twister? It’s hard to say.
If you visit Mexico, remember to use the word “mucho.” It means a lot to them.
There are more hydrogen atoms in a single molecule of water than there are stars in the solar system.
To whoever discovered the number zero: Thanks for nothing.
View this post ▶
In the late 17th century, natural philosopher Isaac Newton was deeply uneasy with a new scientific theory that was gaining currency in Europe: universal gravitation. In correspondence with a scientific contemporary, Newton complained that it was “an absurdity” to suppose that “one body may act upon another at a distance through a vacuum.” The scientist who proposed this preposterous theory was Isaac Newton. He first articulated the idea in his widely acclaimed magnum opus Principia, wherein he explained, “I have not yet been able to discover the cause of these properties of gravity from phenomena and I feign no hypotheses […] It is enough that gravity does really exist and acts according to the laws I have explained.” Newton proposed that celestial bodies were not the sole sources of gravity in the universe; rather, all matter attracts all other matter with a force that corresponds to mass and diminishes rapidly with distance. He had been studying the motions of the six known planets (Mercury, Venus, Earth, Mars, Jupiter, and Saturn), and by expanding upon the laws of planetary motion developed by Johannes Kepler about eight decades earlier, he arrived at an equation for gravitational force F that seemed to match decades of data: F = G·m1·m2/r², where m1 and m2 are the masses of the objects, r is the distance between their centers of mass, and G is the gravitational constant (approximately 6.674 × 10⁻¹¹ m³ kg⁻¹ s⁻²). But this is only an approximation; humanity may never know the precise value because it is impossible to isolate any measuring apparatus from all of the gravity in the universe. Fellow astronomers found that Newton’s theory seemed to be accurate: universal gravitation appeared to reliably forecast the sometimes irregular motion of the planets even more closely than Kepler’s laws. In 1705, Queen Anne knighted Isaac Newton (though this honor was due to his work in politics, not for his considerable contributions to math or science). In the century that followed, Newton’s universal gravitation performed flawlessly. Celestial bodies appeared to adhere to the elegant theory, and in scientific circles, it began to crystallize into a law of nature. But in the early 19th century, cracks began to appear. When astronomer Alexis Bouvard used Newton’s equations to carefully calculate future positions of Jupiter and Saturn, they proved spectacularly accurate. However, when he followed up in 1821 with astronomical tables for Uranus, the outermost known planet, subsequent observations revealed that the planet was crossing the sky substantially more slowly than projected. The fault was not in Bouvard’s math; Uranus appeared to be violating the law of universal gravitation. Newton’s theory was again called into question in 1843 by a 32-year-old assistant astronomer at the Paris Observatory, Urbain Le Verrier. Le Verrier had been following the Uranus perturbations with great interest, while also compiling a painstaking record of the orbit of Mercury, the innermost known planet. He found that Mercury also departed from projections made by universal gravitation. Was universal gravitation a flawed theory? Or might undiscovered planets lurk in extra-Uranian and intra-Mercurial space, disturbing the orbits of the known planets? Astronomers around the world scoured the skies, seeking out whatever was perturbing the solar system. The answer, it turned out, was more bizarre than they could have supposed. Continue reading ▶
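For the curious, the inverse-square relationship above is easy to play with numerically. Here is a minimal Python sketch (our own illustration, not from the article; the Earth and Moon figures are rough textbook values) that evaluates F = G·m1·m2/r² for the Earth–Moon pair:

```python
# Minimal sketch of Newton's law of universal gravitation: F = G * m1 * m2 / r**2.
# Illustrative only; the Earth/Moon values below are rough textbook figures.

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def gravitational_force(m1: float, m2: float, r: float) -> float:
    """Attractive force (newtons) between point masses m1, m2 (kg) separated by r (m)."""
    return G * m1 * m2 / r**2

earth_mass = 5.972e24          # kg
moon_mass = 7.348e22           # kg
earth_moon_distance = 3.844e8  # m, mean center-to-center distance

print(f"{gravitational_force(earth_mass, moon_mass, earth_moon_distance):.2e} N")
# prints roughly 1.98e+20 N; doubling r would cut the force to one quarter,
# which is the "diminishes rapidly with distance" behavior described above
```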
More in science
Image generators are designed to mimic their training data, so where does their apparent creativity come from? A recent study suggests that it’s an inevitable by-product of their architecture. The post Researchers Uncover Hidden Ingredients Behind AI Creativity first appeared on Quanta Magazine.
I participated in a program about 15 years ago that looked at science and technology challenges faced by a subset of the US government. I came away thinking that such problems fall into three broad categories:

1. Actual science and engineering challenges, which require foundational research and creativity to solve.
2. Technology that may be fervently desired but is incompatible with the laws of nature, economic reality, or both.
3. Alleged science and engineering problems that are really human/sociology issues.

Part of science and engineering education and training is giving people the skills to recognize which problems belong to which categories. Confusing these can strongly shape the perception of whether science and engineering research is making progress. There has been a lot of discussion in the last few years about whether scientific progress (however that is measured) has slowed down or stagnated. For example, see here:

https://www.theatlantic.com/science/archive/2018/11/diminishing-returns-science/575665/
https://news.uchicago.edu/scientific-progress-slowing-james-evans
https://www.forbes.com/sites/roberthart/2023/01/04/where-are-all-the-scientific-breakthroughs-forget-ai-nuclear-fusion-and-mrna-vaccines-advances-in-science-and-tech-have-slowed-major-study-says/
https://theweek.com/science/world-losing-scientific-innovation-research

A lot of the recent talk is prompted by this 2023 study, which argues that despite the world having many more researchers than ever before (behold population growth) and more global investment in research, somehow “disruptive” innovations are fewer and farther between these days. (Whether this is an accurate assessment is not a simple matter to resolve; more on this below.) There is a whole tech bro culture that buys into this, however. For example, see this interview from last week in the New York Times with Peter Thiel, which points out that Thiel has been complaining about this for a decade and a half.

On some level, I get it emotionally. The unbounded future spun in a lot of science fiction seems very far away. Where is my flying car? Where is my jet pack? Where is my moon base? Where are my fusion power plants, my antigravity machine, my tractor beams, my faster-than-light drive? Why does the world today somehow not seem that different than the world of 1985, while the world of 1985 seems very different than that of 1945? Some of the folks who buy into this think that science is deeply broken somehow, that we’ve screwed something up, because we are not getting the future they think we were “promised”. Some of these people hold this as an internal justification underpinning the dismantling of the NSF, the NIH, and basically a huge swath of the research ecosystem in the US. These same people would likely say that I am part of the problem, and that I can’t be objective about this because the whole research ecosystem as it currently exists is a groupthink self-reinforcing spiral of mediocrity.

Science and engineering are inherently human ventures, and I think a lot of these concerns have an emotional component. My take at the moment is this:

- Genuinely transformational breakthroughs are rare. They often require a combination of novel insights, previously unavailable technological capabilities, and luck. They don’t come on a schedule.
- There is no hard and fast rule that guarantees continuous exponential technological progress. Indeed, in real life, exponential growth regimes never last. The 19th and 20th centuries were special.
- If we think of research as a quest for understanding, it’s inherently hierarchical. Civilizational collapses aside, you can only discover how electricity works once. You can only discover the germ theory of disease, the nature of the immune system, and vaccination once (though in the US we appear to be trying really hard to test that by forgetting everything). You can only discover quantum mechanics once, and doing so doesn’t imply that there will be an ongoing (infinite?) chain of discoveries of similar magnitude.
- People are bad at accurately perceiving rare events and their consequences, just as people have a serious problem evaluating risk or telling the difference between correlation and causation.
- We can’t always recognize breakthroughs when they happen. Sure, I don’t have a flying car. I do have a device in my pocket that weighs only a few ounces, gives me near-instantaneous access to the sum total of human knowledge, lets me video call people around the world, can monitor aspects of my fitness, and makes it possible for me to watch sweet videos about dogs.

The argument that we don’t have transformative, enormously disruptive breakthroughs as often as we used to, or as often as we “should”, is in my view based quite a bit on perception. Personally, I think we still have a lot more to learn about the natural world. AI tools will undoubtedly be helpful in making progress in many areas, but I think it is definitely premature to argue that the vast majority of future advances will come from artificial superintelligences, and that we can therefore go ahead and abandon the strategies that got us the remarkable achievements of the last few decades.

I think some of the loudest complainers about perceived slowing advancement (Thiel, for example) are software people. People who come from the software development world don’t always appreciate that physical infrastructure and understanding are hard, and that there are not always clever or even brute-force ways to get to an end goal. Solving foundational problems in molecular biology or quantum information hardware or photonics or materials is not the same as software development. (The tech folks generally know this on an intellectual level, but I don’t think all of them really understand it in their guts. That’s why so many of them seem to ignore real-world physical constraints when talking about AI.) Trying to apply software-development-inspired approaches to science and engineering research isn’t bad as a component of a many-pronged strategy, but alone it may not give the desired results, as warned in part by this piece in Science this week.

More frequent breakthroughs in our understanding and capabilities would be wonderful. I don’t think dynamiting the US research ecosystem is the way to get us there, and hoping that we can dismantle everything because AI will somehow herald a new golden age seems premature at best.
Humans are dramatically changing the environment of the Earth in many ways. Only about 23% of the land surface (excluding Antarctica) is considered to be “wilderness”, and this share is rapidly decreasing. What wilderness remains mostly lies within managed conservation areas. Meanwhile, about 3% of the surface is considered urban. I could not find a […] The post Animals Adapting to Cities first appeared on NeuroLogica Blog.
The basis for much of modern electronics is a set of silicon technologies called CMOS, which stands for complementary metal-oxide-semiconductor devices and processes. "Complementary" means using a semiconductor (typically silicon) that is locally chemically doped so that you can have both n-type (carriers are negatively charged electrons in the conduction band) and p-type (carriers are positively charged holes in the valence band) material on the same substrate. With field-effect transistors (using oxide gate dielectrics), you can make very compact, comparatively low-power devices like inverters and logic gates. There are multiple approaches to implementing quantum information processing in solid-state platforms, with the idea that the scaling lessons of microelectronics (in terms of device density and reliability) can be applied. I think that essentially all of these avenues require cryogenic operating conditions: superconducting qubits need ultracold conditions both for superconductivity and to minimize extraneous quasiparticles and other decoherence sources, and semiconductor-based quantum dots (Intel's favorite) similarly need thermal perturbations and decoherence to be minimized. The wealth of solid-state quantum computing research is the driver for the historically enormous (to me, anyway) growth of dilution refrigerator manufacturing (see my last point here). So you eventually want to have thousands of error-corrected logical qubits at sub-Kelvin temperatures, which may involve millions of physical qubits at sub-Kelvin temperatures, all of which need to be controlled. Despite the absolute experimental fearlessness of people like John Martinis, you are not going to get this to work by running a million wires from room temperature into your dil fridge. The alternative people in this area have converged upon is to create serious CMOS control circuitry that can work at 4 K or below, so that much of the wiring does not need to go from the qubits all the way to room temperature. The materials and device engineering challenges in doing this are substantial! Power dissipation really needs to be minimized, and material properties at cryogenic conditions are not the same as those optimized for room temperature. There have been major advances in this area; examples include Google in 2019, Intel in 2021, IBM in 2024, and, this week, work from the University of New South Wales supported by Microsoft. In this most recent work, the aspect that I find most impressive is that the CMOS electronics are essentially a serious logic-based control board operating at millikelvin temperatures right next to the chip with the qubits (in this case, spins in quantum dots). I'm rather blown away that this works, and with sufficiently low power dissipation that the fridge is happy. This is very impressive, and there is likely a very serious future in store for cryogenic CMOS.
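To make the "complementary" idea concrete, here is a minimal Python sketch (our own illustration, not from the post, and an idealized switch model rather than real device physics). An nMOS transistor conducts when its gate is high and a pMOS transistor conducts when its gate is low, so in a static CMOS gate exactly one of the pull-up and pull-down networks is on for any input:

```python
# Idealized switch-level model of static CMOS logic (illustration only).
# nMOS: conducts when its gate is high. pMOS: conducts when its gate is low.

def nmos_conducts(gate: bool) -> bool:
    return gate

def pmos_conducts(gate: bool) -> bool:
    return not gate

def inverter(a: bool) -> bool:
    # pMOS pull-up connects the output to the supply; nMOS pull-down to ground.
    assert pmos_conducts(a) != nmos_conducts(a)  # exactly one network is on
    return pmos_conducts(a)                      # output is high only via the pull-up

def nand(a: bool, b: bool) -> bool:
    pull_up = pmos_conducts(a) or pmos_conducts(b)     # two pMOS in parallel
    pull_down = nmos_conducts(a) and nmos_conducts(b)  # two nMOS in series
    assert pull_up != pull_down  # complementary networks never conduct at once
    return pull_up

for a in (False, True):
    for b in (False, True):
        print(f"nand({a}, {b}) = {nand(a, b)}")
```

The assert lines capture why CMOS gates are comparatively low power in the ideal case: with one of the two complementary networks always off, no static current flows from supply to ground, and real devices dissipate mainly while switching (plus leakage).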
The Trump administration is outwardly hostile to clean energy sourced from solar and wind. But thanks to close ties to the fossil fuel industry and new technological breakthroughs, U.S. geothermal power may survive the GOP assaults on support for renewables and even thrive. Read more on E360 →