More from nanoscale views
I participated in a program about 15 years ago that looked at science and technology challenges faced by a subset of the US government. I came away thinking that such problems fall into three broad categories: (1) actual science and engineering challenges, which require foundational research and creativity to solve; (2) technology that may be fervently desired but is incompatible with the laws of nature, economic reality, or both; and (3) alleged science and engineering problems that are really human/sociology issues. Part of science and engineering education and training is giving people the skills to recognize which problems belong to which categories. Confusing them can strongly shape the perception of whether science and engineering research is making progress.

There has been a lot of discussion in the last few years about whether scientific progress (however it is measured) has slowed down or stagnated. For example, see here:

https://www.theatlantic.com/science/archive/2018/11/diminishing-returns-science/575665/
https://news.uchicago.edu/scientific-progress-slowing-james-evans
https://www.forbes.com/sites/roberthart/2023/01/04/where-are-all-the-scientific-breakthroughs-forget-ai-nuclear-fusion-and-mrna-vaccines-advances-in-science-and-tech-have-slowed-major-study-says/
https://theweek.com/science/world-losing-scientific-innovation-research

A lot of the recent talk is prompted by this 2023 study, which argues that despite the world having many more researchers than ever before (behold population growth) and more global investment in research, "disruptive" innovations are somehow coming less often, or are fewer and farther between these days. (Whether this is an accurate assessment is not a simple matter to resolve; more on this below.) There is a whole tech-bro culture that buys into this, however. For example, see this interview from last week in the New York Times with Peter Thiel, which points out that Thiel has been complaining about this for a decade and a half.

On some level, I get it emotionally. The unbounded future spun in a lot of science fiction seems very far away. Where is my flying car? Where is my jet pack? Where is my moon base? Where are my fusion power plants, my antigravity machine, my tractor beams, my faster-than-light drive? Why does the world today somehow not seem that different from the world of 1985, while the world of 1985 seems very different from that of 1945?

Some of the folks who buy into this think that science is deeply broken somehow - that we've screwed something up, because we are not getting the future they think we were "promised". Some of these people use this as an internal justification for dismantling the NSF, the NIH, and basically a huge swath of the research ecosystem in the US. These same people would likely say that I am part of the problem, and that I can't be objective about this because the whole research ecosystem as it currently exists is a groupthink, self-reinforcing spiral of mediocrity. Science and engineering are inherently human ventures, and I think a lot of these concerns have an emotional component.

My take at the moment is this: genuinely transformational breakthroughs are rare. They often require a combination of novel insights, previously unavailable technological capabilities, and luck. They don't come on a schedule. There is no hard and fast rule that guarantees continuous exponential technological progress. Indeed, in real life, exponential growth regimes never last. The 19th and 20th centuries were special.
If we think of research as a quest for understanding, it's inherently hierarchical. Civilizational collapses aside, you can only discover how electricity works once. You can only discover the germ theory of disease, the nature of the immune system, and vaccination once (though in the US we appear to be trying really hard to test that by forgetting everything). You can only discover quantum mechanics once, and doing so doesn't imply that there will be an ongoing (infinite?) chain of discoveries of similar magnitude.

People are bad at accurately perceiving rare events and their consequences, just as people have a serious problem evaluating risk or telling the difference between correlation and causation. We can't always recognize breakthroughs when they happen. Sure, I don't have a flying car. I do have a device in my pocket that weighs only a few ounces, gives me near-instantaneous access to the sum total of human knowledge, lets me video call people around the world, can monitor aspects of my fitness, and makes it possible for me to watch sweet videos about dogs. The argument that we don't have transformative, enormously disruptive breakthroughs as often as we used to, or as often as we "should", is in my view based quite a bit on perception.

Personally, I think we still have a lot more to learn about the natural world. AI tools will undoubtedly be helpful in making progress in many areas, but I think it is definitely premature to argue that the vast majority of future advances will come from artificial superintelligences, and that we can therefore go ahead and abandon the strategies that got us the remarkable achievements of the last few decades.

I think some of the loudest complainers (Thiel, for example) about perceived slowing advancement are software people. People who come from the software development world don't always appreciate that physical infrastructure and understanding are hard, and that there are not always clever or even brute-force ways to get to an end goal. Solving foundational problems in molecular biology or quantum information hardware or photonics or materials is not the same as software development. (The tech folks generally know this on an intellectual level, but I don't think all of them really understand it in their guts. That's why so many of them seem to ignore real-world physical constraints when talking about AI.) Trying to apply software-development-inspired approaches to science and engineering research isn't bad as a component of a many-pronged strategy, but alone it may not give the desired results - as warned in part by this piece in Science this week.

More frequent breakthroughs in our understanding and capabilities would be wonderful. I don't think dynamiting the US research ecosystem is the way to get us there, and hoping that we can dismantle everything because AI will somehow herald a new golden age seems premature at best.
The basis for much of modern electronics is a set of silicon technologies called CMOS, which stands for complementary metal oxide semiconductor devices and processes. "Complementary" means using semiconductor material (typically silicon) that is locally chemically doped so that you can have both n-type (carriers are negatively charged electrons in the conduction band) and p-type (carriers are positively charged holes in the valence band) material on the same substrate. With field-effect transistors (using oxide gate dielectrics), you can make very compact, comparatively low-power devices like inverters and logic gates.

There are multiple different approaches to trying to implement quantum information processing in solid-state platforms, with the idea that the scaling lessons of microelectronics (in terms of device density and reliability) can be applied. I think that essentially all of these avenues require cryogenic operating conditions; superconducting qubits need ultracold conditions both for superconductivity and to minimize extraneous quasiparticles and other decoherence sources. Semiconductor-based quantum dots (Intel's favorite) similarly need thermal perturbations and decoherence to be minimized. This wealth of solid-state quantum computing research is the driver for the historically enormous (to me, anyway) growth of dilution refrigerator manufacturing (see my last point here).

So you eventually want to have thousands of error-corrected logical qubits at sub-Kelvin temperatures, which may involve millions of physical qubits at sub-Kelvin temperatures, all of which need to be controlled. Despite the absolute experimental fearlessness of people like John Martinis, you are not going to get this to work by running a million wires from room temperature into your dil fridge (a back-of-envelope illustration of why is sketched at the end of this post).

[Fig. 1 from here.]

The alternative that people in this area have converged upon is to create serious CMOS control circuitry that can work at 4 K or below, so that a lot of the wiring does not need to go from the qubits all the way to room temperature. The materials and device engineering challenges in doing this are substantial! Power dissipation really needs to be minimized, and material properties under cryogenic conditions are not the same as those optimized for room temperature. There have been major advances in this - examples include Google in 2019, Intel in 2021, IBM in 2024, and, this week, folks at the University of New South Wales supported by Microsoft.

In this most recent work, the aspect that I find most impressive is that the CMOS electronics are essentially a serious logic-based control board operating at millikelvin temperatures right next to the chip with the qubits (in this case, spins in quantum dots). I'm rather blown away that this works, and with sufficiently low power dissipation that the fridge is happy. This is very impressive, and there is likely a very serious future in store for cryogenic CMOS.
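To get a rough feel for the wiring argument above, here is a minimal back-of-envelope sketch in Python. Every parameter (qubit count, lines per qubit, heat leak per line, available cooling power) is an illustrative assumption chosen for order-of-magnitude flavor, not a value taken from any of the linked papers.

```python
# Back-of-envelope sketch: why you can't just run a wire or two per physical
# qubit from room temperature into a dilution refrigerator.
# Every number below is an illustrative assumption.

n_physical_qubits = 1_000_000      # assumed qubit count for useful error correction
lines_per_qubit = 2                # assumed control + readout lines per qubit

heat_leak_per_line_watts = 1e-3    # assumed static heat leak per line at the 4 K stage (~1 mW)
cooling_power_4k_watts = 2.0       # assumed cooling budget at the 4 K stage (~2 W)

n_lines = n_physical_qubits * lines_per_qubit
total_heat_load_watts = n_lines * heat_leak_per_line_watts

print(f"Lines crossing 300 K -> 4 K: {n_lines:,}")
print(f"Estimated heat load at 4 K:  {total_heat_load_watts:,.0f} W")
print(f"Assumed 4 K cooling budget:  {cooling_power_4k_watts:.0f} W")
print(f"Over budget by a factor of:  {total_heat_load_watts / cooling_power_4k_watts:,.0f}")
```

Even with generous assumptions, the brute-force approach misses by orders of magnitude, which is exactly why multiplexed cryogenic control electronics sitting close to the qubits are so attractive: the number of lines crossing the room-temperature boundary can then grow much more slowly than the number of qubits.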
As usual, I hope to write more about particular physics topics soon, but in the meantime I wanted to share a sampling of news items.

First, it's a pleasure to see new long-form writing about condensed matter subjects, in an era where science blogging has unquestionably shrunk compared to its heyday. The new Quantum Matters substack by Justin Wilson (and William Shelton) looks like it will be a fun place to visit often. Similar in spirit, I've also just learned about the Knowmads podcast (here on youtube), put out by Prachi Garella and Bhavay Tyagi, two doctoral students at the University of Houston. Fun interviews with interesting scientists about their science and how they get it done.

There have been some additional news bits relevant to the present research funding/university-government relations mess. Earlier this week, 200 business leaders published an open letter about how slashing support for university research will seriously harm US economic competitiveness. More of this, please. I continue to be surprised by how quiet technology-related, pharma, and finance companies are being, at least in public. Crushing US science and engineering university research will lead to serious personnel and IP shortages down the line, which is definitely poor for US standing. Again, now is the time to push back on legislators about cuts mooted in the presidential budget request.

The would-be 15% indirect cost rate at NSF has been found to be illegal, in a summary court judgment released yesterday. (Brief article here, pdf of the ruling here.) Along these lines, there are continued efforts to develop proposals for how to reform/alter indirect cost rates in a far less draconian manner. These are backed by collective organizations like the AAU and COGR. If you're interested in this, please go here, read the ideas, and give some feedback. (Note for future reference: the Joint Associations Group (JAG) may want to re-think their acronym. In local slang where I grew up, the word "jag" does not have pleasant connotations.)

The punitive attempt to prevent Harvard from taking international students has also been stopped for now in the courts.
Again, as a distraction from persistently concerning news, here is a science mystery of which I was previously unaware.

The role of approximations in physics is something that very often comes as a shock to new students. Because physics is all about quantitative understanding of physical phenomena, and because of the typical way we teach math and science in K-12 education, there is a cultural expectation out there that we should be able to get exact solutions to many of our attempts to model nature mathematically. In practice, though, constructing physics theories is almost always about approximations, either in the formulation of the model itself (e.g., let's consider the motion of an electron about the proton in the hydrogen atom by treating the proton as infinitely massive and of negligible size) or in solving the mathematics (e.g., we can't write an exact analytical solution of the problem when including relativity, but we can do an order-by-order expansion in powers of \(p/mc\)). Theorists have a very clear understanding of what it means to say that an approximation is "well controlled" - you know on both physical and mathematical grounds that a series expansion actually converges, for example.

Some problems are simpler than others, just by virtue of having a very limited number of particles and degrees of freedom, and some problems also lend themselves to high-precision measurements. The hydrogen atom problem is an example of both features: just two spin-1/2 particles (if we approximate the proton as a lumped object), and readily accessible to optical spectroscopy to measure the energy levels for comparison with theory. We can do perturbative treatments to account for other effects of relativity, spin-orbit coupling, interactions with nuclear spin, and quantum electrodynamic corrections (here and here). A hallmark of atomic physics is the remarkable precision and accuracy of these calculations when compared with experiment. (The \(g\)-factor of the electron is experimentally known to a part in \(10^{10}\) and matches calculations out to fifth order in \(\alpha = e^2/(4 \pi \epsilon_{0}\hbar c)\).)

The helium atom is a bit more complicated, having two electrons and a more complicated nucleus, but over the last hundred years we've learned a lot about how to do both the calculations and the spectroscopy. As explained here, there is a problem. It is possible to put helium into an excited metastable triplet state with one electron in the \(1s\) orbital, the other electron in the \(2s\) orbital, and their spins in a triplet configuration. One can then measure the ionization energy of that system - the minimum energy required to kick an electron out of the atom and off to infinity. This energy can be calculated to seventh order in \(\alpha\), and the theorists think that they're accounting for everything, including the finite (but tiny) size of the nucleus.

The issue: the calculation and the experiment differ by about 2 nano-eV. That may not sound like a big deal, but the experimental uncertainty is supposed to be a little over 0.08 nano-eV, and the uncertainty in the calculation is estimated to be 0.4 nano-eV. This works out to something like a 9\(\sigma\) discrepancy. Most recently, a quantitatively very similar discrepancy shows up in measurements performed in 3He rather than 4He. This is pretty weird.
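To put that mismatch in the frequency units that precision spectroscopists actually report, a rough conversion via \(\Delta \nu = \Delta E / h\) (a back-of-envelope aside, not a number quoted in the papers) gives

\[
\Delta \nu \approx \frac{2 \times 10^{-9}\ \mathrm{eV}}{4.14 \times 10^{-15}\ \mathrm{eV\,s}} \approx 5 \times 10^{5}\ \mathrm{Hz} \approx 0.5\ \mathrm{MHz},
\]

while the quoted experimental uncertainty of about 0.08 nano-eV corresponds to roughly 20 kHz - tiny in absolute terms, yet comfortably within reach of modern frequency metrology.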
Historically, it would seem that the most likely answer is a problem with the measurements (though that seems doubtful, since precision spectroscopy is such a well-developed set of techniques), with the calculation (though that also seems weird, since the relevant physics seems well known), or with both. The exciting possibility is that somehow there is new physics at work that we don't understand, but that's a long shot. Still, something fun to consider as my colleagues (and I) try to push back on the dismantling of US scientific research.
More in science
Researchers Uncover Hidden Ingredients Behind AI Creativity (Quanta Magazine): Image generators are designed to mimic their training data, so where does their apparent creativity come from? A recent study suggests that it's an inevitable by-product of their architecture.
Animals Adapting to Cities (NeuroLogica Blog): Humans are dramatically changing the environment of the Earth in many ways. Only about 23% of the land surface (excluding Antarctica) is considered to be "wilderness", and this is rapidly decreasing. What wilderness is left is also mostly managed conservation areas. Meanwhile, about 3% of the surface is considered urban. I could not find a […]
The Trump administration is outwardly hostile to clean energy sourced from solar and wind. But thanks to close ties to the fossil fuel industry and new technological breakthroughs, U.S. geothermal power may survive the GOP assaults on support for renewables and even thrive. (E360)