Two very brief items of interest:

This article is a nice popular discussion of the history of fiber optics and the remarkable progress it's made for telecommunications. If you're interested in a more expansive but very accessible take on this, I highly recommend City of Light by Jeff Hecht (not to be confused with Eugene Hecht, author of the famous optics textbook).

I stumbled upon an interesting effort by Yokogawa, the Japanese electronics manufacturer, to provide an alternative path for semiconductor device prototyping that they call minimal fab. The idea is that instead of prototyping circuits on 200 mm wafers or larger (200 mm and 300 mm are the industry standards for large-scale production; efforts to go up to 450 mm wafers have been shelved for now), there are times when it makes sense to work on 12.5 mm substrates (a quick sense of the area difference is sketched below). Their setup uses maskless photolithography and is intended to be used without needing a cleanroom. Admittedly, this strongly limits it in terms of device size to...
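For a rough sense of scale, here is a minimal sketch (plain Python; the diameters are the ones quoted above, and the comparison is just circle areas) of how much patternable area a 12.5 mm substrate gives up relative to standard production wafers:

```python
import math

def wafer_area_mm2(diameter_mm):
    """Area of a circular wafer, in mm^2."""
    return math.pi * (diameter_mm / 2) ** 2

minimal_substrate = wafer_area_mm2(12.5)
for diameter in (200, 300):
    ratio = wafer_area_mm2(diameter) / minimal_substrate
    print(f"A {diameter} mm wafer has ~{ratio:.0f}x the area of a 12.5 mm substrate")
# Prints ~256x for 200 mm and ~576x for 300 mm.
```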
5 months ago


More from nanoscale views

What is "static electricity"/"contact electrification"/triboelectricity?

An early physics demonstration that many of us see in elementary school is that of static electricity: an electrical insulator like a wool cloth or animal fur is rubbed on a glass or plastic rod, and suddenly the rod can pick up pieces of styrofoam or little bits of paper. Alternately, a rubber balloon is rubbed against a kid's hair, and afterward the balloon is able to stick to a wall with sufficient force that static friction keeps the balloon from sliding down the surface. The physics here is that when materials are rubbed together, there can be a net transfer of electrical charge from one to the other, a phenomenon called triboelectricity. The electrostatic attraction between net charge on the balloon and the polarizable surface of the wall is enough to hold up the balloon.

Balloons electrostatically clinging to a wall, from here.

The big mysteries are, how and why do charges transfer between materials when they are rubbed together? As I wrote about once before, this is still not understood, despite more than 2500 years of observations. The electrostatic potentials that can be built up through triboelectricity are not small. They can be tens of kV, enough to cause electrons accelerating across those potentials to emit x-rays when they smack into the positively charged surface. Whatever is going on, it's a way to effectively concentrate the energy from mechanical work into displacing charges. This is how Wimshurst machines and Van de Graaff generators work, even though we don't understand the microscopic physics of the charge generation and separation.

There are disagreements to this day about the mechanisms at work in triboelectricity, including the role of adsorbates, surface chemistry, whether the charges transferred are electrons or ions, etc. From how electronic charge transfer works between metals, or between metals and semiconductors, it's not crazy to imagine that somehow this should all come down to work functions or the equivalent. Depending on the composition and structure of materials, the electrons in there can be bound more tightly (energetically deeper compared to the energy of an electron far away, also called the "vacuum level") or more loosely (energetically shallower, closer to the energy of a free electron). It's credible that bringing two such materials into contact could lead to electrons "falling downhill" from the more loosely binding material into the more tightly binding one. That clearly is not the whole story, though, or this would've been figured out long ago.

This week, a new paper revealed an interesting wrinkle. The net preference for picking up or losing charge seems to depend very clearly on the history of repeated contacts. The authors used PDMS silicone rubber, and they find that repeated contacting can deterministically bake in a tendency for charge to flow one direction. Using various surface spectroscopy methods, they find no obvious differences at the PDMS surface before/after the contacting procedures, but charge transfer is affected.

My sneaking suspicion is that adsorbates will turn out to play a huge role in all of this. This may be one of those issues like friction (see here too), where there is a general emergent phenomenon (net charge transfer) that can take place via multiple different underlying pathways. Experiments in ultrahigh vacuum with ultraclean surfaces will undoubtedly show quantitatively different results than experiments in ambient conditions, but they may both show triboelectricity.
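As a rough sanity check on those "tens of kV" potentials, here is a minimal sketch (plain Python; the specific voltages are assumed, illustrative values) of the shortest x-ray wavelength an electron accelerated through such a potential can emit on impact, the standard Duane-Hunt limit λ_min = hc/(eV):

```python
# Minimal sketch, not from the post: Duane-Hunt limit for electrons accelerated
# through a triboelectric potential of V volts. The voltages below are assumed
# illustrative values, not measurements.
HC_EV_NM = 1239.84  # Planck constant times speed of light, in eV*nm

def duane_hunt_limit_nm(potential_volts):
    """Shortest emitted wavelength in nm; a single electron gains V eV of energy."""
    return HC_EV_NM / potential_volts

for kilovolts in (10, 30, 50):
    lam_pm = duane_hunt_limit_nm(kilovolts * 1e3) * 1000  # convert nm to pm
    print(f"{kilovolts} kV -> electron energy {kilovolts} keV, lambda_min ~ {lam_pm:.0f} pm")
# 30 kV gives ~41 pm, squarely in the x-ray band, consistent with the
# observation above that tribocharged surfaces can drive x-ray emission.
```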

a week ago 2 votes
The National Science Foundation - this is not business as usual

The National Science Foundation was created 75 years ago, at the behest of Vannevar Bush, who put together the famed study, Science, The Endless Frontier, in 1945. The NSF has played a critical role in a huge amount of science and engineering research since its inception, including advanced development of the laser, the PageRank algorithm that ended up in Google, and too many other contributions to list. The NSF funds university research as well as some national facilities.

Organizationally, the NSF is an independent agency, meaning that it doesn't reside under a particular cabinet secretary, though its Director is a presidential appointee who is confirmed by the US Senate. The NSF comprises a number of directorates (most relevant for readers of this blog are probably Mathematical and Physical Sciences; Engineering; and STEM Education, though there are several others). Within the directorates are divisions (for example, MPS → Division of Materials Research, Division of Chemistry, Division of Physics, Division of Mathematics, etc.). Within each division are a variety of programs, spanning from individual investigator grants, to medium and large center proposals, to group training grants, to individual graduate and postdoctoral fellowships. Each program is administered by one or more program officers, who are either scientists who have become civil servants or "rotators", academics who take a leave of absence from their university positions to serve at the NSF for some number of years.

The NSF is the only agency whose mission historically has explicitly included science education. The NSF's budget has been about $9B/yr (though until very recently there was supposedly bipartisan support for large increases), and 94% of its funds are spent on research, education, and related activities. NSF funds more than 1/4 of all basic research done at universities in the US, and it also funds tech development, like small business innovation grants.

The NSF, more than any other agency that funds physical science and engineering research, relies on peer review. Grants are reviewed by individual reviewers and/or panels. Compared to other agencies, the influence of program officers in the review process is minimal. If a grant doesn't excite the reviewers, it won't get funded. This has its pluses and minuses, but it's less of a personal networking process than at other agencies. The success rate for many NSF programs is low, averaging around 25% in DMR, and 15% or so for graduate fellowships. Every NSF program officer with whom I've ever interacted has been dedicated and professional.

Well, yesterday the NSF laid off 11% of its workforce. I had an exchange last night with a long-time NSF program director, who gave permission for me to share the gist, suitably anonymized. (I also corrected typos.) This person says that they want people to be aware of what's going on. They say that NSF leadership is apparently helping with layoffs, and that "permanent Program Directors (feds such as myself) will be undergoing RIF or Reduction In Force process within the next month or so. So far, through buyout and firing today we lost about 16% of the workforce, and RIF is expected to bring it up to 50%." When I asked further, this person said this was "fairly certain".

They went on: "Another danger is budget. We do not know what happens after the current CR [continuing resolution] ends March 14. A long shutdown or another CR are possible. For FY26 we are told about plans to reduce the NSF budget by 50%-75% - such reduction will mean no new awards for at least a year, elimination of divisions, merging of programs. Individual researchers and professional societies can help by raising the voice of objection. But realistically, we need to win the midterms to start real change. For now we are losing this battle. I can only promise you that NSF PDs are united as never before in our dedication to serve our communities of researchers and educators. We will continue to do so as long as we are here."

On a related note, here is a thread by a just-laid-off NSF program officer. Note that Congress has historically ignored presidential budget requests to cut NSF, but it's not at all clear that this can be relied upon now.

Voluntarily hobbling the NSF is, in my view, a terrible mistake that will take decades to fix. The argument that this is a fiscally responsible thing to do is weak. Total federal budget expenditures in FY24 were $6.75T. The NSF budget was $9B, or 0.13% of the total. The secretary of defense today said that their plan is to cut 8% of the DOD budget every year for the next several years. That's a reduction of 9 NSF budgets per year.

I fully recognize that many other things are going on in the world right now, and many agencies are under similar pressures, but I wanted to highlight the NSF in particular. Acting like this is business as usual, the kind of thing that happens whenever there is a change of administration, is disingenuous.

a week ago 2 votes
What are parastatistics?

While I could certainly write more about what is going on in the US these days (ahh, trying to dismantle organizations you don't understand), instead I want to briefly highlight a very exciting result from my colleagues, published in Nature last month. (I almost titled this post "Lies, Damn Lies, and (para)Statistics", but that sounds like I don't like the paper.)

When we teach students about the properties of quantum objects (and about thermodynamics), we often talk about the "statistics" obeyed by indistinguishable particles. I've written about aspects of this before. "Statistics" in this sense means: what happens mathematically to the multiparticle quantum state \(|\Psi\rangle\) when two particles are swapped? If we use the label \(\mathbf{1}\) to mean the set of quantum numbers associated with particle 1, etc., then the question is, how are \(|\Psi(\mathbf{1},\mathbf{2})\rangle\) and \(|\Psi(\mathbf{2},\mathbf{1})\rangle\) related to each other? We know that probabilities have to be conserved, so \(\langle \Psi(\mathbf{1},\mathbf{2}) | \Psi(\mathbf{1},\mathbf{2})\rangle = \langle \Psi(\mathbf{2},\mathbf{1}) | \Psi(\mathbf{2},\mathbf{1})\rangle\).

The usual situation is to assume \(|\Psi(\mathbf{2},\mathbf{1})\rangle = c |\Psi(\mathbf{1},\mathbf{2})\rangle\), where \(c\) is a complex number of magnitude 1. If \(c = 1\), which is sort of the "common sense" expectation from classical physics, the particles are bosons, obeying Bose-Einstein statistics. If \(c = -1\), the particles are fermions and obey Fermi-Dirac statistics. In principle, one could have \(c = \exp(i\alpha)\), where \(\alpha\) is some phase angle. Particles in that general case are called anyons, and I wrote about them here. Low energy excitations of electrons (fermions) confined in 2D in the presence of a magnetic field can act like anyons, but it seems there can't be anyons in higher dimensions.

Being imprecise, when particles are "dilute" -- "far" from each other in terms of position and momentum -- we typically don't really need to worry much about what kind of quantum statistics govern the particles. The distribution function - the average occupancy of a typical single-particle quantum state (labeled by a coordinate \(\mathbf{r}\), a wavevector \(\mathbf{k}\), and a spin \(\sigma\) as one possibility) - is much less than 1. When particles are much more dense, though, the quantum statistics matter enormously. At low temperatures, bosons can all pile into the (single-particle, in the absence of interactions) ground state - that's Bose-Einstein condensation. In contrast, fermions have to stack up into higher energy states, since FD statistics imply that no two indistinguishable fermions can be in the same state - this is the Pauli Exclusion Principle, and it's basically why solids are solid. If a gas of particles is at a temperature \(T\) and a chemical potential \(\mu\), then the distribution function as a function of energy \(\epsilon\) for fermions or bosons is given by \(f(\epsilon,\mu,T) = 1/(\exp((\epsilon-\mu)/k_{\mathrm{B}}T) \pm 1)\), where the \(+\) sign is the fermion case and the \(-\) sign is the boson case.

In the paper at hand, the authors take on parastatistics, the question of what happens if, besides spin, there are other "internal degrees of freedom" attached to particles and described by additional indices that obey different algebras. As they point out, this is not a new idea, but what they have done here is show that it is possible to have mathematically consistent versions of this that do not trivially reduce to fermions and bosons and can survive in, say, 3 spatial dimensions. They argue that low energy excitations (quasiparticles) of some quantum spin systems can have these properties. That's cool but not necessarily surprising - there are quasiparticles in condensed matter systems that are argued to obey a variety of exotic relations originally proposed in the world of high energy theory (Weyl fermions, Majorana fermions, massless Dirac fermions). They also put forward the possibility that elementary particles could obey these statistics as well. (Ideas transferring over from condensed matter or AMO physics to high energy theory are also not a new thing; see the Anderson-Higgs mechanism, and the concept of unparticles, which has connections to condensed matter systems where electronic quasiparticles may not be well defined.)

Fig. 1 from this paper, showing distribution functions for fermions, bosons, and the more exotic systems studied in the paper.

Interestingly, the authors work out what the distribution function can look like for these exotic particles, as shown here (Fig. 1 from the paper). The left panel shows how many particles can be in a single-particle spatial state for fermions (zero or one), bosons (up to \(\infty\)), and funky parastatistics-obeying particles of different types. The right panel shows the distribution functions for these cases. I think this is very cool. When I've taught statistical physics to undergrads, I've told the students that no one has written down a general distribution function for systems like this. Guess I'll have to revise my statements on this!
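For reference, here is a minimal sketch (Python/NumPy, arbitrary units, with assumed values of \(\mu\) and \(k_{\mathrm{B}}T\)) of the textbook distribution function \(f(\epsilon,\mu,T) = 1/(\exp((\epsilon-\mu)/k_{\mathrm{B}}T) \pm 1)\) quoted above. It reproduces only the familiar Fermi-Dirac and Bose-Einstein limits, not the parastatistics generalizations worked out in the paper:

```python
import numpy as np
import matplotlib.pyplot as plt

def occupation(eps, mu, kT, sign):
    """Average occupancy: sign=+1 for Fermi-Dirac (fermions), sign=-1 for Bose-Einstein (bosons)."""
    return 1.0 / (np.exp((eps - mu) / kT) + sign)

eps = np.linspace(0.01, 3.0, 400)   # energy in arbitrary units
kT = 0.25                           # temperature in the same units (assumed)
plt.plot(eps, occupation(eps, mu=1.0, kT=kT, sign=+1), label="Fermi-Dirac ($\\mu = 1$)")
plt.plot(eps, occupation(eps, mu=-0.1, kT=kT, sign=-1), label="Bose-Einstein ($\\mu = -0.1$)")
plt.xlabel("energy $\\epsilon$")
plt.ylabel("average occupancy $f(\\epsilon, \\mu, T)$")
plt.legend()
plt.show()
```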

2 weeks ago 2 votes
Indirect costs + potential unintended consequences

It's been another exciting week where I feel compelled to write about the practice of university-based research in the US. I've written about "indirect costs" before, but it's been a while. I will try to get readers caught up on the basics of the university research ecosystem in the US, what indirect costs are, the latest (ahh, the classic Friday evening news dump) from NIH, and what might happen. (A note up front: there are federal laws regulating indirect costs, so the move by NIH will very likely face immediate legal challenges. Update: And here come the lawsuits. Update 2: Here is a useful explanatory video.) Update 3: This post is now closed (7:53pm CST 13 Feb). When we get to the "bulldozer going to pile you lot onto the trash" level of discourse, there is no more useful discussion happening.

How does university-based sponsored technical research work in the US? Since WWII, but particularly since the 1960s, many US universities have conducted a lot of science and engineering research sponsored by US government agencies, foundations, and industry. By "sponsored", I mean there is a grant or contract between a sponsor and the university that sends funds to the university in exchange for research to be conducted by one or more faculty principal investigators, doctoral students, postdocs, undergrads, staff scientists, etc. When a PI writes a proposal to a sponsor, a budget is almost always required that spells out how much funding is being requested and how it will be spent. For example, a proposal could say, we are going to study superconductivity in 2D materials, and the budget (which comes with a budget justification) says: to do this, I need $37000 per year to pay a graduate research assistant for 12 months, plus $12000 per year for graduate student tuition, plus $8000 for the first year for a special amplifier, plus $10000 to cover materials, supplies, and equipment usage fees. Those are called direct costs.

In addition, the budget asks for funds to cover indirect costs. Indirect costs are meant to cover the facilities and administrative costs that the university will incur doing the research - that includes things like maintaining the lab building, electricity, air conditioning, IT infrastructure, research accountants to keep track of the expenses and generate financial reports, etc. Indirect costs are computed as some percentage of some subset of the direct costs (e.g., there are no indirect costs charged on grad tuition or pieces of equipment more expensive than $5K). Indirect cost rates have varied over the years but historically have been negotiated between universities and the federal government. As I wrote eight years ago, "the magic (ahem) is all hidden away in OMB Circular A21 (wiki about it, pdf of the actual doc). Universities periodically go through an elaborate negotiation process with the federal government (see here for a description of this regarding MIT), and determine an indirect cost rate for that university." Rice's indirect cost rate is 56.5% for on-campus federally or industrially sponsored projects. Off-campus rates are lower (if you're really doing the research at CERN, then logically your university doesn't need as much indirect). Foundations historically try to negotiate lower indirect cost rates, often arguing that their resources are limited and paying for administration is not what their charters endorse. The true effective indirect rate for universities is always lower than the stated number because of such negotiations.

PIs are required to submit technical progress reports, and universities are required to submit detailed financial reports, to track these grants. This basic framework has been in place for decades, and it has resulted in the growth of research universities, with enormous economic and societal benefit. Especially as industrial long-term research has waned in the US (another screed I have written before), the university research ecosystem has been hugely important in contributing to modern technological society. We would not have the internet now, for example, if not for federally sponsored research. Is it ideal? No. Are there inefficiencies? Sure. Should the whole thing be burned down? Not in my opinion, no.

"All universities lose money doing research." This is a quote from my colleague who was provost when I arrived at Rice, and it was said to me tongue-in-cheek, but also with more than a grain of truth. If you look at how much it really takes to run the research apparatus, the funds brought in via indirect costs do not cover those costs. I have always said that this is a bit like Hollywood accounting - if research were a true financial disaster, universities wouldn't do it. The fact is that research universities have been willing to subsidize the additional real indirect costs because having thriving research programs brings benefits that are not simple to quantify financially - reputation, star faculty, opportunities for their undergrads that would not exist in the absence of research, potential patent income and startup companies, etc.

Reasonable people can disagree on what is the optimal percentage number for indirect costs. It's worth noting that the indirect cost rate at Bell Labs back when I was there was something close to 100%. Think about that. In a globally elite industrial research environment, with business-level financial pressure to be frugal, the indirect rate was 100%. The fact is, if indirect cost rates are set too low, universities really will be faced with existential choices about whether to continue to support sponsored research. The overall benefits of having research programs will not outweigh the large financial costs of supporting this business.

Congress has made these true indirect costs steadily higher. Over the last decades, both because it is responsible stewardship and because it's good politics, Congress has passed laws requiring more and more oversight of research expenditures and security. Compliance with these rules has meant that universities have had to hire more administrators - for financial accounting and reporting, research security, tech transfer and intellectual property, supervisory folks for animal- and human-based research, etc. Agencies can impose their own requirements as well. Some large center-type grants from NIH/HHS and DOD require preparation and submission of monthly financial reports.

What did NIH do yesterday? NIH put out new guidance (linked above) setting their indirect cost rate to 15%, effective this coming Monday. This applies not just to new grants, but also to awards already underway. There is also a not very veiled threat in there that says: we have chosen for now not to retroactively go back to the start of current awards and ask for funds (already spent) to be returned to us, but we think we would be justified in doing so. The NIH twitter feed proudly says that this change will produce an immediate savings to US taxpayers of $4B.

What does this mean? What are the intended and possible unintended consequences? It seems very likely that other agencies will come under immense pressure to make similar changes. If all agencies do so, and nothing else changes, this will mean tens of millions fewer dollars flowing to typical research universities every year. If a university has $300M annually in federally sponsored research, then under the old rules (assume a 55% indirect rate) that would be generating $194M of direct and $106M of indirect costs. If the rate is dropped to 15% and the direct costs stay the same at $194M, then that would generate $29M of indirect costs, a net cut to the university of $77M per year. (The arithmetic is spelled out in the short sketch at the end of this post.) There will be legal challenges to all of this, I suspect.

The intended consequences are supposedly to save taxpayer dollars and force universities to streamline their administrative processes. However, given that Congress and the agencies are unlikely to lessen their reporting and oversight requirements, it's very hard for me to see how there can be some radical reduction in accounting and compliance staffs. There seems to be a sentiment that this will really teach those wealthy elite universities a lesson, that with their big endowments they should pick up more of the costs.

One unintended consequence: If this broadly goes through and sticks, universities will want to start itemizing new direct costs. For a grant like the one I described above, you could imagine asking for $1200 per year for electricity, $1000/yr for IT support, $3000/yr for lab space maintenance, etc. This will create a ton of work for lawyers, as there will be a fight over what is or is not an allowable direct cost. This will also create the need for even more accounting types to track all of this. This is the exact opposite of "streamlined" administrative processes.

A second unintended consequence: Universities for which doing research is financially much more of a marginal proposition would likely get out of those activities if they truly can't recover the costs of operating their offices of research. This is the opposite of improving the situation and student opportunities at the less elite universities.

From a purely realpolitik perspective that often appeals to legislators: everything that harms the US research enterprise effectively helps adversaries. The US benefited enormously after WWII by building a globally premier research environment. Risking that should not be done lightly.

Don't panic. There is nothing gained by freaking out. Whatever happens, it will likely be a drawn-out process. It's best to be aware of what's happening, educated about what it means, and deliberate in formulating strategies that will preserve research excellence and capabilities.

(So help me, I really want my next post to be about condensed matter physics or nanoscale science!)
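Spelling out the back-of-the-envelope from the $300M example above (a minimal sketch in plain Python; it assumes, unrealistically, that the entire direct-cost base is subject to the indirect rate, whereas real budgets exclude items like tuition and large equipment, so treat the numbers as rough):

```python
def split_award(total_millions, rate):
    """Split a total award into direct and indirect parts, assuming indirect = rate * direct."""
    direct = total_millions / (1 + rate)
    return direct, direct * rate

direct, indirect_old = split_award(300, 0.55)   # ~$194M direct, ~$106M indirect
indirect_new = direct * 0.15                    # same direct base at the new 15% rate -> ~$29M
print(f"direct ~${direct:.0f}M, old indirect ~${indirect_old:.0f}M, "
      f"new indirect ~${indirect_new:.0f}M, annual shortfall ~${indirect_old - indirect_new:.0f}M")
```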

3 weeks ago 2 votes
NSF targeted with mass layoffs, according to Politico; huge cuts in president's budget request

According to this article at Politico, there was an all-hands meeting at NSF today (at least for the engineering directorate) where staff were told that there will be layoffs of 25-50% over the next two months. This is an absolute catastrophe if it is accurately reported and comes to pass. NSF is already understaffed. This goes far beyond anything involving DEI, and is essentially a declaration that the US is planning to abrogate the federal role in supporting science and engineering research. Moreover, I strongly suspect that if this conversation is being had at NSF, it is likely being had at DOE and NIH as well.

I don't even know how to react to this, beyond encouraging my fellow US citizens to call their representatives and senators and make it clear that this would be an unmitigated disaster.

Update: It looks like the presidential budget request will be for a 2/3 cut to the NSF. Congress often goes against such recommendations, but this is certainly an indicator of what the executive branch seems to want.

4 weeks ago 2 votes

More in science

Are AIs People?

Every year, AI models get better at thinking. Could they possibly be capable of feeling? And if they are, how would we know?

18 hours ago 3 votes
A New, Chemical View of Ecosystems

Rare and powerful compounds, known as keystone molecules, can build a web of invisible interactions among species. The post A New, Chemical View of Ecosystems first appeared on Quanta Magazine

3 hours ago 2 votes
All Dams Are Temporary

[Note that this article is a transcript of the video embedded above.]

Lewis and Clark Lake, on the border between Nebraska and South Dakota, might not be a lake for much longer. Together with the dam that holds it back, the reservoir provides hydropower, flood control, and supports a robust recreational economy through fishing, boating, camping, birdwatching, hunting, swimming, and biking. All of that faces an existential threat from a seemingly innocuous menace: dirt. Around 5 million tons of it flows down this stretch of the Missouri River every year until it reaches the lake, where it falls out of suspension. Since the 1950s, when the dam was built, the sand and silt have built up a massive delta where the river comes in. The reservoir has already lost about 30 percent of its storage capacity, and one study estimated that, by 2045, it will be half full of sediment. On the surface, this seems like a silly problem, almost elementary. It's just dirt! But I want to show you why it's a slow-moving catastrophe with implications that span the globe. And I want you to think of a few solutions to it off the top of your head, because I think you'll be surprised to learn why none of the ones we've come up with so far are easy. I'm Grady, and this is Practical Engineering.

I want to clarify that the impacts dams have on sediment movement happen on both sides. Downstream, the impacts are mostly environmental. We think of rivers as carriers of water; it's right there in the definition. But if you've ever seen a river that looks like chocolate milk after a storm, you already know that they are also major movers of sediment. And the natural flow of sediment has important functions in a river system. It transports nutrients throughout the watershed. It creates habitat in riverbeds for fish, amphibians, mammals, reptiles, birds, and a whole host of invertebrates. It fertilizes floodplains, stabilizes river banks, and creates deltas and beaches on the coastline that buffer against waves and storms. Robbing the supply of sediment from a river can completely alter the ecosystem downstream from a dam. But if a river is more than just a water carrier, a reservoir is more than just a water collector. And, of course, I built a model to show how this works.

This is my acrylic flume. If you're familiar with the channel, you've probably seen it in action before. I have it tilted up so we get two types of flow. On the right, we have a stream of fast-moving water to simulate a river, and on the left, I've built up a little dam. These stoplogs raise the level of the water, slowing it down to a gentle crawl. And there's some mica powder in the water, so you can really see the difference in velocity. Now let's add some sediment. I bought these bags of colored sand, and I'm just going to dump them in the sump where my pump is recirculating this water through the flume. And watch what happens in the time lapse. The swift flow of the river carries the sand downstream, but as soon as it transitions into the slow flow of the reservoir, it starts to fall out of suspension. It's a messy process at first. The sand kind of goes all over the place. But slowly, you can see it start to form a delta right where the river meets the reservoir. Of course, the river speeds up as it climbs over the delta, so the next batch of sediment doesn't fall out until it's on the downstream end. And each batch of sand that I dump into the pump just adds to it. The mass of sediment just slowly fills the reservoir, marching toward the dam.
This looks super cool. In fact, I thought it was such a nice representation that I worked with an illustrator to help me make a print of it. We're only going to print a limited run of these, so there's a link to the store down below if you want to pick one up. But, even though it looks cool, I want to be clear that it's not a good thing. Some dams are built intentionally to hold sediment back, but in the vast majority of cases, this is an unwanted side effect of impounding water within a river valley. For most reservoirs, the whole point is to store water - for controlling floods, generating electricity, drinking, irrigation, cooling power plants, etc. So, as sediment displaces more and more of the reservoir volume, the value that reservoir provides goes down.

And that's not the only problem it causes. Making reservoirs shallower limits their use for recreation by reducing the navigable areas and fostering more unwanted algal blooms. Silt and sand can clog up gates and outlets to the structure and damage equipment like turbines. Sediment can even add forces to a dam that might not have been anticipated during design. Dirt is heavier than water. Let me prove that to you real quick. It's a hard enough job to build massive structures that can hold back water, and sediment only adds to the difficulty.

But I think the biggest challenge of this issue is that it's inevitable, right? There are no natural rivers or streams that don't carry some sediments along with them. The magnitude does vary by location. The world's a big place, and for better or worse, we've built a lot of dams across rivers. There are a lot of factors that affect how quickly this truly becomes an issue at a reservoir, mostly things that influence water-driven erosion on the land upstream. Soil type is a big one; sandy soils erode faster than silts and clays (that's why I used sand in the model). Land use is another big one. Vegetated areas like forests and grasslands hold onto their soil better than agricultural land or areas affected by wildfires. But in nearly all cases, without intervention, every reservoir will eventually fill up.

Of course, that's not good, but I don't think there's a lot of appreciation outside of a small community of industry professionals and activists for just how bad it is. Dams are among the most capital-intensive projects that we humans build. We literally pour billions of dollars into them, sometimes just for individual projects. This is kind of its own can of worms, but I'm just speaking generally that society often accepts pretty significant downsides in addition to the monetary costs, like environmental impacts and the risk of failure to downstream people and property, in return for the enormous benefits dams can provide. And sedimentation is one of those problems that happens over a lifetime, so it's easy at the beginning of a project to push it off to the next generation to fix. Well, the heyday of dam construction was roughly the 1930s through the 70s. So here we are starting to reckon with it, while being more dependent than ever on those dams. And there aren't a lot of easy answers.

To some extent, we consider sediment during design. Modern dams are built to withstand the forces, and the reservoir usually has what's called a "dead pool," basically a volume that is set aside for sediment from the beginning. Low-level gates sit above the dead pool so they don't get clogged. But that's not so much a solution as a temporary accommodation, since THIS kind of deadpool doesn't live forever.
I think for most, the simplest idea is this: if there's dirt in the lake, just take it out. Dredging soil is really not that complicated. We've been doing it for basically all of human history. And in some cases, it really is the only feasible solution. You can put an excavator on a barge, or a crane with a clamshell bucket, and just dig. Suction dredgers do it like an enormous vacuum cleaner, pumping the slurry to a barge or onto shore. But that word feasible is the key. The whole secret of building a dam across a valley is that you only have to move and place a comparatively small amount of material to get a lot of storage. Depending on the topography and design, every unit of volume of earth or concrete that makes up the dam itself might result in hundreds to tens of thousands of times that volume of storage in the reservoir. But for dredging, it's one-to-one. For every cubic meter of storage you want back, you have to remove it as soil from the reservoir. At that point, it's just hard for the benefits to outweigh the costs. There's a reason we don't usually dig enormous holes to store large volumes of water. I mean, there are a lot of reasons, but the biggest one is just cost.

Those 5 million tons of sediment that flow into Lewis and Clark Reservoir would fill around 200,000 end-dump semi-trailers. That's every year, and it's assuming you dry it out first, which, by the way, is another challenge of dredging: the spoils aren't like regular soil. For one, they're wet. That water adds volume to the spoils, meaning you have more material to haul away or dispose of. It also makes the spoils difficult to handle and move around. There are a lot of ways to dry them out or "dewater" them, as the pros say. One of the most common is to pump spoils into geotubes, large fabric bags that hold the soil inside while letting the water slowly flow out. But it's still extra work. And for two, sometimes sediments can be contaminated with materials that have washed off the land upstream. In that case, they require special handling and disposal. Many countries have pretty strict environmental rules about dredging and disposal of spoils, so you can see how it really isn't a simple solution to sedimentation, and in most cases, it often just isn't worth the cost.

Another option for getting rid of sediment is just letting it flow through the dam. This is ideal because, as I mentioned before, sediment serves a lot of important functions in a river system. If you can let it continue on its journey downstream, in many ways, you've solved two problems in one, and there are a lot of ways to do this. Some dams have a low-level outlet that consistently releases turbid water that reaches the dam. But if you remember back to the model, not all of it does. In fact, in most cases, the majority of sediment deposits furthest from the dam, and most of it doesn't reach the dam until the reservoir is pretty much full. Of course, my model doesn't tell the whole story; it's basically a 2D example with only one type of soil. As with all sediment transport phenomena, things are always changing. In fact, I decided to leave the model running with a time-lapse just to see what would happen. You can really get a sense of how dynamic this process can be. Again, it's a very cool demonstration. But in most cases, much of the sediment that deposits in a reservoir is pretty much going to stay where it falls or take years and years before it reaches the dam. So, another option is to flush the reservoir.
Just set the gates wide open to get the velocity of the water fast enough to loosen and scour the sediment, resuspending it so it can move downstream. I tried this in the model, and it worked pretty well. But again, this is just a 2D representation. In a real reservoir that has width, flushing usually just creates a narrow channel, leaving most of the sediment in place. And, inevitably, this requires drawing down the reservoir, essentially wasting all the water. And more importantly than that, it sends a massive plume of sediment-laden water downstream. I've harped on the fact that we want sediment downstream of dams and that's where it naturally belongs, but you can overdo it. Sediment can be considered a pollutant, and in fact, it's regulated in the US as one. That's why you see silt fences around construction sites. So the challenge of releasing sediment from a dam is to match the rate and quantity to what it would be if the dam weren't there. And that's a very tough thing to do because of how variable those rates can be, because sediment doesn't flow the same in a reservoir as it would in a river, because of the constraints it puts on operations (like the need to draw reservoirs down), and because of the complicated regulatory environment surrounding the release of sediments into natural waterways.

The third major option for dealing with the problem is just reducing the amount of sediment that makes it to a reservoir in the first place. There are some innovations in capturing sediment upstream, like bedload interceptors that sit in streams and remove sediment over time. You can fight fire with fire by building check dams to trap sediment, but then you've just solved reservoir sedimentation by creating reservoir sedimentation. As I mentioned, those sediment loads depend a lot not only on the soil types in the watershed, but also on the land use or cover. Soil conservation is a huge field, and it has played a big role in how we manage land in the US since the Dust Bowl of the 1930s. We have a whole government agency dedicated to the problem and a litany of strategies that reduce erosion, and many other countries have similar resources. A lot of those strategies involve maintaining good vegetation, preventing wildfires, good agricultural practices, and reforestation. But you have to consider the scale. Watersheds for major reservoirs can be huge. Lewis and Clark Reservoir's catchment is about 16,000 square miles (41,000 square kilometers). That's larger than all of Maryland! Management of an area that size is a complicated endeavor, especially considering that you have to do it over a long duration. So in many cases, there's only so much you can do to keep sediment at bay.

And really, that's just an overview. I use Lewis and Clark Reservoir as an example, but like I said, this problem extends to essentially every on-channel reservoir across the globe. And the scope of the problem has created a huge variety of solutions I could spend hours talking about. And I think that's encouraging. Even though most of the solutions aren't easy, it doesn't mean we can't have infrastructure that's sustainable over the long term, and the engineering lessons learned from past shortsightedness have given us a lot of new tools to make the best use of our existing infrastructure in the future.
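As a rough illustration of the timescales involved, here is a minimal sketch of the kind of fill-time estimate described above (plain Python; the 5 million tons/yr figure is from the transcript, while the remaining capacity, bulk density, and trap efficiency are assumed placeholder values, not data for Lewis and Clark Lake):

```python
def years_to_fill(capacity_m3, annual_sediment_tons, bulk_density_t_per_m3=1.2, trap_efficiency=0.9):
    """Years until the remaining reservoir capacity is displaced by settled sediment.

    trap_efficiency is the fraction of incoming sediment that settles in the
    reservoir rather than passing through the dam; bulk density converts
    sediment mass to deposited volume. Both are assumed values here.
    """
    annual_volume_m3 = annual_sediment_tons * trap_efficiency / bulk_density_t_per_m3
    return capacity_m3 / annual_volume_m3

# Hypothetical numbers: 500 million m^3 of remaining storage, 5 million tons/yr of incoming sediment.
print(f"~{years_to_fill(500e6, 5e6):.0f} years to fill")   # ~133 years with these assumptions
```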

yesterday 5 votes
Years After the Early Death of a Math Genius, Her Ideas Gain New Life

A new proof extends the work of the late Maryam Mirzakhani, cementing her legacy as a pioneer of alien mathematical realms. The post Years After the Early Death of a Math Genius, Her Ideas Gain New Life first appeared on Quanta Magazine

2 days ago 3 votes
The New TIGR-Tas Gene Editing System

Remember CRISPR (clustered regularly interspaced short palindromic repeats) – that new gene-editing system which is faster and cheaper than anything that came before it? CRISPR is derived from bacterial systems which use guide RNA to target a specific sequence on a DNA strand. It is coupled with a Cas (CRISPR Associated) protein which can do […] The post The New TIGR-Tas Gene Editing System first appeared on NeuroLogica Blog.

2 days ago 3 votes