Lots of news in the last few days regarding federal funding of university research:

NSF has now frozen all funding for new and continuing awards. This is not good; just how bad it is depends on the definition of "until further notice". Here is an open letter from the NSF employees union to the basically-silent-so-far National Science Board, asking for the NSB to support the agency. Here is a grassroots SaveNSF website with good information and suggestions for action - please take a look.

NSF also wants to cap indirect cost rates at 15% for higher ed institutions for new awards. This will almost certainly generate a lawsuit from the AAU and others.

Speaking of the AAU, last week there was a hearing in the Massachusetts district court regarding the lawsuits about the DOE setting indirect cost rates to 15% for active and new awards. There had already been a temporary restraining order in place nominally stopping the change; the hearing resulted in that order being extended...


More from nanoscale views

Brief items - fresh perspectives, some news bits

As usual, I hope to write more about particular physics topics soon, but in the meantime I wanted to share a sampling of news items:

First, it's a pleasure to see new long-form writing about condensed matter subjects, in an era where science blogging has unquestionably shrunk compared to its heyday. The new Quantum Matters substack by Justin Wilson (and William Shelton) looks like it will be a fun place to visit often. Similar in spirit, I've also just learned about the Knowmads podcast (here on YouTube), put out by Prachi Garella and Bhavay Tyagi, two doctoral students at the University of Houston. Fun interviews with interesting scientists about their science and how they get it done.

There have been some additional news bits relevant to the present research funding/university-government relations mess. Earlier this week, 200 business leaders published an open letter about how slashing support for university research will seriously harm US economic competitiveness. More of this, please. I continue to be surprised by how quiet technology, pharma, and finance companies are being, at least in public. Crushing US science and engineering university research will lead to serious personnel and IP shortages down the line, definitely poor for US standing. Again, now is the time to push back on legislators about cuts mooted in the presidential budget request.

The would-be 15% indirect cost rate at NSF has been found to be illegal, in a summary court judgment released yesterday. (Brief article here, pdf of the ruling here.) Along these lines, there are continued efforts for proposals about how to reform/alter indirect cost rates in a far less draconian manner. These are backed by collective organizations like the AAU and COGR. If you're interested in this, please go here, read the ideas, and give some feedback. (Note for future reference: the Joint Associations Group (JAG) may want to re-think their acronym. In local slang where I grew up, the word "jag" does not have pleasant connotations.)

The punitive attempt to prevent Harvard from taking international students has also been stopped for now in the courts.

So you want to build a science/engineering laboratory building

A very quick summary of some non-negative news developments:

The NSF awarded 500 more graduate fellowships this week, bringing the total for this year up to 1500. (Apologies for the X link.) This is still 25% lower than last year's number, and of course far below the original CHIPS and Science Act target of 3000, but it's better than the alternative. I think we can now all agree that the supposed large-scale bipartisan support for the CHIPS and Science Act was illusory.

There seem to be some initial signs of pushback on the Senate side regarding the proposed massive science funding cuts. Again, now is the time to make views known to legislators - I am told by multiple people with experience in this arena that it really can matter.

There was a statement earlier this week that apparently the US won't be going after Chinese student visas. This would carry more weight if it didn't look like US leadership was wandering ergodically through all possible things to say with no actual plan or memory.

On to the main topic of this post. Thanks to my professional age (older than dirt) and my experience (overseeing shared research infrastructure; being involved in a couple of building design and construction projects; and working on PI lab designs and build-outs), I have some key advice and lessons learned for anyone designing a new big science/engineering research building. This list is by no means complete, and I invite readers to add their insights in the comments. While it seems likely that many universities will be curtailing big capital construction projects in the near term because of financial uncertainty, I hope this may still come in handy to someone.

- Any big laboratory building should have a dedicated loading dock with central receiving. If you're spending $100M-200M on a building, this is not something that you should "value engineer" away. The long-term goal is a building that operates well for the PIs and is easy to maintain, and you're going to need to be able to bring in big crates for lab and service equipment. You should have a freight elevator adjacent to the dock. You should also think hard about what kind of equipment will have to be moved in and out of the building when designing hallways, floor layouts, and door widths. You don't want to have to take out walls, doorframes, or windows, or to need a crane to hoist equipment into upper floors because it can't get around corners.
- Think hard about process gases and storage tanks at the beginning. Will PIs need to have gas cylinders and liquid nitrogen and argon tanks brought in and out in high volumes all the time, with all the attendant safety concerns? Would you be better off getting LN2 or LAr tanks even though campus architects will say they are unsightly? Likewise, consider whether you should have building-wide service for "lab vacuum", N2 gas, compressed air, DI water, etc. If not, and PIs have those needs, you should plan ahead to deal with this.
- Gas cylinder and chemical storage: do you have enough on-site storage space for empty cylinders and back-up supply cylinders? If this is a very chemistry-heavy building, think hard about safety and storing solvents.
- Make sure you design for adequate exhaust capacity for fume hoods. Someone will always want to add more hoods. While all things are possible with huge expenditures, it's better to make sure you have capacity to spare, because adding hoods beyond the initial capacity would likely require a huge redo of the building HVAC systems (see the rough sizing sketch after this list).
- Speaking of HVAC, think really hard about controls and monitoring. Are you going to have labs that need tight requirements on temperature and humidity? When you set these up, put enough sensors of the right types in the right places, and make sure that your system is designed to work even when the outside air conditions are at their seasonal extremes (hot and humid in the summer, cold and dry in the winter). Also, consider having a vestibule (air lock) for the main building entrance - you'd rather not scoop a bunch of hot, humid air (or freezing, super-dry air) into the building every time a student opens the door.
- Still on HVAC, make sure that power outages and restarts don't lead to weird situations like having the whole building at negative pressure relative to the outside, or ductwork bulging or collapsing.
- Still on HVAC, actually think about where the condensate drains for the fan units will overflow if they get plugged up or overwhelmed. You really don't want water spilling all over a rack of networking equipment in an IT closet. Trust me.
- Chilled water: whether it's the process chilled water for the air conditioning or the secondary chilled water for lab equipment, make sure that the loop is built correctly. Incompatible metals (e.g., some genius throws in a cast iron fitting somewhere, or joints between dissimilar metals) can lead to years and years of problems down the line. Make sure lines are flushed and monitored for cleanliness, and have filters in each lab that can be checked and maintained easily.
- Electrical: design with future needs in mind. If possible, it's a good idea to have PI labs with their own isolation transformers, to try to mitigate inter-lab electrical noise issues. Make sure your electrical contractors understand the idea of having "clean" vs. "dirty" power and can set up the grounding accordingly while still being in code.
- Still on electrical, consider building-wide surge protection, and think about emergency power capacity. For those who don't know, emergency power is usually a motor-generator that kicks in after a few seconds to make sure that emergency lighting and critical systems (including lab exhaust) keep going.
- Ceiling heights, ductwork, etc.: it's not unusual for some PIs to have tall pieces of equipment. Think about how you will accommodate these. Pits in the floors of basement labs? 5 meter slab-to-slab spacing? Think also about how ductwork and conduits are routed. You don't want someone to tell you that installation of a new apparatus is going to cost a bonus $100K because shifting a duct sideways by half a meter will require a complete HVAC redesign.
- Think about the balance between lab space and office space/student seating. No one likes giant cubicle-farm student seating, but it does have capacity. In these days of Zoom and remote access to experiments, the way students and postdocs use offices is evolving, which makes planning difficult. Health and safety folks would definitely prefer not to have personnel effectively headquartered directly in lab spaces. Seriously, though, when programming a building, you need to think about how many people per PI lab space will need places to sit. I have yet to see a building initially designed with enough seating to handle all the personnel needs if every PI lab were fully occupied and at a high level of research activity.
- Think about maintenance down the line. Every major building system has some lifespan.
If a big air handler fails, is it accessible and serviceable, or would that require taking out walls or cutting equipment into pieces and disrupting the entire building? Do you want to set up a situation where you may have to do this every decade? (Asking for a friend.)
- Entering the realm of fantasy, use your vast power and influence to get your organization to emphasize preventative maintenance at an appropriate level, consistently over the years. Universities (and national labs and industrial labs) love "deferred maintenance" because kicking the can down the road can turn a possible cost issue now into someone else's problem later. Saving money in the short term can be very tempting. It's also often easier and more glamorous to raise money for the new J. Smith Laboratory for Physical Sciences than it is to raise money to replace the HVAC system in the old D. Jones Engineering Building. Avoid this temptation, or one day (inevitably when times are tight) your university will notice that it has $300M in deferred maintenance needs.

I may update this list as more items occur to me, but please feel free to add input/ideas.
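To make the fume hood exhaust-capacity point concrete, here is a back-of-envelope sketch in Python. The hood dimensions, face velocity, and hood counts are my own illustrative assumptions, not numbers from any particular building:

```python
# Rough fume hood exhaust sizing (illustrative numbers only).
# Exhaust for one hood ~ open face area (ft^2) x design face velocity (ft/min).

def hood_exhaust_cfm(sash_width_ft=6.0, sash_opening_ft=1.5, face_velocity_fpm=100.0):
    """Volumetric exhaust for a single hood, in cubic feet per minute."""
    return sash_width_ft * sash_opening_ft * face_velocity_fpm

hoods_at_opening = 40   # hypothetical day-one hood count
hoods_eventual = 60     # hypothetical count after PIs "just add one more"

print(f"Day one:  {hood_exhaust_cfm() * hoods_at_opening:,.0f} CFM")   # 36,000 CFM
print(f"Eventual: {hood_exhaust_cfm() * hoods_eventual:,.0f} CFM")     # 54,000 CFM
```

The point of the toy arithmetic: a 50% growth in hood count is a 50% growth in exhaust (and make-up air) demand, which is why spare capacity has to be designed in at the start rather than retrofitted.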

A precision measurement science mystery - new physics or incomplete calculations?

Again, as a distraction from persistently concerning news, here is a science mystery of which I was previously unaware.

The role of approximations in physics is something that very often comes as a shock to new students. There is a cultural expectation out there that because physics is all about quantitative understanding of physical phenomena (and because of the typical way we teach math and science in K-12 education), we should be able to get exact solutions to many of our attempts to model nature mathematically. In practice, though, constructing physics theories is almost always about approximations, either in the formulation of the model itself (e.g., let's consider the motion of an electron about the proton in the hydrogen atom by treating the proton as infinitely massive and of negligible size) or in solving the mathematics (e.g., we can't write an exact analytical solution of the problem when including relativity, but we can do an order-by-order expansion in powers of \(p/mc\)). Theorists have a very clear understanding of what it means to say that an approximation is "well controlled" - you know on both physical and mathematical grounds that a series expansion actually converges, for example.

Some problems are simpler than others, just by virtue of having a very limited number of particles and degrees of freedom, and some problems also lend themselves to high precision measurements. The hydrogen atom problem is an example of both features: just two spin-1/2 particles (if we approximate the proton as a lumped object), readily accessible to optical spectroscopy to measure the energy levels for comparison with theory. We can do perturbative treatments to account for other effects of relativity, spin-orbit coupling, interactions with nuclear spin, and quantum electrodynamic corrections (here and here). A hallmark of atomic physics is the remarkable precision and accuracy of these calculations when compared with experiment. (The \(g\)-factor of the electron is experimentally known to a part in \(10^{10}\) and matches calculations out to fifth order in \(\alpha = e^2/(4 \pi \epsilon_{0}\hbar c)\).) The helium atom is a bit more complicated, having two electrons and a more complicated nucleus, but over the last hundred years we've learned a lot about how to do both calculations and spectroscopy.

As explained here, there is a problem. It is possible to put helium into an excited metastable triplet state with one electron in the \(1s\) orbital, the other electron in the \(2s\) orbital, and their spins in a triplet configuration. Then one can measure the ionization energy of that system - the minimum energy required to kick an electron out of the atom and off to infinity. This energy can be calculated to seventh order in \(\alpha\), and the theorists think that they're accounting for everything, including the finite (but tiny) size of the nucleus. The issue: the calculation and the experiment differ by about 2 nano-eV. That may not sound like a big deal, but the experimental uncertainty is supposed to be a little over 0.08 nano-eV, and the uncertainty in the calculation is estimated to be 0.4 nano-eV. This works out to something like a 9\(\sigma\) discrepancy. Most recently, a quantitatively very similar discrepancy shows up in measurements performed in \(^{3}\)He rather than \(^{4}\)He. This is pretty weird.
Historically, it would seem that the most likely answer is a problem with either the measurements (though that seems doubtful, since precision spectroscopy is such a well-developed set of techniques), the calculation (though that also seems odd, since the relevant physics seems well known), or both. The exciting possibility is that somehow there is new physics at work that we don't understand, but that's a long shot. Still, it's something fun to consider, as my colleagues and I try to push back on the dismantling of US scientific research.
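As a footnote on "well controlled" expansions like the \(p/mc\) series mentioned above, this kind of order-by-order bookkeeping is easy to demonstrate mechanically. A small SymPy sketch (my own illustration, not from the post), expanding the relativistic kinetic energy:

```python
# Expand the relativistic kinetic energy sqrt((pc)^2 + (mc^2)^2) - mc^2
# in powers of p; successive terms are suppressed by powers of (p/mc)^2.
from sympy import symbols, sqrt

p, m, c = symbols('p m c', positive=True)
E_kin = sqrt((p*c)**2 + (m*c**2)**2) - m*c**2
print(E_kin.series(p, 0, 6))
# p**2/(2*m) - p**4/(8*c**2*m**3) + O(p**6)
```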

Pushing back on US science cuts: Now is a critical time

Every week has brought more news about actions that, either as a collateral effect or a deliberate goal, will deeply damage science and engineering research in the US. Put aside for a moment the tremendously important issue of student visas (where there seems to be a policy of strategic vagueness, to maximize the implicit threat that there may be selective actions). Put aside the statement from a Justice Department official that there is a general plan to "bring these universities to their knees", on the pretext that this is somehow about civil rights.

The detailed version of the presidential budget request for FY26 is now out (pdf here for the NSF portion). If enacted, it would be deeply damaging to science and engineering research in the US and the pipeline of trained students who support the technology sector.

Taking NSF first: the topline NSF budget would be cut from $8.34B to $3.28B. Engineering would be cut by 75%, Math and Physical Sciences by 66.8%. The anticipated agency-wide success rate for grants would nominally drop below 7%, though that figure is misleading (it basically takes the present average success rate and cuts it by 2/3, while some programs are already more competitive than others). In practice, many programs already have future-year obligations, and any remaining funds will have to go there, meaning that many programs would likely have no awards at all in the coming fiscal year. The NSF's CAREER program (that agency's flagship young investigator program) would go away. This plan would also close one of the LIGO observatories (see previous link). (This would be an extra bonus level of stupid, since LIGO's ability to do science relies on having two facilities, to avoid false positives and to identify event locations in the sky. You might as well say that you'll keep an accelerator running but not the detector.)

Here is the part that I think hits hardest, dollars aside: the number of people involved in NSF activities would drop by 240,000. The graduate research fellowship program would be cut by more than half. The NSF research training grant program (another vector for grad fellowships) would be eliminated.

The situation at NIH and NASA is at least as bleak. See here for a discussion from Joshua Weitz at Maryland.

This proposed dismantling of US research, and especially of the pipeline of students who support the technology sector (including medical research, computer science, AI, the semiconductor industry, chemistry and chemical engineering, and the energy industry), is astonishing in absolute terms. It also does not square with the claim of some of our elected officials and high-tech CEOs to worry about US competitiveness in science and engineering. (These proposed cuts are not about fiscal responsibility; the amount added in the proposed DOD budget alone dwarfs these cuts by more than a factor of 3.)

If you are a US citizen and think this is the wrong direction, now is the time to talk to your representatives in Congress. In the past, Congress has ignored presidential budget requests for big cuts. The American Physical Society, for example, has tools to help with this. Contacting legislators by phone is also made easy these days. From the standpoint of public outreach, Cornell has an effort backing large-scale writing of editorials and letters to the editor.

Quick survey - machine shops and maker spaces

Recent events are very dire for research at US universities, and I will write further about those, but first a quick unrelated survey for those at such institutions. Back in the day, it was common for physics and some other (mechanical engineering?) departments to have machine shops with professional staff. In the last 15-20 years, there has been huge growth in maker spaces on campuses to modernize and augment those capabilities, though maker spaces are often aimed at undergraduate design courses rather than doing work to support sponsored research projects (and grad students, postdocs, etc.). At the same time, it is now easier than ever (modulo tariffs) to upload CAD drawings to a website and get a shop in another country to ship finished parts to you.

Quick questions:

- Does your university have a traditional or maker-space-augmented machine shop available to support sponsored research?
- If so, who administers this - a department, a college/school, the office of research?
- Does the shop charge competitive rates relative to outside vendors?
- Are grad students trained to do work themselves, and are there professional machinists - how does that mix work?

Thanks for your responses. Feel free to email me if you'd prefer to discuss offline.


More in science

An Explicit Computation in Derived Algebraic Geometry

Earlier this week my friend Shane and I took a day and just did a bunch of computations. In the morning we did some differential geometry, where he told me some things about what he’s doing with symplectic lie algebroids. We went to get lunch, and then in the afternoon we did some computations in derived algebraic geometry. I already wrote a blog post on the differential geometry, and now I want to write one on the derived stuff too!

I’m faaaaar from an expert in this stuff, and I’m sure there’s lots of connections I could make to other subjects, or interesting general theorem statements which have these computations as special cases… Unfortunately, I don’t know enough to do that, so I’ll have to come back some day and write more blog posts once I know more! I’ve been interested in derived geometry for a long time now, and I’ve been sloooowly chipping away at the prerequisites – $\infty$-categories and model categories, especially via dg-things, “classical” algebraic geometry (via schemes), and of course commutative and homological algebra. I’m lucky that a lot of these topics have also been useful in my thesis work on fukaya categories and TQFTs, which has made the time spent on them easy to justify! I’ve just started reading a book which I hope will bring all these ideas together – Towards the Mathematics of Quantum Field Theory by Frédéric Paugam. It seems intense, but at least on paper I know a lot of the things he’s planning to talk about, and I’m hoping it makes good on its promise to apply its techniques to “numerous examples”. If it does, I’m sure it’ll help me understand things better so I can share them here ^_^.

In this post we’ll do two simple computations. In both cases we have a family of curves where something weird happens at a point, and in the “classical” case this weirdness manifests as a discontinuity in some invariant. But by working with a derived version of the invariant we’ll see that at most points the classical story and the derived story agree, while at the weird point the derived story contains ~bonus information~ that renders the invariant continuous after all!

Ok, let’s actually see this in action! First let’s look at what happens when we intersect two lines through the origin. This is the example given in Paugam’s book that made me start thinking about this stuff again. Let’s intersect the $x$-axis (the line $y=0$) with the line $y=mx$ as we vary $m$. This amounts to looking at the schemes $\text{Spec}(k[x,y] \big / y)$ and $\text{Spec}(k[x,y] \big / y-mx)$. Their intersection is the pullback over $\text{Spec}(k[x,y])$, and since everything in sight is affine, we can compute this pullback in $\text{Aff} = \text{CRing}^\text{op}$ as a pushout in $\text{CRing}$. Pushouts in $\text{CRing}$ are given by (relative) tensor products, and so we compute

\[k[x,y] \big / (y) \otimes_{k[x,y]} k[x,y] \big / (y-mx) \cong k[x,y] \big / (y, \ y-mx) \cong k[x] \big / (mx)\]

which is $k$ when $m \neq 0$ and is $k[x]$ when $m=0$, so we’ve learned that:

When $m \neq 0$ the intersection of \(\{y=0\}\) and \(\{y=mx\}\) is $\text{Spec}(k)$ – a single point [1]. When $m = 0$ the intersection is $\text{Spec}(k[x])$ – the whole $x$-axis.

This is, of course, not surprising at all! We didn’t really need any commutative algebra for this, since we can just look at it! The fact that the dimension of the intersection jumps suddenly is related to the lack of flatness in the family of intersections $k[x,y,m] \big /(y, y-mx) \to k[m]$. Indeed, this doesn’t look like a very flat family!
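By the way, the classical computation above is easy to check by machine. Here’s a quick SymPy sketch (my own, not from the post): a Gröbner basis for the ideal $(y, y-mx)$ shows exactly the jump at $m=0$.

```python
# Classical intersection of {y = 0} and {y = m*x}: compute a Groebner basis
# of the ideal (y, y - m*x) for a generic slope and for the degenerate one.
from sympy import symbols, groebner

x, y = symbols('x y')
for m in (3, 0):  # m = 3 stands in for "generic m != 0"
    G = groebner([y, y - m*x], x, y, order='lex')
    print(f"m = {m}: basis {G.exprs}")
# m = 3: basis [x, y]  -> k[x,y]/(x,y) = k, a single point
# m = 0: basis [y]     -> k[x,y]/(y) = k[x], the whole x-axis
```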
We can also see it isn’t flat algebraically, since tensoring with $k[x,y,m] \big / (y, y-mx)$ doesn’t preserve the exact sequence [2]

\[0 \to k[m] \xrightarrow{\ \cdot m\ } k[m] \to k \to 0.\]

In the derived world, though, things are better. It’s my impression that here flatness is a condition guaranteeing the “naive” underived computation agrees with the “correct” derived computation. That is, flat modules $M$ are those for which $M \otimes^\mathbb{L} X$ and $M \otimes X$ agree for all modules $X$! I think that one of the benefits of the derived world is that we can pretend like “all families are flat”. I would love if someone who knows more about this could chime in, though, since I’m not confident enough to really stand by that.

In our particular example, though, this is definitely true! To see this we need to compute the derived tensor product of $k[x,y] \big / (y)$ and $k[x,y] \big / (y-mx)$ as $k[x,y]$-algebras. To do this we need to know the right notion of “projective resolution” (it’s probably better to say cofibrant replacement), and we can build these from (retracts of) semifree commutative dg algebras in much the same way we build projective resolutions from free things! Here “semifree” means that our algebra is a free commutative graded algebra if we forget about the differential. Of course, “commutative” here is in the graded sense that $xy = (-1)^{\text{deg}(x) \text{deg}(y)}yx$.

For example, if we work over the base field $k$, then the free commutative graded algebra on $x_0$ (by which I mean an element $x$ living in degree $0$) is just the polynomial algebra $k[x]$ all concentrated in degree $0$. Formally, we have elements $1, \ x_0, \ x_0 \otimes x_0, \ x_0 \otimes x_0 \otimes x_0, \ldots$, and the degree of a tensor is the sum of the degrees of the things we’re tensoring, so that for $x_0$ the whole algebra ends up concentrated in degree $0$. If we look at the free graded $k$-algebra on $x_1$, we again get an algebra generated by $x_1, \ x_1 \otimes x_1, \ x_1 \otimes x_1 \otimes x_1, \ldots$ except that now we have the anticommutativity relation $x_1 \otimes x_1 = (-1)^{1 \cdot 1} x_1 \otimes x_1$ so that $x_1 \otimes x_1 = 0$. This means the free graded $k$-algebra on $x_1$ is just the algebra with $k$ in degree $0$, the vector space generated by $x$ in degree $1$, and the stipulation that $x^2 = 0$. In general, elements in even degrees contribute symmetric algebras and elements in odd degrees contribute exterior algebras to the cga we’re freely generating.

What does this mean for our example? We want to compute the derived tensor product of $k[x,y] \big / y$ and $k[x,y] \big / y-mx$. As is typical in homological algebra, all we need to do is “resolve” one of our algebras and then take the usual tensor product of chain complexes. Here a resolution means we want a semifree cdga which is quasi-isomorphic to the algebra we started with, and it’s easy to find one! Consider the cdga $k[x,y,e]$ where $x,y$ live in degree $0$ and $e$ lives in degree $1$. The differential sends $de = y$, and must send everything else to $0$ by degree considerations (there’s nothing in degree $-1$). This cdga is semifree as a $k[x,y]$-algebra, since if you forget the differential it’s just the free graded $k[x,y]$-algebra on a degree 1 generator $e$! So this corresponds to the chain complex

\[0 \to k[x,y] \cdot e \xrightarrow{\ d\ } k[x,y] \to 0\]

where $de = y$ is $k[x,y]$-linear, so that more generally $d(pe) = p(de) = py$ for any polynomial $p \in k[x,y]$.
If we tensor this (over $k[x,y]$) with $k[x,y] \big / y-mx$ (concentrated in degree $0$) we get a new complex where the interesting differential sends $pe \mapsto py$ for any polynomial $p \in k[x,y] \big / y-mx$. Some simplification gives the complex

\[0 \to k[x] \xrightarrow{\ \cdot mx\ } k[x] \to 0\]

whose homology is particularly easy to compute!

$H_0 = k[x] \big / (mx)$

$H_1 = \text{Ker}(\cdot mx)$

We note that $H_0$ recovers our previous computation: when $m \neq 0$ we have $H_0 = k$, the coordinate ring of the origin [3], and when $m=0$ we have $H_0 = k[x]$, the coordinate ring of the $x$-axis. However, now there’s more information stored in $H_1$! In the generic case where $m \neq 0$, the differential $\cdot mx$ is injective so that $H_1$ vanishes, and our old “classical” computation saw everything there is to see. It’s not until we get to the singular case where $m=0$ that we see $H_1 = \text{Ker}(\cdot mx)$ becomes the kernel of the $0$-map, which is all of $k[x]$! The version of “dimension” for chain complexes which is invariant under quasi-isomorphism is the Euler characteristic, and we see that now the Euler characteristic is constantly $0$ for the whole family!

Next let’s look at some kind of “hidden smoothness” by examining the singular curve $y^2 = x^3$. Just for fun, let’s look at another family of (affine) curves $y^2 = x^3 + tx$, which are smooth whenever $t \neq 0$. We’ll again show that in the derived world the singular fibre looks more like the smooth fibres. Smoothness away from the $t=0$ fibre is an easy computation, since we compute the jacobian of the defining equation $y^2 - x^3 - tx$ to be $\langle -3x^2 - t, 2y \rangle$, and for $t \neq 0$ this is never $\langle 0, 0 \rangle$ at any point on our curve [4] (we’ll work in characteristic 0 for safety). Of course, when $t=0$, $\langle -3x^2, 2y \rangle$ vanishes at the origin, so that curve has a singular point there.

To see the singularity, let’s compute the tangent space at $(0,0)$ for every curve in this family. We’ll do that by computing the space of maps from the “walking tangent vector” $\text{Spec}(k[\epsilon] \big / \epsilon^2)$ to our curve which deform the map from $\text{Spec}(k)$ to our curve representing our point of interest $(0,0)$. Since everything is affine, we turn the arrows around and see we want to compute the space of algebra homs

\[k[x,y] \big / (y^2 - x^3 - tx) \to k[\epsilon] \big / \epsilon^2\]

whose composition with the map $k[\epsilon] \big / \epsilon^2 \to k$ sending $\epsilon \mapsto 0$ becomes the map $k[x,y] \big / (y^2 - x^3 - tx) \to k$ sending $x$ and $y$ to $0$. Since $k[x,y] \big / (y^2 - x^3 - tx)$ is a quotient of a free algebra, this is easy to do! We just consult the universal property, and we find a hom $k[x,y] \big / (y^2 - x^3 - tx) \to k[\epsilon] \big / \epsilon^2$ is just a choice of image $a+b\epsilon$ for $x$ and $c+d\epsilon$ for $y$, so that the equation $y^2 - x^3 - tx$ is “preserved” in the sense that $(c+d\epsilon)^2 - (a+b\epsilon)^3 - t(a+b\epsilon)$ is $0$ in $k[\epsilon] \big / \epsilon^2$. Then the “deforming the origin” condition says that moreover when we set $\epsilon = 0$ our composite has to send $x$ and $y$ to $0$. Concretely that means we must choose $a=c=0$ in the above expression, so that finally:

The tangent space at the origin of $k[x,y] \big / (y^2 - x^3 - tx)$ is the space of pairs $(b,d)$ so that $(d \epsilon)^2 - (b \epsilon)^3 - t(b \epsilon) = 0$ in $k[\epsilon] \big / \epsilon^2$.

Of course, this condition holds if and only if $tb=0$ (see the quick symbolic check below), so that: When $t \neq 0$ the tangent space is the space of pairs $(b,d)$ with $b=0$, which is one dimensional.
When $t = 0$ the tangent space is the space of pairs $(b,d)$ with no further conditions, which is two dimensional! Since we’re looking at a curve, we expect the tangent space to be $1$-dimensional, and this is why we say there’s a singularity at the origin for the curve $y^2 = x^3$…

But what happens in the derived world? Now we want to compute the derived homspace. As before, a cofibrant replacement of our algebra is easy to find: it’s just $k[x,y,e]$ where $x$ and $y$ have degree $0$, $e$ has degree $1$, and $de = y^2 - x^3 - tx$. Note that in our last example we were looking at semifree $k[x,y]$-algebras, but now we just want $k$-algebras! So now this is the free graded $k$-algebra on 3 generators $x,y,e$, and our chain complex is

\[0 \to k[x,y] \cdot e \xrightarrow{\ d\ } k[x,y] \to 0, \qquad de = y^2 - x^3 - tx.\]

We want to compute the derived $\text{Hom}^\bullet(-,-)$ from this algebra to $k[\epsilon] \big / \epsilon^2$, concentrated in degree $0$. The degree $0$ maps are given by diagrams that don’t need to commute [5]! Of course, such maps are given by pairs $(a + b \epsilon, c + d \epsilon)$, which are the images of $x$ and $y$. As before, since we want the tangent space at $(0,0)$ we need to restrict to those pairs with $a=c=0$, so that $\text{Hom}^0(k[x,y] \big / y^2 - x^3 - tx, \ k[\epsilon] \big / \epsilon^2) = k^2$, generated by the pairs $(b,d)$. Next we look at degree $-1$ maps, which are given by a pair $r + s\epsilon$, the image of $e$. Again, these need to restrict to the $0$ map when we set $\epsilon=0$, so that $r=0$ and we compute $\text{Hom}^{-1}(k[x,y] \big / y^2 - x^3 - tx, \ k[\epsilon] \big / \epsilon^2) = k$, generated by $s$.

So our hom complex is a two-term complex where the interesting differential sends degree $0$ to degree $-1$ and is given by $df = d_{k[\epsilon] \big / \epsilon^2} \circ f - f \circ d_{k[x,y] \big / y^2-x^3-tx}$. So if $f$ is the function sending $x \mapsto b \epsilon$ and $y \mapsto d \epsilon$, then, since the differential on $k[\epsilon] \big / \epsilon^2$ vanishes, $df$ is determined by $f(de) = (d\epsilon)^2 - (b\epsilon)^3 - t(b\epsilon) = -tb\,\epsilon$. So phrased purely in terms of vector spaces we see our hom complex is (living in degrees $0$ and $-1$)

\[k^2 \xrightarrow{\ (-t \ \ 0)\ } k\]

and we compute

$H^0 = \text{Ker}\,(-t \ \ 0)$

$H^{-1} = \langle s \rangle \big / \text{Im}\,(-t \ \ 0)$

When $t \neq 0$, our map is full rank, so that $H^0$ is the pairs $(b,d)$ with $b=0$ – agreeing with the classical computation. Then $H^{-1} = 0$, so again we learn nothing new in the smooth fibres. When $t=0$, however, our map is the $0$ map, so that $H^0$, the space of all pairs $(b,d)$, is two dimensional – again agreeing with the classical computation! But now we see the $H^{-1}$ term, which is $1$ dimensional, spanned by $s$. Again, in the derived world, the Euler characteristic is constantly $1$ along the whole family!

There’s something a bit confusing here, since there seem to be two definitions of “homotopical smoothness”… On the one hand, in the noncommutative geometry literature, we say that a dga $A$ is “smooth” if it’s perfect as a bimodule over itself. On the other hand, though, I’ve also heard another notion of “homotopically smooth” where we say the cotangent complex is perfect. I guess it’s possible (likely?) that these should be closely related by some kind of HKR theorem, but I don’t know the details. Anyways, I’m confused because we just computed that the curve $y^2 = x^3$ has a perfect tangent complex, which naively would make me think its cotangent complex is also perfect. But this shouldn’t be the case, since I also remember reading that a classical scheme is smooth in the sense of noncommutative geometry if and only if it’s smooth classically, which $y^2 = x^3$ obviously isn’t!
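Before wrapping up, here is the promised symbolic check of the first-order condition from the tangent space computation. It’s a tiny SymPy sketch (mine, not the post’s) expanding $(d\epsilon)^2 - (b\epsilon)^3 - t(b\epsilon)$ and truncating at order $\epsilon^2$:

```python
# Check: substituting x = b*eps, y = d*eps into y^2 - x^3 - t*x and working
# modulo eps^2 (dual numbers) should leave only the condition t*b = 0.
from sympy import symbols

b, d, t, eps = symbols('b d t epsilon')
f = (d*eps)**2 - (b*eps)**3 - t*(b*eps)
print(f.series(eps, 0, 2).removeO())   # -> -b*t*epsilon
# For t != 0 this forces b = 0 (a 1-dimensional tangent space, spanned by d);
# for t = 0 the condition is vacuous and both b and d survive (2-dimensional).
```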
Now that I’ve written these paragraphs and thought harder about things, I think I was too quick to move between perfectness of the tangent complex and perfectness of the cotangent complex, but I should probably compute the cotangent complex and the bimodule resolution to be sure… Unfortunately, that will have to wait for another day! I’ve spent probably too many hours over the last few days writing this and my other posts on lie algebroids. I have some kind of annoying hall algebra computations that are calling my name, and I have an idea about a new family of model categories which might be of interest… But checking that something is a model category is usually hard, so I’ve been dragging my feet a little bit. Plus, I need to start packing soon! I’m going to Europe for a bunch of conferences in a row! First a noncommutative geometry summer school hosted by the institute formerly known as MSRI, then CT of course, and lastly a cute representation theory conference in Bonn. I’m sure I’ll learn a bunch of stuff I’ll want to talk about, so we’ll chat soon ^_^. Take care, all!

[1] In fact we know more! This $k$ is really $k[x,y] \big / (x=0,y=0)$, so we know the intersection point is $(0,0)$. ↩

[2] Indeed, after tensoring we get

\[k[x,y,m] \big / (y, y-mx) \xrightarrow{\ \cdot m\ } k[x,y,m] \big / (y, y-mx) \to k[x,y,m] \big / (y, y-mx, m) \to 0\]

since here $k \cong k[m] \big / (m)$. But then we can simplify these to

\[k[x,m] \big / (mx) \xrightarrow{\ \cdot m\ } k[x,m] \big / (mx) \to k[x] \to 0\]

and indeed the leftmost map (multiplication by $m$) is not injective! The kernel is generated by $x$. ↩

[3] Again, if you’re more careful with where this ring comes from, rather than just its isomorphism class, it’s $k[x,y] \big / (x,y)$, the quotient by the maximal ideal $(x,y)$ which represents the origin. ↩

[4] The only place it could possibly be $\langle 0, 0 \rangle$ is when $y=0$, but the points on our curve with this property satisfy $x^3+tx=y^2=0$, so that when $t \neq 0$ the solutions are $(x,y) = (0,0), \ (\sqrt{-t}, 0), \ (-\sqrt{-t}, 0)$. But at all three of these points $\langle -3x^2 - t, 2y \rangle \neq \langle 0, 0 \rangle$. ↩

[5] This is a misconception that I used to have, and which basically everyone I’ve talked to had at one point. Remember that dg-maps are all graded maps! Not just those which commute with the differential! The key point is that the differential on $\text{Hom}^\bullet(A,B)$ sends a degree $n$ map $f$ to

\[df = d_B \circ f - (-1)^n f \circ d_A\]

so that $df = 0$ if and only if $d_B f = (-1)^n f d_A$, if and only if $f$ commutes with the differential (in the appropriate graded sense). This means that, for instance, $H^0$ of the hom complex recovers from all graded maps exactly $\text{Ker}(d) \big / \text{Im}(d)$, which are the maps commuting with $d$ modulo chain homotopy! ↩

Is Mathematics Mostly Chaos or Mostly Order?

Two new notions of infinity challenge a long-standing plan to define the mathematical universe. (Quanta Magazine)

Renewables Did Not Cause Spanish Blackout, Investigations Find

In the aftermath of a massive blackout that hit Spain and Portugal in April, some pundits were quick to blame wind and solar for the loss of power. But official inquiries have found that a shortfall in conventional power led to the outages. (E360)

Explicitly Computing The Action Lie Algebroid for $SL_2(\mathbb{R}) \curvearrowright \mathbb{R}^2$

This is going to be a very classic post, where we’ll chat about a computation my friend Shane did earlier today. His research is largely about symplectic lie algebroids, and recently we’ve been trying to understand the rich connections between poisson geometry, lie algebroids, lie groupoids, and eventually maybe fukaya categories of lie groupoids (following some ideas of Pascaleff). Shane knows much more about this stuff than I do, so earlier today he helped me compute a super concrete example. We got stuck at some interesting places along the way, and I think it’ll be fun to write this up, since I haven’t seen these kinds of examples written down in many places. Let’s get started!

First, let’s recall what an action groupoid is. This is one of the main examples I have in my head for lie groupoids, which is why Shane and I started here. If $G$ is a group acting on a set $X$, then we get a groupoid

\[G \times X \rightrightarrows X\]

where we think of $X$ as the set of objects and $G \times X$ as the set of arrows, and where:

- the source of $(g,x)$ is just $x$
- the target of $(g,x)$ is $g \cdot x$, using the action of $G$ on $X$
- the identity arrow at $x$ is $(1,x)$
- if $(g,x) : x \to y$ and $(h,y) : y \to z$, then the composite is $(hg,x) : x \to z$
- if $(g,x) : x \to y$ then its inverse is $(g^{-1},y) : y \to x$.

Action groupoids are interesting and important because they allow us to work with “stacky” nonhausdorff quotient spaces like orbifolds in a very fluent way (see the toy encoding sketched below). See, for instance, Moerdijk’s Orbifolds as Groupoids: An Introduction, which shows how you can easily define covering spaces, vector bundles, principal bundles, etc. on orbifolds using the framework of groupoids. The point is that a groupoid $E \rightrightarrows X$ is a “proof relevant equivalence relation” in the sense that $E$ keeps track of all the “proofs” or “witnesses” that two points in $X$ are identified, rather than just the statement that two points are identified. Indeed, we think of $e \in E$ as a witness identifying $s(e)$ and $t(e)$ in $X$. Then reflexivity comes from the identity arrow, symmetry comes from the inverse, and transitivity comes from composition. The “proof relevance” is just the statement that there might be multiple elements $e_1$ and $e_2$ which both identify $x$ and $y$ (that is, $s(e_1) = s(e_2) = x$ and $t(e_1) = t(e_2) = y$). By keeping track of this extra information that “$x$ and $y$ are related in multiple ways” we’re able to work with a smooth object (in the sense that both $E$ and $X$ are smooth) instead of the nonsmooth, or even nonhausdorff, quotient $X/E$.

Next, we know that a central part of the study of any lie group $G$ is its lie algebra $\mathfrak{g}$. This is a “linearization” of $G$ in the literal sense that it’s a vector space instead of a manifold, which makes it much easier to study. But through the lie bracket $[-,-]$ it remembers enough of the structure of $G$ to make it an indispensable tool for understanding $G$. See, for instance, Bump’s excellent book Lie Groups or Stillwell’s excellent Naive Lie Theory. With this in mind, just playing linguistic games we might ask if there’s a similarly central notion of “lie algebroid” attached to any “lie groupoid”. The answer turns out to be yes, but unlike the classical case, not every lie algebroid comes from a lie groupoid! We say that not every lie algebroid is integrable.
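As an aside, the bookkeeping in the action groupoid definition above is simple enough to encode on a finite toy example. Here’s a little Python sketch (entirely my own, not from the post) with $\mathbb{Z}/3$ acting on $\mathbb{Z}/6$:

```python
# Toy action groupoid G x X ==> X for G = Z/3 acting on X = Z/6 by g . x = x + 2g.
from itertools import product

G, X = range(3), range(6)
act = lambda g, x: (x + 2 * g) % 6

source = lambda g, x: x          # s(g, x) = x
target = lambda g, x: act(g, x)  # t(g, x) = g . x

def compose(h, y, g, x):
    """Composite of (g, x) : x -> g.x followed by (h, y) : y -> h.y; needs y = g.x."""
    assert y == target(g, x), "arrows not composable"
    return ((h + g) % 3, x)      # (h, y) o (g, x) = (hg, x)

# Sanity checks of the groupoid axioms on every arrow:
for (g, x), h in product(product(G, X), G):
    comp = compose(h, target(g, x), g, x)
    assert source(*comp) == x                  # composite starts where (g, x) starts
    assert target(*comp) == act(h, act(g, x))  # and ends where (h, g.x) ends
for g, x in product(G, X):
    assert compose((-g) % 3, target(g, x), g, x) == (0, x)  # inverse, then identity at x
```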
This, finally, brings us to the computation that Shane and I did together: as explicitly as possible, let’s compute the lie algebroid coming from the action groupoid of $SL_2(\mathbb{R}) \curvearrowright \mathbb{R}^2$.

Let’s start with a few definitions coming from Crainic, Fernandes, and Mărcuț’s Lectures on Poisson Geometry. A Lie Algebroid on a manifold $M$ is a vector bundle $A \to M$ whose space of global sections $\Gamma(A)$ has a lie bracket $[-,-]_A$, equipped with an anchor map $\rho : A \to TM$ to the tangent bundle of $M$ that’s compatible with the lie bracket in the sense that for $\alpha, \beta \in \Gamma(A)$ and $f \in C^\infty(M)$ we have

\[[\alpha, f \cdot \beta]_A = f \cdot [\alpha,\beta]_A + (\rho(\alpha) f) \cdot \beta\]

Then, given a lie groupoid $E \rightrightarrows M$ with source and target maps $s,t : E \to M$ and identity map $r : M \to E$, its lie algebroid is explicitly given by letting:

- $A = r^* \text{Ker}(dt)$, which is a vector bundle over $M$
- $[-,-]_A$ come from the usual bracket on $TE$ (since $\text{Ker}(dt)$ is a subbundle of $TE$)
- $\rho : A \to TM$ be given by $ds$ (which is a map $TE \to TM$, thus restricts to a map on $\text{Ker}(dt)$, and so gives a map on the pullback $A = r^* \text{Ker}(dt)$)

If this doesn’t make perfect sense, that’s totally fine! It didn’t make sense to me either, which is why I wanted to work through an example slowly with Shane. For us the action groupoid is

\[SL_2(\mathbb{R}) \times \mathbb{R}^2 \rightrightarrows \mathbb{R}^2\]

where $t(M,v) = v$ and $s(M,v) = Mv$.

First let’s make sense of $A = r^* \text{Ker}(dt)$. We know that $dt : T(SL_2(\mathbb{R}) \times \mathbb{R}^2) \to T\mathbb{R}^2$, that is, $dt : TSL_2(\mathbb{R}) \times T\mathbb{R}^2 \to T\mathbb{R}^2$, and is the derivative of the map $t : (M,v) \mapsto v$. If we perturb a particular point $(M_0, v_0)$ to first order, say to $(M_0 + \delta M, v_0 + \delta v)$, then projecting gives $v_0 + \delta v$, which gives $\delta v$ to first order. So the kernel of this map consists of all the pairs $(\delta M, \delta v)$ with $\delta v = 0$, and we learn

\[\text{Ker}(dt) = TSL_2(\mathbb{R}) \times \mathbb{R}^2\]

where we’re identifying the zero section of $T\mathbb{R}^2$ with $\mathbb{R}^2$ itself. Now to get $A$ we’re supposed to apply $r^*$ to this. By definition, this means the fibre of $r^* \text{Ker}(dt)$ above the point $v$ is supposed to be the fibre of $\text{Ker}(dt)$ over $r(v) = (\text{id},v)$. But this fibre is \(T_{\text{id}}SL_2(\mathbb{R}) \times \{v\}\), so that the pullback is a trivial bundle with fibre $\mathfrak{sl}_2(\mathbb{R})$:

\[A = r^* \text{Ker}(dt) \cong \mathfrak{sl}_2(\mathbb{R}) \times \mathbb{R}^2\]

viewed as a trivial fibre bundle over $\mathbb{R}^2$. (Recall that the lie algebra $\mathfrak{sl}_2(\mathbb{R})$ is defined to be the tangent space of $SL_2(\mathbb{R})$ at the identity.)

Next we want to compute the bracket \([-,-]_A\). Thankfully this is easy, since it’s the restriction of the bracket on $TSL_2(\mathbb{R}) \times T\mathbb{R}^2$. Of course, an element of $A$ comes from $T_\text{id}SL_2(\mathbb{R}) = \mathfrak{sl}_2(\mathbb{R})$ and the zero section of $T\mathbb{R}^2$, so we get the usual lie bracket on $\mathfrak{sl}_2(\mathbb{R})$ in the first component and the restriction of the bracket on $T\mathbb{R}^2$ to \(\{0\}\) in the second slot. This zero section piece isn’t doing anything interesting, and so after identifying this bundle with the trivial bundle $\mathfrak{sl}_2(\mathbb{R}) \times \mathbb{R}^2$ we see the bracket is just the usual bracket on $\mathfrak{sl}_2(\mathbb{R})$ taken fibrewise.

Lastly, we want to compute the anchor map.
This caused me and Shane a bit of trouble, since we need to compute $ds$ at the identity, and we realized neither of us knew a general approach for computing a chart for $SL_2(\mathbb{R})$ near $\text{id}$! In hindsight, for a $k$-dimensional submanifold of $\mathbb{R}^n$ the idea is obvious: just project onto some well-chosen $k$-subset of the usual coordinate directions! I especially wish it were easier to find some examples of this by googling, so I’m breaking it off into a sister blog post which will hopefully show up earlier in search results than this one will. The punchline for us was that $SL_2(\mathbb{R})$ is defined to be

\[SL_2(\mathbb{R}) = \left\{ \begin{pmatrix} a & b \\ c & d \end{pmatrix} \ \middle| \ ad - bc = 1 \right\}\]

so a chart near a point $(a_0, b_0, c_0, d_0)$ can be computed by looking at the jacobian of $f : \mathbb{R}^4 \to \mathbb{R}$ with $f(a,b,c,d) = ad-bc$, evaluated at the matrix of interest $(a_0, b_0, c_0, d_0)$. Since $1$ is a regular value for $f$, the jacobian will have at least one nonzero entry, and by locally inverting that coordinate we’ll get our desired chart! Since we want a chart near the identity, we compute the jacobian of $ad-bc$ at the identity to be

\[\left( \tfrac{\partial f}{\partial a}, \tfrac{\partial f}{\partial b}, \tfrac{\partial f}{\partial c}, \tfrac{\partial f}{\partial d} \right) = (d, \ -c, \ -b, \ a) \Big |_{\text{id}} = (1, 0, 0, 1).\]

We see that $\frac{\partial f}{\partial a} \neq 0$, so that locally near \(\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}\) the manifold looks like

\[\left\{ \begin{pmatrix} a & b \\ c & \frac{1+bc}{a} \end{pmatrix} \ \middle| \ a \neq 0 \right\}\]

and this is our desired local chart!

Now… why were we doing this? We wanted to compute the anchor map from $A = r^* \text{Ker}(dt)$ to the tangent bundle $T\mathbb{R}^2$. This is supposed to be $ds$ (restricted to this subbundle). So how can we compute this? In the main body, I’ll make some identifications that make the presentation cleaner and still show what’s going on. If you want a very, very explicit version of this computation, take a look at this footnote [1].

Well, $s : SL_2(\mathbb{R}) \times \mathbb{R}^2 \to \mathbb{R}^2$ is the map sending $(M,v) \mapsto Mv$. Explicitly, if we fix an $(x,y)$, this is the map $M \mapsto M \begin{pmatrix} x \\ y \end{pmatrix}$. From the previous discussion, we can write this as a map in local charts \(\{(a,b,c) \in \mathbb{R}^3 \mid a \neq 0 \} \to \mathbb{R}^2\) given by

\[(a,b,c) \mapsto \left( ax + by, \ cx + \frac{1+bc}{a} y \right)\]

and now it’s very easy to compute $ds$. It’s just the jacobian

\[\begin{pmatrix} x & y & 0 \\ -\frac{1+bc}{a^2} y & \frac{c}{a} y & x + \frac{b}{a} y \end{pmatrix}\]

but we only care about the value at the identity, since $A$ comes from pulling back this bundle along $r : v \mapsto (\text{id},v)$. So evaluating at $(a,b,c) = (1,0,0)$ we find our anchor map is

\[\rho_{(x,y)} = \begin{pmatrix} x & y & 0 \\ -y & 0 & x \end{pmatrix}.\]

Moreover, by differentiating \(\begin{pmatrix} a & b \\ c & \frac{1+bc}{a} \end{pmatrix}\) in the $a$, $b$, and $c$ directions and evaluating at $(1,0,0)$ we see that the basis $(\partial_a, \partial_b, \partial_c)$ for the tangent space at $(1,0,0)$ in our chart gets carried to the following basis for the tangent space at the identity matrix in $SL_2(\mathbb{R})$:

\[\begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}, \quad \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, \quad \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}\]

which we recognize as $H$, $E$, and $F$ respectively. But this is great! Now we know that the lie algebroid of the action groupoid of $SL_2(\mathbb{R}) \curvearrowright \mathbb{R}^2$ is given by the trivial bundle $\mathfrak{sl}_2(\mathbb{R}) \times \mathbb{R}^2 \to \mathbb{R}^2$ with the usual bracket on $\mathfrak{sl}_2$ taken fibrewise, and the anchor map $\rho : \mathfrak{sl}_2 \times \mathbb{R}^2 \to T\mathbb{R}^2$ sending (in the fibre over $(x,y)$)

$\rho_{(x,y)}(H) = (x, -y)$

$\rho_{(x,y)}(E) = (y, 0)$

$\rho_{(x,y)}(F) = (0, x)$

(which is the standard representation of $\mathfrak{sl}_2(\mathbb{R})$ on $\mathbb{R}^2$, viewed in a kind of bundle-y way).
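Since the chart-and-jacobian steps above are exactly the kind of thing a computer algebra system is good at, here’s a short SymPy cross-check (my own sketch, not from the post) of the anchor computation:

```python
# Verify the anchor map: differentiate s(a,b,c; x,y) = (ax + by, cx + (1+bc)/a * y)
# in the chart directions (a, b, c) and evaluate at the identity (a,b,c) = (1,0,0).
from sympy import symbols, Matrix

a, b, c, x, y = symbols('a b c x y')
s = Matrix([a*x + b*y, c*x + (1 + b*c)/a * y])
anchor = s.jacobian([a, b, c]).subs({a: 1, b: 0, c: 0})
print(anchor)   # Matrix([[x, y, 0], [-y, 0, x]])
# Columns are rho(H) = (x, -y), rho(E) = (y, 0), rho(F) = (0, x),
# matching the standard representation of sl_2 on R^2.
```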
Thanks for hanging out, all! It was fun to go back to my roots and write a post that’s “just” a computation. This felt tricky while Shane and I were doing it together, but writing it up now it’s starting to feel a lot simpler.

There’s still some details I don’t totally understand, which I think will be cleared up by just doing more computations like this, haha. Also, sorry for not spending a lot of time motivating lie algebroids or actually doing something with the result of this computation… I actually don’t totally know what we can do with lie algebroids either! This was just a fun computation I did with a friend, trusting that he has good reasons to care. I’ve been meaning to pester him into guest-writing a blog post (or very confidently holding my hand while I write the blog post) about lie algebroids and why you should care. As I understand it, they give you tools for studying PDEs on manifolds which have certain mild singularities. This is super interesting, and obviously useful, and so I’d love to spend the time to better understand what’s going on. Stay safe, and if you’re anywhere like Riverside try to stay cool!

[1] If you want to be super duper explicit, our function $s$ sends $SL_2(\mathbb{R}) \times \mathbb{R}^2 \to \mathbb{R}^2$, which in a chart around the identity looks like the function

\[(a,b,c,x,y) \mapsto \left( ax+by, \ cx+\frac{1+bc}{a}y \right).\]

Now we differentiate to get $ds : TSL_2(\mathbb{R}) \times T\mathbb{R}^2 \to T\mathbb{R}^2$ sending $(a,b,c,\delta a,\delta b,\delta c, x, y, \delta x, \delta y)$ to $(ax+by, \ cx+\frac{1+bc}{a}y, \ ?, \ ?)$, where the two $?$s are the output of matrix multiplication against the jacobian of $s$. Then we’re supposed to restrict this to $\text{Ker}(dt)$, which consists of the points $(a,b,c, \delta a, \delta b, \delta c, x, y, 0, 0)$. Since $\delta x = \delta y = 0$, we don’t even bother writing those entries of the matrix, and that’s how we get $ds$ as written in the main body. Now, as in the main body, we pull this bundle back along $r : v \mapsto (\text{id},v)$, which in our chart is $(x,y) \mapsto (1,0,0,x,y)$, so that our bundle $A$ (with its structure map to $\mathbb{R}^2$) is the restriction over the identity, which, in the main text, we identify with \(\mathfrak{sl}_2 \times \mathbb{R}^2 = \{(\delta a, \delta b, \delta c, x, y)\} \to \{(x,y)\} = \mathbb{R}^2\). So we learn that our anchor map is the restriction of the above map $ds$ to this pulled-back subbundle, where, again, the $?$s are the result of the matrix multiplication, which brings us back to the result of the main body. Of course, most working differential geometers wouldn’t write out this much detail to do this computation! I think it might be helpful to some newcomers to the field, and I certainly found it clarifying to write down exactly what happened, even if Shane and I weren’t nearly this careful when we were doing this together at a whiteboard. ↩
