As usual, I hope to write more about particular physics topics soon, but in the meantime I wanted to share a sampling of news items: First, it's a pleasure to see new long-form writing about condensed matter subjects, in an era where science blogging has unquestionably shrunk compared to its heyday. The new Quantum Matters substack by Justin Wilson (and William Shelton) looks like it will be a fun place to visit often. Similar in spirit, I've also just learned about the Knowmads podcast (here on YouTube), put out by Prachi Garella and Bhavay Tyagi, two doctoral students at the University of Houston. Fun interviews with interesting scientists about their science and how they get it done. There have been some additional news bits relevant to the present research funding/university-govt relations mess. Earlier this week, 200 business leaders published an open letter about how slashing support for university research will seriously harm US economic competitiveness. More of this, please. I continue to be surprised by how quiet technology-related, pharma, and finance companies are being, at least in public. Crushing US science and engineering university research will lead to serious personnel and IP shortages down the line, which is definitely poor for US standing. Again, now is the time to push back on legislators about cuts mooted in the presidential budget request. The would-be 15% indirect cost rate at NSF has been found to be illegal, in a summary court judgment released yesterday. (Brief article here, pdf of the ruling here.) Along these lines, there are continued efforts to put forward proposals for reforming indirect cost rates in a far less draconian manner. These are backed by collective organizations like the AAU and COGR. If you're interested in this, please go here, read the ideas, and give some feedback. (Note for future reference: the Joint Associations Group (JAG) may want to re-think their acronym. 
In local slang where I grew up, the word "jag" does not have pleasant connotations.) The punitive attempt to prevent Harvard from taking international students has also been stopped for now in the courts.
Earlier this week my friend Shane and I took a day and just did a bunch of computations. In the morning we did some differential geometry, where he told me some things about what he’s doing with symplectic lie algebroids. We went to get lunch, and then in the afternoon we did some computations in derived algebraic geometry. I already wrote a blog post on the differential geometry, and now I want to write one on the derived stuff too! I’m faaaaar from an expert in this stuff, and I’m sure there’s lots of connections I could make to other subjects, or interesting general theorem statements which have these computations as special cases… Unfortunately, I don’t know enough to do that, so I’ll have to come back some day and write more blog posts once I know more! I’ve been interested in derived geometry for a long time now, and I’ve been sloooowly chipping away at the prerequisites – $\infty$-categories and model categories, especially via dg-things, “classical” algebraic geometry (via schemes), and of course commutative and homological algebra. I’m lucky that a lot of these topics have also been useful in my thesis work on fukaya categories and TQFTs, which has made the time spent on them easy to justify! I’ve just started reading a book which I hope will bring all these ideas together – Towards the Mathematics of Quantum Field Theory by Frédéric Paugam. It seems intense, but at least on paper I know a lot of the things he’s planning to talk about, and I’m hoping it makes good on its promise to apply its techniques to “numerous examples”. If it does, I’m sure it’ll help me understand things better so I can share them here ^_^. In this post we’ll do two simple computations. In both cases we have a family of curves where something weird happens at a point, and in the “classical” case this weirdness manifests as a discontinuity in some invariant. 
But by working with a derived version of the invariant we’ll see that at most points the classical story and the derived story agree, but at the weird point the derived story contains ~bonus information~ that renders the invariant continuous after all! Ok, let’s actually see this in action! First let’s look at what happens when we intersect two lines through the origin. This is the example given in Paugam’s book that made me start thinking about this stuff again. Let’s intersect the $x$-axis (the line $y=0$) with the line $y=mx$ as we vary $m$. This amounts to looking at the schemes $\text{Spec}(k[x,y] \big / y)$ and $\text{Spec}(k[x,y] \big / y-mx)$. Their intersection is the pullback, and so, since everything in sight is affine, we can compute this pullback in $\text{Aff} = \text{CRing}^\text{op}$ as a pushout in $\text{CRing}$. Pushouts in $\text{CRing}$ are given by (relative) tensor products, and so we compute \(k[x,y] \big / (y) \otimes_{k[x,y]} k[x,y] \big / (y-mx) \cong k[x,y] \big / (y, y-mx) \cong k[x] \big / (mx)\), which is $k$ when $m \neq 0$ and is $k[x]$ when $m=0$, so we’ve learned that:

- When $m \neq 0$ the intersection of \(\{y=0\}\) and \(\{y=mx\}\) is $\text{Spec}(k)$ – a single point1.
- When $m = 0$ the intersection is $\text{Spec}(k[x])$ – the whole $x$-axis.

This is, of course, not surprising at all! We didn’t really need any commutative algebra for this, since we can just look at it! The fact that the dimension of the intersection jumps suddenly is related to the lack of flatness in the family of intersections $k[x,y,m] \big /(y, y-mx) \to k[m]$. Indeed, this doesn’t look like a very flat family! We can also see it isn’t flat algebraically, since tensoring with $k[x,y,m] \big / (y, y-mx)$ doesn’t preserve the exact sequence2 \(0 \to k[m] \xrightarrow{\ \cdot m\ } k[m] \to k \to 0\). In the derived world, though, things are better. It’s my impression that here flatness is a condition guaranteeing the “naive” underived computation agrees with the “correct” derived computation. That is, flat modules $M$ are those for which $M \otimes^\mathbb{L} X$ and $M \otimes X$ agree for all modules $X$! 
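(As an aside: the classical computation above is easy to double-check with a computer algebra system. Here's a minimal sympy sketch – the choice of sympy and of the sample slopes $m=1$ and $m=0$ is mine, not anything canonical – computing a Gröbner basis for the ideal $(y, y - mx)$:)

```python
# Double-checking the classical intersection: the ideal (y, y - m*x)
# for a generic slope (m = 1) and for the degenerate slope (m = 0).
from sympy import groebner, symbols

x, y = symbols("x y")

# Generic case, m = 1: the basis cuts out the single point (0, 0).
G1 = groebner([y, y - 1 * x], x, y, order="lex")
print(sorted(G1.exprs, key=str))  # [x, y]

# Degenerate case, m = 0: only y is cut out, the whole x-axis survives.
G0 = groebner([y, y - 0 * x], x, y, order="lex")
print(sorted(G0.exprs, key=str))  # [y]
```

The jump in the dimension of the vanishing locus is exactly the jump in the two bullet points above.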
I think that one of the benefits of the derived world is that we can pretend like “all families are flat”. I would love it if someone who knows more about this could chime in, though, since I’m not confident enough to really stand by that. In our particular example, though, this is definitely true! To see this we need to compute the derived tensor product of $k[x,y] \big / (y)$ and $k[x,y] \big / (y-mx)$ as $k[x,y]$-algebras. To do this we need to know the right notion of “projective resolution” (it’s probably better to say cofibrant replacement), and we can build these from (retracts of) semifree commutative dg algebras in much the same way we build projective resolutions from free things! Here “semifree” means that our algebra is a free commutative graded algebra if we forget about the differential. Of course, “commutative” here is in the graded sense that $xy = (-1)^{\text{deg}(x) \text{deg}(y)}yx$. For example, if we work over the base field $k$, then the free commutative graded algebra on $x_0$ (by which I mean an element $x$ living in degree $0$) is just the polynomial algebra $k[x]$ all concentrated in degree $0$. Formally, we have elements $1, \ x_0, \ x_0 \otimes x_0, \ x_0 \otimes x_0 \otimes x_0, \ldots$, and the degree of a tensor is the sum of the degrees of the things we’re tensoring, so that for $x_0$ the whole algebra ends up concentrated in degree $0$. If we look at the free graded $k$-algebra on $x_1$, we again get an algebra generated by $x_1, \ x_1 \otimes x_1, \ x_1 \otimes x_1 \otimes x_1, \ldots$ except that now we have the anticommutativity relation $x_1 \otimes x_1 = (-1)^{1 \cdot 1} x_1 \otimes x_1$ so that $x_1 \otimes x_1 = 0$. This means the free graded $k$-algebra on $x_1$ is just the algebra with $k$ in degree $0$, the vector space generated by $x$ in degree $1$, and the stipulation that $x^2 = 0$. 
In general, elements in even degrees contribute symmetric algebras and elements in odd degrees contribute exterior algebras to the cga we’re freely generating. What does this mean for our example? We want to compute the derived tensor product of $k[x,y] \big / y$ and $k[x,y] \big / y-mx$. As is typical in homological algebra, all we need to do is “resolve” one of our algebras and then take the usual tensor product of chain complexes. Here a resolution means we want a semifree cdga which is quasi-isomorphic to the algebra we started with, and it’s easy to find one! Consider the cdga $k[x,y,e]$ where $x,y$ live in degree $0$ and $e$ lives in degree $1$. The differential is determined by $de = y$, and must send everything else to $0$ by degree considerations (there’s nothing in degree $-1$). This cdga is semifree as a $k[x,y]$-algebra, since if you forget the differential it’s just the free graded $k[x,y]$ algebra on a degree 1 generator $e$! So this corresponds to the chain complex \(0 \to k[x,y] \cdot e \xrightarrow{\ d\ } k[x,y] \to 0\) (in degrees $1$ and $0$), where $de = y$ is extended $k[x,y]$-linearly, so that more generally $d(pe) = p(de) = py$ for any polynomial $p \in k[x,y]$. If we tensor this (over $k[x,y]$) with $k[x,y] \big / y-mx$ (concentrated in degree $0$) we get a new complex where the interesting differential sends $pe \mapsto py = pmx$ for any polynomial $p \in k[x,y] \big / y-mx$. Some simplification gives the complex \(0 \to k[x] \xrightarrow{\ \cdot mx\ } k[x] \to 0\), whose homology is particularly easy to compute! $H_0 = k[x] \big / (mx)$ and $H_1 = \text{Ker}(\cdot mx)$. We note that $H_0$ recovers our previous computation, where when $m \neq 0$ we have $H_0 = k$ is the coordinate ring of the origin3 and when $m=0$ we have $H_0 = k[x]$ is the coordinate ring of the $x$-axis. However, now there’s more information stored in $H_1$! In the generic case where $m \neq 0$, the differential $\cdot mx$ is injective so that $H_1$ vanishes, and our old “classical” computation saw everything there is to see. 
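Here's a quick numerical sanity check of this homology computation (my own sketch, not from any reference: I truncate the source copy of $k[x]$ to polynomials of degree $< N$ and the target to degree $\leq N$, so multiplication by $mx$ becomes a finite matrix, with $N$ a hypothetical truncation degree):

```python
# A finite-dimensional model of the complex  k[x] --(.mx)--> k[x]:
# truncate the degree-1 part to polynomials of degree < N and the
# degree-0 part to degree <= N, so d is an (N+1) x N matrix with m
# on the subdiagonal (d sends x^i |-> m * x^(i+1)).
import numpy as np

def homology_dims(m, N=6):
    d = np.zeros((N + 1, N))
    for i in range(N):
        d[i + 1, i] = m
    rank = np.linalg.matrix_rank(d)
    H0 = (N + 1) - rank   # dim of coker(d)
    H1 = N - rank         # dim of ker(d)
    return int(H0), int(H1)

print(homology_dims(m=1))  # (1, 0): a point, and H_1 vanishes
print(homology_dims(m=0))  # (7, 6): the whole (truncated) x-axis, plus a big H_1
```

Notice that both dimensions jump together at $m = 0$, so their difference doesn't see the jump at all.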
It’s not until we get to the singular case where $m=0$ that we see $H_1 = \text{Ker}(\cdot mx)$ becomes the kernel of the $0$-map, which is all of $k[x]$! The version of “dimension” for chain complexes which is invariant under quasi-isomorphism is the euler characteristic, and we see that now the euler characteristic is constantly $0$ for the whole family! Next let’s look at some kind of “hidden smoothness” by examining the singular curve $y^2 =x^3$. Just for fun, let’s look at a whole family of (affine) curves $y^2 = x^3 + tx$, which are smooth whenever $t \neq 0$. We’ll again show that in the derived world the singular fibre looks more like the smooth fibres. Smoothness away from the $t=0$ fibre is an easy computation, since we compute the jacobian of the defining equation $y^2 - x^3 - tx$ to be $\langle -3x^2 - t, 2y \rangle$, and for $t \neq 0$ this is never $\langle 0, 0 \rangle$ for any point on our curve4 (we’ll work in characteristic 0 for safety). Of course, when $t=0$, $\langle -3x^2, 2y \rangle$ vanishes at the origin, so that the curve has a singular point there. To see the singularity, let’s compute the tangent space at $(0,0)$ for every curve in this family. We’ll do that by computing the space of maps from the “walking tangent vector” $\text{Spec}(k[\epsilon] \big / \epsilon^2)$ to our curve which deform the map from $\text{Spec}(k)$ to our curve representing our point of interest $(0,0)$. Since everything is affine, we turn the arrows around and see we want to compute the space of algebra homs \(k[x,y] \big / (y^2 - x^3 - tx) \to k[\epsilon] \big / \epsilon^2\) so that the composition with the map $k[\epsilon] \big / \epsilon^2 \to k$ sending $\epsilon \mapsto 0$ becomes the map $k[x,y] \big / (y^2 - x^3 - tx) \to k$ sending $x$ and $y$ to $0$. Since $k[x,y] \big / (y^2 - x^3 - tx)$ is a quotient of a free algebra, this is easy to do! 
We just consult the universal property, and we find a hom $k[x,y] \big / (y^2 - x^3 - tx) \to k[\epsilon] \big / \epsilon^2$ is just a choice of image $a+b\epsilon$ for $x$ and $c+d\epsilon$ for $y$, so that the equation $y^2 - x^3 - tx$ is “preserved” in the sense that $(c+d\epsilon)^2 - (a+b\epsilon)^3 - t(a+b\epsilon)$ is $0$ in $k[\epsilon] \big / \epsilon^2$. Then the “deforming the origin” condition says that moreover when we set $\epsilon = 0$ our composite has to send $x$ and $y$ to $0$. Concretely that means we must choose $a=c=0$ in the above expression, so that finally: The tangent space at the origin of $k[x,y] \big / (y^2 - x^3 - tx)$ is the space of pairs $(b,d)$ so that $(d \epsilon)^2 - (b \epsilon)^3 - t(b \epsilon) = 0$ in $k[\epsilon] \big / \epsilon^2$. Of course, this condition holds if and only if $tb=0$, so that:

- When $t \neq 0$ the tangent space is the space of pairs $(b,d)$ with $b=0$, which is one dimensional.
- When $t = 0$ the tangent space is the space of pairs $(b,d)$ with no further conditions, which is two dimensional!

Since we’re looking at a curve, we expect the tangent space to be $1$-dimensional, and this is why we say there’s a singularity at the origin for the curve $y^2 = x^3$… But what happens in the derived world? Now we want to compute the derived homspace. As before, a cofibrant replacement of our algebra is easy to find: it’s just $k[x,y,e]$ where $x$ and $y$ have degree $0$, $e$ has degree $1$, and $de = y^2 - x^3 - tx$. Note that in our last example we were looking at semifree $k[x,y]$-algebras, but now we just want $k$-algebras! So now this is the free graded $k$-algebra on 3 generators $x,y,e$, and our chain complex is \(0 \to k[x,y] \cdot e \xrightarrow{\ d\ } k[x,y] \to 0\) with $d(pe) = p \cdot (y^2 - x^3 - tx)$. We want to compute the derived $\text{Hom}^\bullet(-,-)$ from this algebra to $k[\epsilon] \big / \epsilon^2$, concentrated in degree $0$. The degree $0$ maps are given by diagrams of graded maps that don’t need to commute5! 
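The classical condition $tb = 0$ above is also easy to verify mechanically. A small sympy sketch (my own, not from the post), expanding the defining equation on dual numbers and discarding $\epsilon^2$ and higher:

```python
# Verifying the tangent-space condition: substitute x = b*eps, y = d*eps
# into y^2 - x^3 - t*x and reduce modulo eps^2.
from sympy import expand, symbols

b, d, t, eps = symbols("b d t eps")

expr = expand((d * eps) ** 2 - (b * eps) ** 3 - t * (b * eps))
# Everything of order eps^2 and higher dies in k[eps]/eps^2, so the
# only surviving condition is the coefficient of eps:
condition = expr.coeff(eps, 1)
print(condition)  # -b*t, i.e. the tangent space is {(b, d) : t*b = 0}
```

So for $t \neq 0$ only $d$ is free, and for $t = 0$ both $b$ and $d$ are, exactly as in the two bullet points.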
Of course, such maps are given by pairs $(a + b \epsilon, c + d \epsilon)$, which are the images of $x$ and $y$. As before, since we want the tangent space at $(0,0)$ we need to restrict to those pairs with $a=c=0$ so that $\text{Hom}^0(k[x,y] \big / y^2 - x^3 - tx, \ k[\epsilon] \big / \epsilon^2) = k^2$, generated by the pairs $(b,d)$. Next we look at degree $-1$ maps, which are given by a pair $r + s\epsilon$, the image of $e$. Again, these need to restrict to the $0$ map when we set $\epsilon=0$, so that $r=0$ and we compute $\text{Hom}^{-1}(k[x,y] \big / y^2 - x^3 - tx, \ k[\epsilon] \big / \epsilon^2) = k$, generated by $s$. So our hom complex is concentrated in degrees $0$ and $-1$, where the interesting differential sends degree $0$ to degree $-1$ and is given by $df = d_{k[\epsilon] \big / \epsilon^2} \circ f - f \circ d_{k[x,y] \big / y^2-x^3-tx}$. So if $f$ is the function sending $x \mapsto b \epsilon$ and $y \mapsto d \epsilon$ then we compute $f(y^2 - x^3 - tx) = (d\epsilon)^2 - (b\epsilon)^3 - t(b\epsilon) = -tb\epsilon$ in $k[\epsilon] \big / \epsilon^2$. So, phrased purely in terms of vector spaces (and up to an overall sign, which doesn’t affect the homology), our hom complex is \(k^2 \xrightarrow{(-t \ \ 0)} k\), living in degrees $0$ and $-1$. So we compute $H^0 = \text{Ker} ((-t \ 0))$ and $H^{-1} = \langle s \rangle \big / \text{Im}((-t \ 0))$. When $t \neq 0$, our map is full rank so that $H^0$ consists of the pairs $(b,d)$ with $b=0$ – agreeing with the classical computation. Then $H^{-1} = 0$, so again we learn nothing new in the smooth fibres. When $t=0$, however, our map is the $0$ map so that $H^0$ is the space of all pairs $(b,d)$, which is two dimensional – again, agreeing with the classical computation! But now we see the $H^{-1}$ term, which is $1$ dimensional, spanned by $s$. Again, in the derived world, we see the euler characteristic is constantly $1$ along the whole family! There’s something a bit confusing here, since there seem to be two definitions of “homotopical smoothness”… On the one hand, in the noncommutative geometry literature, we say that a dga $A$ is “smooth” if it’s perfect as a bimodule over itself. 
On the other hand, though, I’ve also heard another notion of “homotopically smooth” where we say the cotangent complex is perfect. I guess it’s possible (likely?) that these should be closely related by some kind of HKR Theorem, but I don’t know the details. Anyways, I’m confused because we just computed that the curve $y^2 = x^3$ has a perfect tangent complex, which naively would make me think its cotangent complex is also perfect. But this shouldn’t be the case, since I also remember reading that a classical scheme is smooth in the sense of noncommutative geometry if and only if it’s smooth classically, which $y^2 = x^3$ obviously isn’t! Now that I’ve written these paragraphs and thought harder about things, I think I was too quick to move between perfectness of the tangent complex and perfectness of the cotangent complex, but I should probably compute the cotangent complex and the bimodule resolution to be sure… Unfortunately, that will have to wait for another day! I’ve spent probably too many hours over the last few days writing this and my other posts on lie algebroids. I have some kind of annoying hall algebra computations that are calling my name, and I have an idea about a new family of model categories which might be of interest… But checking that something is a model category is usually hard, so I’ve been dragging my feet a little bit. Plus, I need to start packing soon! I’m going to Europe for a bunch of conferences in a row! First a noncommutative geometry summer school hosted by the institute formerly known as MSRI, then CT of course, and lastly a cute representation theory conference in Bonn. I’m sure I’ll learn a bunch of stuff I’ll want to talk about, so we’ll chat soon ^_^. Take care, all! In fact we know more! This $k$ is really $k[x,y] \big / (x,y)$, so we know the intersection point is $(0,0)$. ↩ Indeed after tensoring with $R = k[x,y,m] \big / (y, y-mx)$ we get \(R \xrightarrow{\ \cdot m\ } R \to R \big / mR \to 0\), since here $k \cong k[m] \big / (m)$. 
But then we can simplify these to \(k[x,m] \big / (mx) \xrightarrow{\ \cdot m\ } k[x,m] \big / (mx) \to k[x] \to 0\), and indeed the leftmost map (multiplication by $m$) is not injective! The kernel is generated by $x$. ↩ Again, if you’re more careful with where this ring comes from, rather than just its isomorphism class, it’s $k[x,y] \big / (x,y)$, the quotient by the maximal ideal $(x,y)$ which represents the origin. ↩ The only place it could possibly be $\langle 0, 0 \rangle$ is when $y=0$, but the points on our curve with this property satisfy $x^3+tx=y^2=0$ so that when $t \neq 0$ the solutions are $(x,y) = (0,0), \ (\sqrt{-t}, 0), \ (-\sqrt{-t}, 0)$. But at all three of these points $\langle -3x^2 - t, 2y \rangle \neq \langle 0, 0 \rangle$. ↩ This is a misconception that I used to have, and which basically everyone I’ve talked to had at one point. Remember that dg-maps are all graded maps! Not just those which commute with the differential! The key point is that the differential on $\text{Hom}^\bullet(A,B)$ sends a degree $n$ map $f$ to \(df = d_B \circ f - (-1)^n f \circ d_A\), so that $df = 0$ if and only if $d_B f = (-1)^n f d_A$, if and only if $f$ commutes with the differential (in the appropriate graded sense). This means that, for instance, $H^0$ of the hom complex recovers, from all graded maps, exactly $\text{Ker}(d) \big / \text{Im}(d)$, which is the maps commuting with $d$ modulo chain homotopy! ↩
Two new notions of infinity challenge a long-standing plan to define the mathematical universe. The post “Is Mathematics Mostly Chaos or Mostly Order?” first appeared on Quanta Magazine.
In the aftermath of a massive blackout that hit Spain and Portugal in April, some pundits were quick to blame wind and solar for the loss of power. But official inquiries have found that a shortfall in conventional power led to the outages. Read more on E360 →
This is going to be a very classic post, where we’ll chat about a computation my friend Shane did earlier today. His research is largely about symplectic lie algebroids, and recently we’ve been trying to understand the rich connections between poisson geometry, lie algebroids, lie groupoids, and eventually maybe fukaya categories of lie groupoids (following some ideas of Pascaleff). Shane knows much more about this stuff than I do, so earlier today he helped me compute a super concrete example. We got stuck at some interesting places along the way, and I think it’ll be fun to write this up, since I haven’t seen these kinds of examples written down in many places. Let’s get started! First, let’s recall what an action groupoid is. This is one of the main examples I have in my head for lie groupoids, which is why Shane and I started here. If $G$ is a group acting on a set $X$, then we get a groupoid \(G \times X \rightrightarrows X\): here we think of $X$ as the set of objects and $G \times X$ as the set of arrows, where

- the source of $(g,x)$ is just $x$
- the target of $(g,x)$ is $g \cdot x$, using the action of $G$ on $X$
- the identity arrow at $x$ is $(1,x)$
- if $(g,x) : x \to y$ and $(h,y) : y \to z$, then the composite is $(hg,x) : x \to z$
- if $(g,x) : x \to y$ then its inverse is $(g^{-1},y) : y \to x$.

Action groupoids are interesting and important because they allow us to work with “stacky” nonhausdorff quotient spaces like orbifolds in a very fluent way. See, for instance, Moerdijk’s Orbifolds as Groupoids: An Introduction, which shows how you can easily define covering spaces, vector bundles, principal bundles, etc. on orbifolds using the framework of groupoids. The point is that a groupoid $E \rightrightarrows X$ is a “proof relevant equivalence relation” in the sense that $E$ keeps track of all the “proofs” or “witnesses” that two points in $X$ are identified, rather than just the statement that two points are identified. 
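The structure maps above are concrete enough to implement directly. Here's a toy sketch (my own example, not from the post: $\mathbb{Z}/4$ acting on four points by rotation) that spot-checks the groupoid axioms:

```python
# A toy action groupoid: Z/4 acting on {0, 1, 2, 3} by rotation.
# Arrows are pairs (g, x) with source x and target g.x, as above.
N = 4

def act(g, x):      # the group action: g.x = (g + x) mod 4
    return (g + x) % N

def source(arrow):  # s(g, x) = x
    return arrow[1]

def target(arrow):  # t(g, x) = g.x
    return act(arrow[0], arrow[1])

def compose(second, first):  # (h, y) after (g, x), needs y = g.x
    assert source(second) == target(first)
    return ((second[0] + first[0]) % N, first[1])  # (hg, x)

def inverse(arrow):  # (g, x)^{-1} = (g^{-1}, g.x)
    return ((-arrow[0]) % N, target(arrow))

# Spot-check the identity and inverse laws on every arrow.
arrows = [(g, x) for g in range(N) for x in range(N)]
for a in arrows:
    assert compose(a, (0, source(a))) == a           # identity law
    assert compose(inverse(a), a) == (0, source(a))  # inverse law
print("groupoid axioms hold for all", len(arrows), "arrows")
```

Note that there are $16$ arrows on only $4$ objects: lots of different witnesses identifying the same pairs of points, which is exactly the "proof relevance" being described here.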
Indeed, we think of $e \in E$ as a witness identifying $s(e)$ and $t(e)$ in $X$. Then reflexivity comes from the identity arrow, symmetry comes from the inverse, and transitivity comes from composition. The “proof relevance” is just the statement that there might be multiple elements $e_1$ and $e_2$ which both identify $x$ and $y$ (that is, $s(e_1) = s(e_2) = x$ and $t(e_1) = t(e_2) = y$). By keeping track of this extra information that “$x$ and $y$ are related in multiple ways” we’re able to work with a smooth object (in the sense that both $E$ and $X$ are smooth) instead of the nonsmooth, or even nonhausdorff quotient $X/E$. Next, we know that a central part of the study of any lie group $G$ is its lie algebra $\mathfrak{g}$. This is a “linearization” of $G$ in the literal sense that it’s a vector space instead of a manifold, which makes it much easier to study. But through the lie bracket $[-,-]$ it remembers enough of the structure of $G$ to make it an indispensable tool for understanding $G$. See, for instance, Bump’s excellent book Lie Groups or Stillwell’s excellent Naive Lie Theory. With this in mind, just playing linguistic games we might ask if there’s a similarly central notion of “lie algebroid” attached to any “lie groupoid”. The answer turns out to be yes, but unlike the classical case, not every lie algebroid comes from a lie groupoid! We say that not every lie algebroid is integrable. This, finally, brings us to the computation that Shane and I did together: As explicitly as possible, let’s compute the lie algebroid coming from the action groupoid of $SL_2(\mathbb{R}) \curvearrowright \mathbb{R}^2$. Let’s start with a few definitions coming from Crainic, Fernandes, and Mărcuț’s Lectures on Poisson Geometry. 
A Lie Algebroid on a manifold $M$ is a vector bundle $A \to M$ whose space of global sections $\Gamma(A)$ has a lie bracket $[-,-]_A$, equipped with an anchor map $\rho : A \to TM$ to the tangent bundle of $M$ that’s compatible with the lie bracket in the sense that for $\alpha, \beta \in \Gamma(A)$ and $f \in C^\infty(M)$ we have \([\alpha, f \cdot \beta]_A = f \cdot [\alpha,\beta]_A + (\rho(\alpha) f) \cdot \beta\) Then, given a lie groupoid $E \rightrightarrows M$ with source and target maps $s,t : E \to M$ and identity map $r : M \to E$, its lie algebroid is explicitly given by letting

- $A = r^* \text{Ker}(dt)$, which is a vector bundle over $M$
- $[-,-]_A$ come from the usual bracket on $TE$ (since $\text{Ker}(dt)$ is a subbundle of $TE$)
- $\rho : A \to TM$ be given by $ds$ (which is a map $TE \to TM$, thus restricts to a map on $\text{Ker}(dt)$, and so gives a map on the pullback $A = r^* \text{Ker}(dt)$)

If this doesn’t make perfect sense, that’s totally fine! It didn’t make sense to me either, which is why I wanted to work through an example slowly with Shane. For us the action groupoid is \(SL_2(\mathbb{R}) \times \mathbb{R}^2 \rightrightarrows \mathbb{R}^2\), where $t(M,v) = v$ and $s(M,v) = Mv$. First let’s make sense of $A = r^* \text{Ker}(dt)$. We know that $dt : T(SL_2(\mathbb{R}) \times \mathbb{R}^2) \to T\mathbb{R}^2$, that is, $dt : TSL_2(\mathbb{R}) \times T\mathbb{R}^2 \to T\mathbb{R}^2$, and is the derivative of the map $t : (M,v) \mapsto v$. If we perturb a particular point $(M_0, v_0)$ to first order, say $(M_0 + \delta M, v_0 + \delta v)$, then projecting gives $v_0 + \delta v$, which gives $\delta v$ to first order. So the kernel of this map is all the pairs $(\delta M, \delta v)$ with $\delta v = 0$, and we learn \(\text{Ker}(dt) = TSL_2(\mathbb{R}) \times \mathbb{R}^2\), where we’re identifying the zero section of $T\mathbb{R}^2$ with $\mathbb{R}^2$ itself. Now to get $A$ we’re supposed to apply $r^*$ to this. By definition, this means the fibre of $r^* \text{Ker}(dt)$ above the point $v$ is supposed to be the fibre of $\text{Ker}(dt)$ over $r(v) = (\text{id},v)$. 
But this fibre is \(T_{\text{id}}SL_2(\mathbb{R}) \times \{v\}\), so that the pullback is a trivial bundle with fibre $\mathfrak{sl}_2(\mathbb{R})$: \(A = \mathfrak{sl}_2(\mathbb{R}) \times \mathbb{R}^2\), viewed as a trivial fibre bundle over $\mathbb{R}^2$. (Recall that the lie algebra $\mathfrak{sl}_2(\mathbb{R})$ is defined to be the tangent space of $SL_2(\mathbb{R})$ at the identity). Next we want to compute the bracket \([-,-]_A\). Thankfully this is easy, since it’s the restriction of the bracket on $TSL_2(\mathbb{R}) \times T\mathbb{R}^2$. Of course, an element of $A$ comes from $T_\text{id}SL_2(\mathbb{R}) = \mathfrak{sl}_2(\mathbb{R})$ and the zero section of $T\mathbb{R}^2$, so we get the usual lie bracket on $\mathfrak{sl}_2(\mathbb{R})$ in the first component and the restriction of the bracket on $T\mathbb{R}^2$ to \(\{0\}\) in the second slot. This zero section piece isn’t doing anything interesting, and so after identifying this bundle with the trivial bundle $\mathfrak{sl}_2(\mathbb{R}) \times \mathbb{R}^2$ we see the bracket is just the usual bracket on $\mathfrak{sl}_2(\mathbb{R})$ taken fibrewise. Lastly, we want to compute the anchor map. This caused me and Shane a bit of trouble, since we need to compute $ds$ at the identity, and we realized neither of us knew a general approach for computing a chart for $SL_2(\mathbb{R})$ near $\text{id}$! In hindsight for a $k$ dimensional submanifold of $\mathbb{R}^n$ the idea is obvious: Just project onto some well-chosen $k$-subset of the usual coordinate directions! I especially wish it were easier to find some examples of this by googling, so I’m breaking it off into a sister blog post which will hopefully show up earlier in search results than this one will. The punchline for us was that $SL_2(\mathbb{R})$ is defined to be \(\{(a,b,c,d) \in \mathbb{R}^4 \mid ad - bc = 1\}\), so a chart near a point $(a_0, b_0, c_0, d_0)$ can be computed by looking at the jacobian of $f : \mathbb{R}^4 \to \mathbb{R}$ with $f(a,b,c,d) = ad-bc$ evaluated at the matrix of interest $(a_0, b_0, c_0, d_0)$. 
Since $1$ is a regular value for $f$, the jacobian will have at least one nonzero entry, and by locally inverting that coordinate we’ll get our desired chart! Since we want a chart near the identity, we compute the jacobian of $ad-bc$ at the identity to be \((d, \ -c, \ -b, \ a) \big |_{\text{id}} = (1, 0, 0, 1)\). We see that $\frac{\partial f}{\partial d} \neq 0$, so that locally near \(\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}\) we can solve for $d = \frac{1+bc}{a}$: the manifold looks like \(\left \{ \begin{pmatrix} a & b \\ c & \frac{1+bc}{a} \end{pmatrix} \ \middle | \ a \neq 0 \right \}\), and $(a,b,c)$ is our desired local chart! Now… Why were we doing this? We wanted to compute the anchor map from $A = r^* \text{Ker}(dt)$ to the tangent bundle $T\mathbb{R}^2$. This is supposed to be $ds$ (restricted to this subbundle). So how can we compute this? In the main body, I’ll make some identifications that make the presentation cleaner, and still show what’s going on. If you want a very very explicit version of this computation, take a look at this footnote1. Well $s : SL_2(\mathbb{R}) \times \mathbb{R}^2 \to \mathbb{R}^2$ is the map sending $(M,v) \mapsto Mv$. Explicitly, if we fix an $(x,y)$, this is the map \(\begin{pmatrix} a & b \\ c & d \end{pmatrix} \mapsto (ax + by, \ cx + dy)\). From the previous discussion, we can write this as a map in local charts \(\{(a,b,c) \in \mathbb{R}^3 \mid a \neq 0 \} \to \mathbb{R}^2\) given by \((a,b,c) \mapsto \left ( ax + by, \ cx + \frac{1+bc}{a} y \right )\), and now it’s very easy to compute $ds$. It’s just the jacobian \(\begin{pmatrix} x & y & 0 \\ -\frac{(1+bc)y}{a^2} & \frac{cy}{a} & x + \frac{by}{a} \end{pmatrix}\), but we only care about the value at the identity, since $A$ comes from pulling back this bundle along $r : v \mapsto (\text{id},v)$. So evaluating at $(a,b,c) = (1,0,0)$ we find our anchor map is \(\begin{pmatrix} x & y & 0 \\ -y & 0 & x \end{pmatrix}\). Moreover, by differentiating \(\begin{pmatrix} a & b \\ c & \frac{1+bc}{a} \end{pmatrix}\) in the $a$, $b$, and $c$ directions and evaluating at $(1,0,0)$ we see that the basis $(\partial_a, \partial_b, \partial_c)$ for the tangent space to $(1,0,0)$ in our chart gets carried to the following basis for the tangent space at the identity matrix in $SL_2(\mathbb{R})$: \(\begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}, \ \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, \ \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}\), which we recognize as $H$, $E$, and $F$ respectively. But this is great! 
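If you'd rather not differentiate by hand, the chart and jacobian computations above can be checked with sympy (a sketch under the same chart; the symbol names are mine):

```python
# Checking the chart and jacobian: d = (1 + b*c)/a parametrizes SL_2
# near the identity, and s is matrix-vector multiplication.
from sympy import Matrix, symbols

a, b, c, x, y = symbols("a b c x y")

M = Matrix([[a, b], [c, (1 + b * c) / a]])  # the local chart
s = M * Matrix([x, y])                      # the source map, (x, y) fixed

J = s.jacobian([a, b, c])
print(J.subs({a: 1, b: 0, c: 0}))  # Matrix([[x, y, 0], [-y, 0, x]])

# The coordinate directions at (1, 0, 0) map to H, E, F in sl_2:
for v in (a, b, c):
    print(M.diff(v).subs({a: 1, b: 0, c: 0}))
```

The columns of the printed jacobian are exactly the images of $\partial_a$, $\partial_b$, $\partial_c$, i.e. of $H$, $E$, and $F$.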
Now we know that the lie algebroid of the action groupoid of $SL_2(\mathbb{R}) \curvearrowright \mathbb{R}^2$ is given by the trivial bundle $\mathfrak{sl}_2(\mathbb{R}) \times \mathbb{R}^2 \to \mathbb{R}^2$ with the usual bracket on $\mathfrak{sl}_2$ taken fibrewise, and the anchor map $\rho : \mathfrak{sl}_2 \times \mathbb{R}^2 \to T\mathbb{R}^2$ sending (in the fibre over $(x,y)$)

- $\rho_{(x,y)}(H) = (x, -y)$
- $\rho_{(x,y)}(E) = (y, 0)$
- $\rho_{(x,y)}(F) = (0, x)$

(which is the standard representation of $\mathfrak{sl}_2(\mathbb{R})$ on $\mathbb{R}^2$, viewed in a kind of bundle-y way.) Thanks for hanging out, all! It was fun to go back to my roots and write a post that’s “just” a computation. This felt tricky while Shane and I were doing it together, but writing it up now it’s starting to feel a lot simpler. There are still some details I don’t totally understand, which I think will be cleared up by just doing more computations like this, haha. Also, sorry for not spending a lot of time motivating lie algebroids or actually doing something with the result of this computation… I actually don’t totally know what we can do with lie algebroids either! This was just a fun computation I did with a friend, trusting that he has good reasons to care. I’ve been meaning to pester him into guest-writing a blog post (or very confidently holding my hand while I write the blog post) about lie algebroids and why you should care. As I understand it, they give you tools for studying PDEs on manifolds which have certain mild singularities. This is super interesting, and obviously useful, and so I’d love to spend the time to better understand what’s going on. Stay safe, and if you’re anywhere like Riverside try to stay cool! 
If you want to be super duper explicit, our function $s$ sends $SL_2(\mathbb{R}) \times \mathbb{R}^2 \to \mathbb{R}^2$, which in a chart around the identity looks like the function \(\{(a,b,c) \mid a \neq 0\} \times \mathbb{R}^2 \to \mathbb{R}^2\) given by \((a,b,c,x,y) \mapsto \left ( ax + by, \ cx + \frac{1+bc}{a} y \right )\). Now we differentiate to get $ds : TSL_2(\mathbb{R}) \times T\mathbb{R}^2 \to T\mathbb{R}^2$ sending $(a,b,c,\delta a,\delta b,\delta c, x, y, \delta x, \delta y)$ to $(ax+by, cx+\frac{1+bc}{a}y, ?, ?)$, where the two $?$s are the output of matrix multiplication against the jacobian of $s$: \(\begin{pmatrix} x & y & 0 & a & b \\ -\frac{(1+bc)y}{a^2} & \frac{cy}{a} & x + \frac{by}{a} & c & \frac{1+bc}{a} \end{pmatrix} \begin{pmatrix} \delta a \\ \delta b \\ \delta c \\ \delta x \\ \delta y \end{pmatrix}\). Then we’re supposed to restrict this to $\text{Ker}(dt)$, which consists of the points $(a,b,c, \delta a, \delta b, \delta c, x, y, 0, 0)$. Since $\delta x = \delta y = 0$, we don’t even bother writing those entries of the matrix, and that’s how we get $ds$ as written in the main body. Now, as in the main body, we pull this bundle back along $r : v \mapsto (\text{id},v)$, which in our chart is $(x,y) \mapsto (1,0,0,x,y)$, so that our bundle $A$ (with its structure map to $\mathbb{R}^2$) is \(\{(1,0,0,\delta a, \delta b, \delta c, x, y)\} \to \{(x,y)\}\) which, in the main text, we identify with \(\mathfrak{sl}_2 \times \mathbb{R}^2 = \{(\delta a, \delta b, \delta c, x, y)\} \to \{(x,y)\} = \mathbb{R}^2\) so we learn that our anchor map $ds$ is the restriction of the above map $ds$ to this pulled back subbundle, and sends \((\delta a, \delta b, \delta c, x, y) \mapsto \left ( x \, \delta a + y \, \delta b, \ -y \, \delta a + x \, \delta c \right )\), the result of the matrix multiplication, which brings us back to the result of the main body. Of course, most working differential geometers wouldn’t write out this much detail to do this computation! I think it might be helpful to some newcomers to the field, and I certainly found it clarifying to write down exactly what happened, even if Shane and I weren’t nearly this careful when we were doing this together at a whiteboard. ↩