More from nanoscale views
I saw a couple of interesting talks this morning before heading out: Alessandro Chiesa of Parma spoke about using spin-containing molecules potentially as qubits, and about chiral-induced spin selectivity (CISS) in electron transfer. Regarding the former, here is a review. Spin-containing molecules can have interesting properties as single qubits, or, for spins higher than 1/2, qudits, with unpaired electrons often confined to a transition metal or rare earth ion somewhat protected from the rest of the universe by the rest of the molecule. The result can be very long coherence times for their spins. Doing multi-qubit operations is very challenging with such building blocks, however. There are some theory proposals and attempts to couple molecular qubits to superconducting resonators, but it's tough! Regarding chiral-induced spin selectivity, he discussed recent work trying to use molecules where a donor region is linked to an acceptor region via a chiral bridge, and trying to manipulate spin centers this way. A question in all the CISS work is, how can the effects be large when spin-orbit coupling is generally very weak in light, organic molecules? He has a recent treatment of this, arguing that if one models the bridge as a chain of sites with large \(U/t\), where \(U\) is the on-site repulsion energy and \(t\) is the hopping contribution, then exchange processes between sites can effectively amplify the otherwise weak spin-orbit effects. I need to read and think more about this. Richard Schlitz of Konstanz gave a nice talk about some pretty recent research using a scanning tunneling microscope tip (with magnetic iron atoms on the end) to drive electron paramagnetic resonance in a single pentacene molecule (sitting on MgO on Ag, where it tends to grab an electron from the silver and host a spin). The experimental approach was initially explained here. The actual polarized tunneling current can drive the resonance, and exactly how depends on the bias conditions.
At high bias, when there is strong resonant tunneling, the current exerts a damping-like torque, while at low bias, when tunneling is far off resonance, the current exerts a field-like torque. Neat stuff. Leah Weiss from Chicago gave a clear presentation about not-yet-published results (based on earlier work), doing optically detected EPR of Er-containing molecules. These condense into mm-sized molecular crystals, with the molecular environment being nice and clean, leading to very little inhomogeneous broadening of the lines. There are spin-selective transitions that can be driven using near telecom-wavelength (1.55 \(\mu m\)) light. When the (anisotropic) \(g\)-factors of the different levels are different, there are some very promising ways to do orientation-selective and spin-selective spectroscopy. Looking forward to seeing the paper on this. And that's it for me for the meeting. A couple of thoughts: I'm not sold on the combined March/April meeting. Six years ago when I was a DCMP member-at-large, the discussion was all about how the March Meeting was too big, making it hard to find and get good deals on host sites, and maybe the meeting should split. Now they've made it even bigger. Doesn't this make planning more difficult and hosting more expensive since there are fewer options? (I'm not an economist, but....) A benefit for the April meeting attendees is that grad students and postdocs get access to the career/networking events held at the MM. If you're going to do the combination, then it seems like you should have the courage of your convictions and really mingle the two, rather than keeping the March talks in the convention center and the April talks in site hotels. I understand that van der Waals/twisted materials are great laboratories for physics, and that topological states in these are exciting. Still, by my count there were 7 invited sessions broadly about this topic, and 35 invited talks on this over four days seems a bit extreme. 
By my count, there were eight dilution refrigerator vendors at the exhibition (Maybell, Bluefors, Ice, Oxford, Danaher/Leiden, Formfactor, Zero-Point Cryo, and Quantum Design if you count their PPMS insert). Wow. I'm sure there will be other cool results presented today and tomorrow that I am missing - feel free to mention them in the comments.
Another busy day at the APS Global Physics Summit. Here are a few highlights: Shahal Ilani of the Weizmann gave an absolutely fantastic talk about his group's latest results from their quantum twisting microscope. In a scanning tunneling microscope, because tunneling happens at an atomic-scale location between the tip and the sample, the momentum in the transverse direction is not conserved - that is, the tunneling averages over a huge range of \(\mathbf{k}\) vectors for the tunneling electron. In the quantum twisting microscope, electrons tunnel from a flat (graphite) patch something like \(d \sim\) 100 nm across, coherently, through a couple of layers of some insulator (like WSe2) and into a van der Waals sample. In this case, \(\mathbf{k}\) in the plane is comparatively conserved, and by rotating the sample relative to the tip, it is possible to build up a picture of the sample's electronic energy vs. \(\mathbf{k}\) dispersion, rather like in angle-resolved photoemission. This has allowed, e.g., mapping of phonons via inelastic tunneling. His group has applied this to magic angle twisted bilayer graphene, a system that has a peculiar combination of properties, where in some ways the electrons act like very local objects, and in other ways they act like delocalized objects. The answer seems to be that this system at the magic angle is a bit of an analog of a heavy fermion system, where there are sort of local moments (living in very flat bands) interacting and hybridizing with "conduction" electrons (bands crossing the Fermi level at the Brillouin zone center). The experimental data (movies of the bands as a function of energy and \(\mathbf{k}\) in the plane as the filling is tuned via gate) are gorgeous and look very much like theoretical models. 
I saw a talk by Roger Melko about applying large language models to try to get efficient knowledge of many-body quantum states, or at least the possible outputs of evolution of a quantum system like a quantum computer based on Rydberg atoms. It started fairly pedagogically, but I confess that I got lost in the AI/ML jargon about halfway through. Francis M. Ross, recipient of this year's Keithley Award, gave a great talk about using transmission electron microscopy to watch the growth of materials in real time. She had some fantastic videos - here is a review article about some of the techniques used. She also showed some very new work using a focused electron beam to make arrays of point defects in 2D materials that looks very promising. Steve Kivelson, recipient of this year's Buckley Prize, presented a very nice talk about his personal views on the theory of high temperature superconductivity in the cuprates. One basic point: these materials are balancing between multiple different kinds of emergent order (spin density waves, charge density waves, electronic nematics, perhaps pair density waves). This magnifies the effects of quenched disorder, which can locally tip the balance one way or another. Recent investigations of the famous 2D square lattice Hubbard model show this as well. He argues that, for a broad range \(1/2 < U/t < 8\), where \(U\) is the on-site repulsion and \(t\) is the hopping term, the ground state of the Hubbard model is in fact a charge density wave, not a superconductor. However, if there is some amount of disorder in the form of \(\delta t/t \sim 0.1-0.2\), the result is a robust, unavoidable superconducting state.
He further argues that increasing the superconducting transition temperature requires striking a balance between the underdoped case (strong pairing, weak superfluid phase stiffness) and the overdoped case (weak pairing, strong superfluid stiffness), and that one way to achieve this would be in a bilayer with broken mirror symmetry (say different charge reservoir layers above and below, and/or a big displacement field perpendicular to the plane). (Apologies for how technical that sounded - hard to reduce that one to something super accessible without writing much more.) A bit more tomorrow before I depart back to Houston.
I spent a portion of today catching up with old friends and colleagues, so fewer highlights, but here are a couple: Like a few hundred other people, I went to the invited talk by Chetan Nayak, leader of Microsoft's quantum computing effort. It was sufficiently crowded that the session chair warned everyone about fire code regulations and that people should not sit on the floor blocking the aisles. To set the landscape: Microsoft's approach to quantum computing is to develop topological qubits based on interesting physics that is predicted to happen (see here and here) if one induces superconductivity (via the proximity effect) in a semiconductor nanowire with spin-orbit coupling. When the right combination of gate voltage and external magnetic field is applied, the nanowire should cross into a topologically nontrivial state with Majorana fermions localized to each end of the nanowire, leading to "zero energy states" seen as peaks in the conductance \(dI/dV\) centered at zero bias (\(V=0\)). A major challenge is that disorder in these devices can lead to other sources of zero-bias peaks (Andreev bound states). A 2023 paper outlines a protocol that is supposed to give good statistical feedback on whether a given device is in the topologically interesting or trivial regime. I don't want to rehash the history of all of this. In a paper published last month, a single proximitized, gate-defined InAs quantum wire is connected to a long quantum dot to form an interferometer, and the capacitance of that dot is sensed via RF techniques as a function of the magnetic flux threading the interferometer, showing oscillations with period \(h/2e\), interpreted as charge parity oscillations of the proximitized nanowire. In new data, not yet reported in a paper, Nayak presented measurements on a system comprising two such wires and associated other structures.
The argument is that the two wires can be simultaneously tuned into the topologically nontrivial regime via the protocol above. Then interferometer measurements can be performed in one wire (the Z channel) and in a configuration involving two ends of different wires (the X channel), and they interpret their data as early evidence that they have achieved the desired Majorana modes and their parity measurements. I look forward to when a paper is out on this, as it is hard to make informed statements about this based just on what I saw quickly on slides from a distance. In a completely different session, Garnet Chan gave a very nice talk about applying advanced quantum chemistry and embedding techniques to look at some serious correlated materials physics. Embedding methods are somewhat reminiscent of mean field theories: Instead of trying to solve the Schrödinger equation for a whole solid, for example, you can treat the solid as a self-consistent theory of a unit cell or set of unit cells embedded in a more coarse-grained bath (made up of other unit cells appropriately averaged). See here, for example. He presented recent results on computing the Kondo effect of magnetic impurities in metals, understanding the trends of antiferromagnetic properties of the parent cuprates, and trying to describe superconductivity in the doped cuprates. Neat stuff. In the same session, my collaborator Silke Buehler-Paschen gave a nice discussion of ways to use heavy fermion materials to examine strange metals, looking beyond just resistivity measurements. Particularly interesting is the idea of trying to figure out quantum Fisher information, which in principle can tell you how entangled your many-body system is (that is, estimating how many other degrees of freedom are entangled with one particular degree of freedom). See here for an intro to the idea, and here for an implementation in a strange metal, Ce3Pd20Si6. More tomorrow....
(On a separate note, holy cow, the trade show this year is enormous - seems like it's 50% bigger than last year. I never would have dreamed when I was a grad student that you could go to this and have your pick of maybe 10 different dilution refrigerator vendors. One minor mystery: Who did World Scientific tick off? Their table is located on the completely opposite side of the very large hall from every other publisher.)
The APS Global Physics Summit is an intimate affair, with a mere 14,000 attendees, all apparently vying for lunch capacity for about 2,000 people. The first day of the meeting was the usual controlled chaos of people trying to learn the layout of the convention center while looking for talks and hanging out having conversations. On the plus side, the APS wifi seems to function well, and the projectors and slide upload system are finally technologically mature (though the pointers/clickers seem to have some issues). Some brief highlights of sessions I attended: I spent the first block of time at this invited session about progress in understanding quantum spin liquids and quantum spin ice. Spin ices are generally based on the pyrochlore structure, where atoms hosting local magnetic moments sit at the vertices of corner-sharing tetrahedra, as I had discussed here. The idea is that the crystal environment and interactions between spins are such that the moments are favored to satisfy the ice rules, where in each tetrahedron two moments point inward toward the center and two point outward. Classically there are a huge number of spin arrangements that all have about the same ground state energy. In a quantum spin ice, the idea is that quantum fluctuations are large, so that the true ground state would be some enormous superposition of all possible ice-rule-satisfying configurations. One consequence of this is that there are low energy excitations that look like an emergent form of electromagnetism, including a gapless photon-like mode. Bruce Gaulin spoke about one strong candidate quantum spin ice, Ce2Zr2O7, in a very pedagogical talk that covered all this. A relevant recent review is this one. There were two other talks in the session also about pyrochlores, an experimentally focused one by Sylvain Petit discussing Tb2Ti2O7 (see here), and a theory talk by Yong-Baek Kim focused again on the cerium zirconate.
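As a toy illustration of the ice-rule degeneracy mentioned above (my own aside, not from the talks): for a single tetrahedron, each of the four moments points either in toward the center or out, giving \(2^4 = 16\) configurations, of which six satisfy two-in/two-out. A minimal Python sketch:

```python
from itertools import product

# Label each of the 4 moments on a tetrahedron as +1 ("in" toward the
# center) or -1 ("out"). The ice rule requires exactly two in, two out,
# i.e. the labels sum to zero.
configs = list(product([+1, -1], repeat=4))
ice_rule = [c for c in configs if sum(c) == 0]

print(len(configs))   # 16 total configurations
print(len(ice_rule))  # 6 satisfy two-in/two-out
```

On the full pyrochlore lattice the tetrahedra share corners, so the constraints are coupled; Pauling's classic ice-entropy estimate still gives roughly \((3/2)^{N/2}\) ice-rule-satisfying states for \(N\) spins, a degeneracy exponential in system size, which is the "huge number" that quantum fluctuations can then superpose.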
Also in the session was an interesting talk by Jeff Rau about K2IrCl6, a material with a completely different structure that (above its ordering temperature of 3 K) acts like a "nodal line spin liquid". In part because I had students speaking there, I also attended a contributed session about nanomaterials (wires, tubes, dots, particles, liquids). There were some neat talks. The one that I found most surprising was from the Cha group at Cornell, where they were using a method developed by the Schroer group at Yale (see here and here) to fabricate nanowires of two difficult-to-grow, topologically interesting metals, CoIn3 and RhIn3. The idea is to create a template with an array of tubular holes, and squeeze that template against a bulk crystal of the desired material at around 350 °C, so that the crystal is extruded into the holes to form wires. Then the template can be etched away and the wires recovered for study. I'm amazed that this works. In the afternoon, I went back and forth between the very crowded session on fractional quantum anomalous Hall physics in stacked van der Waals materials, and a contributed session about strange metals. Interesting stuff for sure. I'm still trying to figure out what to see tomorrow, but there will be another update in the evening.
[Note that this article is a transcript of the video embedded above.] Even though it’s a favorite vacation destination, the beach is surprisingly dangerous. Consider the lifeguard: There aren’t that many recreational activities in our lives that have explicit staff whose only job is to keep an eye on us, make sure we stay safe, and rescue us if we get into trouble. There are just a lot of hazards on the beach. Heavy waves, rip currents, heat stress, sunburn, jellyfish stings, sharks, and even algae can threaten the safety of beachgoers. But there’s a whole other hazard, this one usually self-inflicted, that rarely makes the list of warnings, even though it takes, on average, 2-3 lives per year just in the United States. If you know me, you know I would never discourage the act of playing with soil and sand. It’s basically what I was put on this earth to do. But I do have one exception. Because just about every year, the news reports that someone was buried when a hole they dug collapsed on top of them. There’s no central database of sandhole collapse incidents, but from the numbers we do have, about twice as many people die this way as from shark attacks in the US. It might seem like common sense not to dig a big, unsupported hole at the beach and then go inside it, but sand has some really interesting geotechnical properties that can provide a false sense of security. So, let’s use some engineering and garage demonstrations to explain why. I’m Grady and this is Practical Engineering. In some ways, geotechnical engineering might as well be called slope engineering, because it’s a huge part of what geotechnical engineers do. So many aspects of our built environment rely on the stability of sloped earth. Many dams are built from soil or rock fill using embankments. Roads, highways, and bridges rely on embankments to ascend or descend smoothly. Excavations for foundations, tunnels, and other structures have to be stable for the people working inside.
Mines carefully monitor slopes to make sure their workers are safe. Even protecting against natural hazards like landslides requires a strong understanding of geotechnical engineering. Because of all that, the science of slope stability is really deeply understood. There’s a well-developed professional consensus around the science of soil, how it behaves, and how to design around its limitations as a construction material. And I think a peek into that world will really help us understand this hazard of digging holes on the beach. Like many parts of engineering, analyzing the stability of a slope has two basic parts: the strengths and the loads. The job of a geotechnical engineer is to compare the two. The load, in this case, is kind of obvious: it’s just the weight of the soil itself. We can complicate that a bit by adding loads at the top of a slope, called surcharges, and no doubt surcharge loads have contributed to at least a few of these dangerous collapses from people standing at the edge of a hole. But for now, let’s keep it simple with just the soil’s own weight. On a flat surface, soils are generally stable. But when you introduce a slope, the weight of the soil above can create a shear failure. These failures often happen along a circular arc, because an arc minimizes the resisting forces in the soil while maximizing the driving forces. We can manually solve for the shear forces at any point in a soil mass, but that would be a fairly tedious engineering exercise, so most slope stability analyses use software. One of the simplest methods is just to let the software draw hundreds of circular arcs that represent failure planes, compute the stresses along each plane based on the weight of the soil, and then figure out if the strength of the soil is enough to withstand the stress. But what does it really mean for a soil to have strength? 
If you can imagine a sample of soil floating in space, and you apply a shear stress, those particles are going to slide apart from each other in the direction of the stress. The amount of force required to do it is usually expressed as an angle, and I can show you why. You may have done this simple experiment in high school physics where you drag a block along a flat surface and measure the force required to overcome the friction. If you add weight, you increase the force between the surfaces, called the normal force, which creates additional friction. The same is true with soils. The harder you press the particles of soil together, the better they are at resisting a shear force. In a simplified force diagram, we can draw a normal force and the resulting friction, or shear strength, that results. And the angle that hypotenuse makes with the normal force is what we call the friction angle. Under certain conditions, it’s equal to the angle of repose, the steepest angle that a soil will naturally stand. If I let sand pour out of this funnel onto the table, you can see, even as the pile gets higher, the angle of the slope of the sides never really changes. And this illustrates the complexity of slope stability really nicely. Gravity is what holds the particles together, creating friction, but it’s also what pulls them apart. And the angle of repose is kind of a line between gravity’s stabilizing and destabilizing effects on the soil. But things get more complicated when you add water to the mix. Soil particles, like all things that take up space, have buoyancy. Just like lifting a weight under water is easier, soil particles seem to weigh less when they’re saturated, so they have less friction between them. I can demonstrate this pretty easily by just moving my angle of repose setup to a water tank. It’s a subtle difference, but the angle of repose has gone down underwater. 
It’s just because the particles’ effective weight goes down, so the shear strength of the soil mass goes down too. And this doesn’t just happen under lakes and oceans. Soil holds water - I’ve covered a lot of topics on groundwater if you want to learn more. There’s this concept of the “water table” below which the soils are saturated, and they behave in the same way as my little demonstration. The water between the particles, called “pore water,” exerts pressure, pushing them away from one another and reducing the friction between them. Shear strength usually goes down for saturated soils. But, if you’ve played with sand, you might be thinking: “This doesn’t really track with my intuitions.” When you build a sand castle, you know, the dry sand falls apart, and the wet sand holds together. So let’s dive a little deeper. Friction actually isn’t the only factor that contributes to shear strength in a soil. For example, I can try to shear this clay, and there’s some resistance there, even though there is no confining force pushing the particles together. In finer-grained soils like clay, the particles themselves have molecular-level attractions that make them, basically, sticky. The geotechnical engineers call this cohesion. And it’s where sand gets a little sneaky. Water pressure in the pores between particles can push them away from each other, but it can also do the opposite. In this demo, I have some dry sand in a container with a riser pipe to show the water table connected to the side. And I’ve dyed my water black to make it easier to see. When I pour the water into the riser, what do you think is going to happen? Will the water table in the soil be higher, lower, or exactly the same as the level in the riser? Let’s try it out. Pretty much right away, you can see what happens. The sand essentially sucks the water out of the riser, lifting it higher than the level outside the sand.
If I let this settle out for a while, you can see that there’s a pretty big difference in levels, and this is largely due to capillary action. Just like a paper towel, water wicks up into the sand against the force of gravity. This capillary action actually creates negative pressure within the soil (compared to the ambient air pressure). In other words, it pulls the particles against each other, increasing the strength of the soil. It basically gives the sand cohesion, additional shear strength that doesn’t require any confining pressure. And again, if you’ve played with sand, you know there’s a sweet spot when it comes to water. Too dry, and it won’t hold together. Too wet, same thing. But if there’s just enough water, you get this strengthening effect. However, unlike clay that has real cohesion, that suction pressure can be temporary. And it’s not the only factor that makes sand tricky. The shear strength of sand also depends on how well-packed those particles are. Beach sand is usually well-consolidated because of the constant crashing waves. Let’s zoom in on that a bit. If the particles are packed together, they essentially lock together. You can see that to shear them apart doesn’t just look like a sliding motion, but also a slight expansion in volume. Engineers call this dilatancy, and you don’t need a microscope to see it. In fact, you’ve probably noticed this walking around on the beach, especially when the water table is close to the surface. Even a small amount of movement causes the sand to expand, and it’s easy to see like this because it expands above the surface of the water. The practical result of this dilatant property is that sand gets stronger as it moves, but only up to a point. Once the sand expands enough that the particles are no longer interlocked together, there’s a lot less friction between them. If you plot movement, called strain, against shear strength, you get a peak and then a sudden loss of strength. 
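The wicking height in the riser demo can be estimated with Jurin's law for capillary rise, \(h = 2\gamma \cos\theta / (\rho g r)\). Here is a rough sketch; the pore radii below are assumed illustrative values I chose, not measured beach-sand data:

```python
import math

def capillary_rise(pore_radius_m, surface_tension=0.072,
                   contact_angle_deg=0.0, water_density=1000.0, g=9.81):
    """Jurin's law: h = 2 * gamma * cos(theta) / (rho * g * r).
    Smaller pores pull water higher against gravity."""
    theta = math.radians(contact_angle_deg)
    return 2 * surface_tension * math.cos(theta) / (water_density * g * pore_radius_m)

# Assumed effective pore radii, for illustration only:
print(round(capillary_rise(1e-4), 2))  # coarse sand, r ~ 0.1 mm: ~0.15 m of rise
print(round(capillary_rise(2e-5), 2))  # fine sand, r ~ 0.02 mm: ~0.73 m of rise
```

The suction that strengthens damp sand is the flip side of this: the same curved menisci that lift water in the riser also pull the grains against each other. Flood the pores completely (or dry them out) and the menisci, and that borrowed strength, disappear.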
Hopefully you’re starting to see how all this material science adds up to a real problem. The shear strength of a soil, basically its ability to avoid collapse, is not an inherent property: it depends on a lot of factors, it can change pretty quickly, and this behavior is not really intuitive. Most of us don’t have a ton of experience with excavations. That’s part of the reason it’s so fun to go on the beach and dig a hole in the first place. We just don’t get to excavate that much in our everyday lives. So, at least for a lot of us, it’s just a natural instinct to do some recreational digging. You excavate a small hole. It’s fun. It’s interesting. The wet sand is holding up around the edges, so you dig deeper. Some people give up after the novelty wears off. Some get their friends or their kids involved to keep going. Eventually, the hole gets big enough that you have to get inside it to keep digging. With the suction pressure from the water and the shear strengthening through dilatancy, the walls have been holding the entire time, so there’s no reason to assume that they won’t just keep holding. But inside the surrounding sand, things are changing. Sand is permeable to water, meaning water moves through it pretty freely. It doesn’t take a big change to upset that delicate balance of wetness that gives sand its stability. The tide could be going out, lowering the water table and thus drying out the soil at the surface. Alternatively, a wave or the tide could add water to the surface sand, reducing the suction pressure. At the same time, tiny movements within the slopes are strengthening the sand as it tries to dilate in volume. But each little movement pushes toward that peak strength, after which it suddenly goes away. We call this a brittle failure because there’s little deformation to warn you that there’s going to be a collapse.
It happens suddenly, and if you happen to be inside a deep hole when it does, you might be just fine, like our little friend here, but if a bigger section of the wall collapses, your chance of surviving is slim. Soil is heavy. Sand has about two-and-a-half times the density of water. It just doesn’t take that much of it to trap a person. This is not just something that happens to people on vacations, by the way. Collapsing trenches and excavations are one of the most common causes of fatal construction incidents. In fact, if you live in a country with workplace health and safety laws, it’s pretty much guaranteed that within those laws are rules about working in trenches and excavations. In the US, OSHA has a detailed set of guidelines on how to stay safe when working at the bottom of a hole, including how steep slopes can be depending on the types of soil, and the devices used to shore up an excavation to keep it from collapsing while people are inside. And for certain circumstances where the risks get high enough or the excavation doesn’t fit neatly into these simplified categories, they require a professional engineer be involved. So does all this mean that anyone who’s not an engineer just shouldn’t dig holes at the beach? If you know me, you know I would never agree with that. I don’t want to come off too earnest here, but we learn through interaction. Soil and rock mechanics are incredibly important to every part of the built environment, and I think everyone should have a chance to play with sand, to get muddy and dirty, to engage and connect and commune with the stuff on which everything gets built. So, by all means, dig holes at the beach. Just don’t dig them so deep. The typical recommendation I see is to avoid going in a hole deeper than your knees. That’s pretty conservative. If you have kids with you, it’s really not much at all.
If you want to follow OSHA guidelines, you can go a little bigger: up to 20 feet (or 6 meters) in depth, as long as you slope the sides of your hole by one-and-a-half to one or about 34 degrees above horizontal. You know, ultimately you have to decide what’s safe for you and your family. My point is that this doesn’t have to be a hazard if you use a little engineering prudence. And I hope understanding some of the sneaky behaviors of beach sand can help you delight in the primitive joy of digging a big hole without putting your life at risk in the process.
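For the curious, the slope numbers above can be sanity-checked with the simplest textbook model, the infinite-slope approximation for dry, cohesionless sand, where the factor of safety reduces to FS = tan(friction angle)/tan(slope angle). This is a back-of-the-envelope sketch, not OSHA's actual method, and the 34-degree friction angle below is an assumed typical value for dry sand:

```python
import math

def factor_of_safety(friction_angle_deg, slope_angle_deg):
    """Infinite-slope approximation for dry, cohesionless sand:
    FS = tan(phi) / tan(beta). FS > 1 means the shear strength
    along the slip plane exceeds the driving shear stress."""
    return (math.tan(math.radians(friction_angle_deg))
            / math.tan(math.radians(slope_angle_deg)))

# A 1.5 horizontal : 1 vertical slope corresponds to:
slope_1p5_to_1 = math.degrees(math.atan(1 / 1.5))
print(round(slope_1p5_to_1, 1))  # ~33.7 degrees, i.e. "about 34 degrees"

# Assuming a typical dry-sand friction angle of ~34 degrees:
print(round(factor_of_safety(34, slope_1p5_to_1), 2))  # ~1.0, right at the margin
print(round(factor_of_safety(34, 60), 2))              # steep hole wall: well below 1
```

A real excavation analysis adds cohesion, pore pressure, surcharge loads, and the circular-arc searches described earlier, which is exactly why a steep sand wall that "feels" solid (thanks to temporary suction and dilatancy) can still fail without warning.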
hot combs—they all obviously benefited from the jolt of electrification. But the eraser? What was so problematic about the humble eraser that it needed electrifying? As Lukowski put it in his 1935 patent application for an apparatus for erasing, “Hand held rubbers are clumsy and cover a greater area than may be required.” Aye, there’s the rub, as it were. Lukowski’s cone-tipped electric eraser, he argued, could better handle the fine detail. Consider the careful technique Roscoe C. Sloane and John M. Montz suggest in their 1930 book Elements of Topographic Drawing. To make a correction to a map, these civil engineering professors at Ohio State University recommend the following steps: With a smooth, sharp knife, pick the ink from the paper; this can be done without marring the surface. Place a hard, smooth surface, such as a [drafting] triangle, under the erasure before rubbing starts. When practically all the ink has been removed with the knife, rub with a pencil eraser. Erasing was not for the faint of heart!

A Brief History of the Eraser

Where did the eraser get its start? The British scientist Joseph Priestley is celebrated for his discovery of oxygen and not at all celebrated for his discovery of the eraser. Around 1766, while working on The History and Present State of Electricity, he found himself having to draw his own illustrations. First, though, he had to learn to draw, and because any new artist naturally makes mistakes, he also needed to erase. Alas, there weren’t a lot of great options for erasing at the time. For items drawn in ink, he could use a knife to scrape away errors; pumice or other rough stones could also be used to abrade the page and remove the ink. To erase pencil, the customary approach was to use a piece of bread or bread crumbs to gently grind the graphite off the page. All of the methods were problematic. Without extreme care, it was easy to damage the paper.
Using bread was also messy and, as the writer and artist John Ruskin allegedly said, a waste of perfectly good bread. Priestley may have discovered the erasing properties of rubber, but it was Edward Nairne, an inventor, optician, and scientific-instrument maker, who marketed it for sale. For three shillings (about one day's wages for a skilled tradesman), you could purchase a half-inch (1.27-cm) cube of the material. Priestley acknowledged Nairne in the preface of his 1770 tutorial on how to draw, A Familiar Introduction to the Theory and Practice of Perspective, noting that caoutchouc was "excellently adapted to the purpose of wiping from paper the marks of a black-lead-pencil." By the late 1770s, cubes of caoutchouc were generally known as rubbers or lead-eaters.

Luckily, lots of other people were looking for ways to improve natural rubber, and in 1839 Charles Goodyear developed the vulcanization process. By adding sulfur to natural rubber and then heating it, Goodyear discovered how to stabilize rubber in a firm state, what we would today call the thermosetting of polymers. In 1844 Goodyear patented a process to create rubber fabric, and he went on to make rubber shoes and other products. (The tire company that bears his name was founded by the brothers Charles and Frank Seiberling several decades later.) Goodyear unfortunately died penniless, but we did get a better eraser out of his discovery.

Who Really Invented the Electric Eraser?

Albert Dremel, who opened his eponymous company in 1932, often gets credit for the invention of the electric eraser, but if that's true, I can find no definitive proof. Of the more than 50 U.S. patents held by Dremel, none is for an electric eraser. In fact, other inventors may have a better claim, such as Homer G. Coy, who filed a patent for an electrified automatic eraser in 1927, or Ola S. Pugerud, who filed a patent for a rotatable electric eraser in 1906.
The Dremel Moto-Tool, introduced in 1935, came with an array of swappable bits. One version could be used as an electric eraser. [Credit: Dremel]

In 1935 Dremel did come out with the Moto-Tool, the world's first handheld, high-speed rotary tool with interchangeable bits for sanding, engraving, burnishing, and sharpening. One version of the Moto-Tool was sold as an electric eraser, although it was held more like a hammer than a pencil.

In her book Introduction to Cataloging and the Classification of Books, Margaret Mann described a flat, round rubber eraser mounted on a motor-driven instrument similar to a dentist's drill. The eraser could remove typewriting and print from catalog cards without leaving a rough appearance. By 1937, discussions of electric erasers were part of the library science curriculum at Columbia University. Electric erasers had gone mainstream.

In 1930, the Charles Bruning Co.'s general catalog had six pages of erasers and accessories, with two pages devoted to the company's electric erasing machine. Bruning, which specialized in engineering, drafting, and surveying supplies, also offered a variety of nonelectrified eraser products, including steel erasers (also known as desk knives), eraser shields (used to isolate the area to be erased), and a chisel-shaped eraser to put on the end of a pencil.

Loren Specialty Manufacturing Co. arrived late to the electric eraser game, introducing its first such product in 1953. Held in the hand like a pen or pencil, the Presto electric eraser would vibrate to abrade a small area in need of correction. The company spun off the Presto brand in 1962, about the time the Presto Model 80 [shown at top] was produced. This particular unit was used by office workers at the New York Life Insurance Co. and is now housed at the Smithsonian's Cooper Hewitt.
The Creativity of the Eraser

When I was growing up, my dad kept an electric eraser next to his drafting table. I loved playing with it, but it wasn't until I began researching this article that I realized I had been using it all wrong. The pros know you're supposed to shape the cylindrical rubber into a point in order to erase fine lines. One such pro is the artist Darrel Tank, who specializes in pencil drawings. I watched several of his surprisingly fascinating videos comparing various models of electric erasers. Seeing Tank use his favorite electric eraser to create texture on clothing or movement in hair made me realize that drawing is not just an additive process. Sometimes it is what's removed that makes the difference.

Susan Piedmont-Palladino, an architect and professor at Virginia Tech's Washington-Alexandria Architecture Center, has also thought a lot about erasing. She curated the exhibit "Tools of the Imagination: Drawing Tools and Technologies from the Eighteenth Century to the Present" at the National Building Museum in 2005 and authored the companion book of the same title. Piedmont-Palladino describes architectural design as a long process of doing, undoing, and redoing, deciding which ideas can stay and which must go.

Of course, the pencil, the eraser (electric or not), and the computer are all just tools for transmitting and visualizing ideas. The tools of any age reflect society in ways that aren't always clear until new tools come to replace them. Both the pencil and the eraser had to be invented, and it is up to historians to make sure they aren't forgotten.

Part of a continuing series looking at historical artifacts that embrace the boundless potential of technology.
An abridged version of this article appears in the April 2025 print issue as "When Electrification Came for the Eraser."

References

The electric eraser, more than any object I have researched for Past Forward, has the most incorrect information about its history on the Internet—wrong names, bad dates, inaccurate assertions—which get repeated over and over again as fact. It's a great reminder of the need to go back to original sources.

As always, I enjoyed digging through patents to trace the history of invention and innovation in electric erasers. Other primary sources I consulted include Margaret Mann's Introduction to Cataloging and the Classification of Books, the syllabus for Columbia University's 1937 course Library Service 201, and the Charles Bruning Co.'s 1930 catalog.

Although Henry Petroski's The Pencil: A History of Design and Circumstance has only a little information on the history of erasers, it's a great read about the implement that does the writing that needs to be erased.