More from nanoscale views
Back in the dawn of the 21st century, the American Chemical Society founded a new journal, Nano Letters, to feature letter-length papers about nanoscience and nanotechnology. This was coincident with the launch of the National Nanotechnology Initiative, back before several other publishers put out their own nano-focused journals. For a couple of years now I've been an associate editor at NL, and it was a lot of fun to work with my fellow editors on putting together this roadmap, intended to give a snapshot of what we think the next quarter century might hold. I think some of my readers will get a kick out of it.
The fall semester is about to begin at my university, and I'm going to be teaching undergraduate statistical and thermal physics. This is a course I've taught before, most recently in 2019, and the mass availability of large language models and generative AI tools has changed the world in the interim. We've all seen the headlines and articles about how some of these systems can be very good at solving traditional homework and exam problems. Many of these tools are capable of summarizing written material and writing essays that are very readable. Higher education is wrestling with the essential question: What is the right working relationship between students, teachers, and these tools - one that benefits and actually educates students (both about the subject matter and about the use of the tools themselves)? Personalized individual AI tutoring seems like it could be great for teaching huge numbers of people. Conversely, if all we are doing is teaching students to copy-paste assignments into the homework-answer machine, clearly we are failing students at multiple levels.

The quote in the image here (from Kathy Hepinstall Parks) is one I came across this week; it originates in the FAQ of a writers' workshop. For my purposes I could paraphrase: Why should we learn physics (or any other science or engineering discipline) when a machine already knows the formalism and the answers? On some level, this has been a serious question since the real advent of search engines. The sum total of human knowledge is available at a few keystrokes. Teaching students rote recall of facts alone verges on pointless (though proficiency can be hugely important in some circumstances - I want a doctor who can diagnose and treat ailments without having to google a list of my symptoms). My answer to this question is layered. First, I would argue that beyond factual content we are teaching students how to think and reason.
This is, and I believe will remain, important, even in an era when AI tools are more capable and reliable than at present. I like to think that there is some net good in training your brain to work hard, to reason your way through complicated problems (in the case of physics, formulating and then solving and testing models of reality). It's hard for me to believe that this is a poor long-term strategy. Second, while maybe not as evocative as the way creative expression is described in the quote, there is real accomplishment (in your soul?) in actually learning something yourself. A huge number of people are better at playing music than I am, but that doesn't mean it wasn't worthwhile for me to play the trumpet growing up. Overworked as referencing Feynman is, the pleasure of finding things out is real.

AI/LLMs can be great tools for teachers, too. There are several applet-style demos that I've put off making for years because of how long it would take me to code them up nicely. With these modern capabilities, I've been able to make some of them now, in far less time than it would otherwise have taken, and students will get the chance to play with them. Still, the creativity involved in deciding what demos to make and how they should look and act was mine, based on knowledge and experience. People still have a lot to bring to the process, and I don't think that's going to change for a very long time.
Amazingly, this blog has now been around for more than twenty years (!) - see this first post, from June of 2005, when I had much less gray hair and there were a lot more science blogs. Thanks to all of you for sticking around. Back then, when I debuted my writing to my loyal readers (all five of them at the time), I never thought I'd keep this up.

Some info, including stats according to Blogger:
- Total views: 8.3M
- Most views in one day: 272K, this past May 31
- Top two most-viewed posts: this one from 2023, with a comment thread about Ranga Dias, and this one from 2009, titled "What is a plasmon?"

Just a reminder that I have collected a bunch of condensed matter terms and concept posts here. I've also written some career-related posts, like a guide to faculty job searches, advice on choosing a graduate school, needs-to-be-updated advice on postdoc positions, etc. Some personal favorite posts, some of which I wish had gotten more notice, include the physics of drying your hands, the physics of why whiskey stones aren't as good as ice for cooling your drink, materials and condensed matter in science fiction, the physics of vibranium, the physics of beskar, the physics of ornithopters, and why curving your pizza slice keeps it from flopping over. I'm also happy with why soft matter is hard, which was a well-viewed post. I also like to point out my essay about Jan Hendrik Schön, because I worry that people have forgotten about that episode.

Real life has intruded quite a bit into my writing time the last couple of years, but I hope to keep doing this for a while longer. I also still hope one day to find the right time and approach to write a popular book about the physics of materials, why they are amazing, and why our understanding of this physics, limited as it is, is still an astonishing intellectual achievement.
Two other things to read that I came across this week: This post about Maxwell's Demon from the Skull in the Stars blog (which has been around nearly as long as mine!) is an excellent and informative piece of writing. I'm definitely pointing my statistical and thermal physics undergraduate class to it next month. Ross McKenzie has a very nice-looking review article up on the arXiv about emergence. I haven't read it yet, but I have no doubt that it will be well-written and thought-provoking.
It's been a busy time that has cut into my blogging, but I wanted to point out some links from the past couple of weeks.

Physics Today has a cover article this past issue about what is colloquially known as static electricity, but is more technically described as triboelectricity, the transfer of charge between materials by rubbing. I just wrote about this six months ago, and the detailed mechanisms remain poorly understood. Large surface charge densities (like \(10^{12}\) electronic charges per square cm) can be created this way on insulators, leading to potential differences large enough to jump a spark from your finger to the door handle. This can also lead to sizable static electric fields near surfaces, which can reveal local variations in material properties.

That leads right into this paper (which I learned about from here) about the extreme head shapes of a family of insects called treehoppers. These little crawlies have head and body shapes that often feature cuspy, pointy bits that stick out - spines, horns, etc. As we learn early on in electrostatics, elongated and pointy shapes tend to produce large local electric fields and field gradients. The argument of this paper is that the spiky body and cranial morphology can help these insects better sense electric field distributions, making it easier for them to find their way and avoid predators.

This manuscript on the arXiv this week is a particularly nice, pedagogical review article (formatted for Rev Mod Phys) about quantum geometry and Berry curvature in condensed matter systems. I haven't had the chance to read it through, but I think it will end up being very impactful and a true resource for students learning about these topics.

Another very pretty recent preprint is this one, which examines the electronic phase diagram of twisted bilayers of WSe2 with a relative twist angle of 4.6°.
Much attention has been paid to the idea that moiré lattices can be in a regime seemingly well described by a Hubbard-like model, with an on-site Coulomb repulsion energy \(U\) and an electronic bandwidth \(W\). This paper shows an exceptionally clean example of this, where disorder seems to be very weak, electron temperatures are quite low, and the phase diagrams that emerge look remarkably like the phenomena seen in the cuprate superconductors (superconducting "domes" as a function of charge density adjacent to antiferromagnetic insulating states, with "strange metal" linear-in-\(T\) resistance in the normal state near the superconducting charge densities). Results like this make me more optimistic about overcoming some of the major challenges in using twisted van der Waals materials as simulators of hard-to-solve Hamiltonians.

I was all set to post this earlier today, with no awful news for once about science in the US that I felt compelled to discuss, but I got sidetracked by real work. Then, late this afternoon, this executive order about federal grants was released. I can't sugarcoat it - it's awful. Ignoring a large volume of inflammatory rhetoric, it contains this gem, for instance: "The grant review process itself also undermines the interests of American taxpayers." It essentially tries to bar any new calls for proposals until a new (and problematic) process is put in place at every agency (see Sect. 3(c)). Also, it says, "All else being equal, preference for discretionary awards should be given to institutions with lower indirect cost rates." Now, indirect cost rates are set by negotiations between institutions and the government. Places that do only very small volumes of research have low rates, so get ready for MIT to get fewer grants and Slippery Rock University to get more. The only certainty is that the nation's lawyers are going to have a field day with all the suits that will come out of this.
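A physics aside before moving on: the competition between \(U\) and \(W\) mentioned above can be made concrete with the smallest possible example, a two-site Hubbard model at half filling (here the "bandwidth" is set by the hopping \(t\)). This is a generic textbook illustration, not anything from the preprint; the basis ordering and parameter values are my own choices.

```python
import numpy as np

def two_site_hubbard(t, U):
    """Exact spectrum of the two-site Hubbard model at half filling, Sz = 0 sector.
    Basis: |up, dn>, |dn, up>, |updn, 0>, |0, updn>; t = hopping, U = on-site repulsion."""
    H = np.array([[0.0, 0.0,  -t,  -t],
                  [0.0, 0.0,   t,   t],
                  [ -t,   t,   U, 0.0],
                  [ -t,   t, 0.0,   U]])
    return np.linalg.eigvalsh(H)   # sorted ascending; ground state first

for U in (0.0, 1.0, 4.0, 16.0):
    E = two_site_hubbard(1.0, U)
    # At large U/t, the singlet-triplet gap approaches the superexchange scale 4 t^2 / U
    print(f"U/t = {U:5.1f}: E0/t = {E[0]:+.3f}, singlet-triplet gap/t = {E[1] - E[0]:.3f}")
```

At \(U = 0\) the ground state energy is \(-2t\) (both electrons in the bonding orbital), while for \(U \gg t\) double occupancy is frozen out and the low-energy physics collapses onto antiferromagnetic superexchange \(J \approx 4t^2/U\) - a miniature version of the crossover that the moiré systems tune through.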
The beginning of an RET poster session

Research Experience for Teachers (RET) programs are an example of the kind of programs that the National Science Foundation funds that are focused on K-12 (and broader) education. This summer I hosted a high school physics teacher in my lab for six weeks, where he worked on a brief project, with one of my doctoral students helping out in a mentoring role. Just yesterday was the big poster session for all of the participants in the program, and it was very enjoyable to talk with a whole cadre of high school science teachers from across the greater Houston area about their projects and their experiences.

Readers may be more familiar with the sibling Research Experience for Undergraduates (REU) programs, which give undergraduate students the chance to work for 10 weeks or so in a lab that is very likely not at their home institution. REUs are a great way for students interested in research to get broad exposure to new topics, meet people, acquire new skills, and, for some, figure out whether they like research (and maybe which topics excite them). The educational goal of REUs is clear: providing direct research experience to interested undergrads, ideally while advancing a research project and, for some small fraction of students, resulting in an eventual publication.

RET programs are different: They are intended as professional development. The teachers are exposed to new topics and, hopefully, a fun research environment, and they are encouraged to think carefully about how they can take the concepts they learn and translate those for the classroom. I am very much not an expert in education research, but there is evidence (see here, for example) that teachers who participate in these programs get a great deal of satisfaction and have lower attrition from the teaching profession.
(Note that it's hard to do statistics well on questions like that, since the population of teachers who seek out opportunities like this may be a special subset of the total population of teachers.) An idea that makes sense to me: Enhancing the motivation and job satisfaction of a teacher can have a larger cumulative impact on educating students than an individual research project for a single student. It would be a great shame if RET and REU programs were to fall victim to large-scale cuts at NSF. The NSF is the only science agency with education as part of its mission (at least historically). All the more reason to try to persuade appropriators not to follow the draconian presidential budget request for the agency.
More in science
Do these ideas give you the ick? Or is there something interesting here?
France built forty nuclear reactors in a decade. Here's what the world can learn from it.
So far this year, fires have burned more than 1.5 million acres across northern Portugal and northwest Spain, killing eight people and forcing tens of thousands to evacuate. The bulk of the wildfires coincided with a brutal heat wave in August, the most intense on record in Spain, which helped set the stage for the devastating burns, experts say. Read more on E360 →
Researchers have just presented the results of a collaboration among 22 neuroscience labs mapping the activity of the mouse brain down to the individual cell. The goal was to see brain activity during decision-making. Here is a summary of their findings: "Representations of visual stimuli transiently appeared in classical visual areas after stimulus onset and […]" The post "Charting The Brain's Decision-Making" first appeared on NeuroLogica Blog.
Have you ever tried programming with a language that uses musical notation? What about a language that never runs programs the same way twice? What about a language where you write code with photographs? All exist, among many others, in the world of esoteric programming languages, and Daniel Temkin has written a forthcoming book covering 44 of them, some of which exist and are usable, for some interpretation of the word "usable." The book, Forty-Four Esolangs: The Art of Esoteric Code, is out on 23 September, published by MIT Press.

I was introduced to Temkin's work at the yearly Free and Open Source Software Developers' European Meeting (FOSDEM) in Brussels in February. FOSDEM is typically full of strange and wonderful talks, where the open-source world gets to show its more unusual side. In Temkin's talk, which I later described to a friend as "the most FOSDEM talk of 2025," he demonstrated Valence, a programming language that uses eight ancient Greek measuring and numeric symbols. Temkin's intention with Valence was to emulate the ambiguity of human language. This is the complete opposite of most programming languages, where syntax typically tries to be explicit and unambiguous. "Just as you could create an English sentence like, 'Bob saw the group with the telescope,' and you can't quite be sure of whether it's Bob who has the telescope and he's seeing the group through it, or if it's the group that has the telescope," he says. "What if we wrote code that way, so you could write something, and now you have two potential programs? One where Bob has a telescope and one where the group has a telescope."

How Esoteric Languages Spark Creativity

Creating a language or an interpreter has long been a proving ground for engineers and programmers, and esoteric languages are almost as old as non-esoteric ones.
Temkin says his current effort has a lot to do with AI-generated code, which seeks to do nothing but provide seemingly straight solutions to problems, removing any sense of creativity. Esoteric languages inherently make little sense and frequently serve little purpose, making them conceptually counter to AI-generated code, and often not even understood by AI tools - almost the code equivalent of wearing clothing designed to confuse facial recognition software. While the syntax of esoteric languages may be hard to understand, the actual programming stack is often wonderfully simple. Temkin believes that part of the appeal is also to explore the complexity of modern programming.

"I come back a lot to an essay by Joseph Weizenbaum, the creator of the ELIZA chatbot, about compulsiveness and code," he says. "He described 'the computer bum,' the person who writes code and becomes obsessed with getting everything perfect, but it doesn't work the way they want. The computer is under their control. It's doing what they're telling it to do, but it's not doing what they actually want it to do."

"So they make it more complicated, and then it works the way they want," Temkin adds. "This is the classic bind in programming. We command the machine when we're writing code, but how much control do we really have over what happens? I think that we're now all used to the idea that much of what's out there in terms of code is broken in some way."

Temkin explored the idea of control in his language Olympus, where the interpreter consists of a series of Greek gods, each of which will do specific things, but only if asked the right way. "One example regarding complicating our relationship with the machine and how much we're in control is my language Olympus, where code is written to please different Greek gods," says Temkin.
"The basic idea of the language is that you write in a pseudo-natural-language style, asking various Greek gods to construct code the way that you want it to be. It's almost as if there's a layer behind the code, which is the actual code."

"You're not actually writing the code," Temkin adds. "You're writing pleas to create that code, and you have to ask nicely. For example, if you call Zeus father of the gods, you can't call him that again immediately, because he doesn't think you're trying very hard."

"And then of course, to end a block of code, you have to call on Hades to collect the souls of all the unused variables. And so on," Temkin says.

The History of Esoteric Programming Languages

Temkin continues a long-running tradition: esoteric languages date back to the early days of computing, with examples such as INTERCAL (1972), whose cryptic syntax meant coders often needed to plead with the compiler to run their programs. The scene gained momentum in 1993 with Wouter van Oortmerssen's FALSE, in which most syntax maps to a single character. Despite this, FALSE is a Turing-complete language that allows programs as complex as those in any contemporary programming language. Its syntactical restrictions meant the compiler (which translates the syntax into machine-readable instructions) was only 1 kilobyte, compared to C++ compilers, which were generally hundreds of kilobytes.

Exploring further, Chris Pressey wondered why code always had to be written from left to right, and created Befunge in 1993. "It took the idea of the single-character commands and said, if you're going to have commands that are only one letter, why do we need to read it left to right?" says Temkin. "Why can't we have code move a little bit to the right, then turn up, and then go off the page and come up off the bottom, and so on?"

"So Pressey decided to create a language that would be the most difficult language to build a compiler for," Temkin continues.
"I believe that was the original idea: allowing the code to turn in different directions and flow across the space."

Much of the mid-90s trend coincided with the rise of shareware, the demo scene, and the nascent days of the Internet, when it was necessary to make programs as small as possible in order to share them. "There's definitely a lot of crossover between these things, because they involve this kind of artistry, but also a kind of technical wizardry in showing, 'Look how much I can do with this really minimal program,'" Temkin says.

"What really interested me in esoteric languages specifically is the way that it's community-based," Temkin says. "If you make a language, it's an invitation for other people to use the language. And when you make a language and somebody else shows you what's possible to do with your language, or discovers something new about it that you couldn't have foreseen on your own."

One of Temkin's esoteric languages uses a cuneiform script. (Image: Daniel Temkin)

You can play with many of Daniel's languages on his website, as well as the Esoteric Languages Wiki, which raises the question: In the modern connected age, how does one create a shareable esoteric language?

"It's something that I've changed my attitude about over the years," says Temkin. "Early on, I thought I had to write a serious compiler for my language. But now I think what's really important is that people across different platforms and spaces can use it. So in general, I try to write everything in JavaScript when I can and have it run in the browser. If I don't, then I tend to stick with Python, as it has the largest user base. But I do get a little bored with those two languages."

"I realize there's a certain irony there," Temkin adds.
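To make Befunge's two-dimensional control flow concrete, here is a minimal sketch of an interpreter for a tiny subset of Befunge-93-style commands (digits push onto a stack, + and * operate on it, the arrows > < ^ v steer the instruction pointer, . prints, @ halts). This is my own toy illustration of the idea, not Temkin's or Pressey's code, and it implements only this handful of commands:

```python
def run_befunge(src):
    """Walk a 2-D grid of one-character commands with an instruction pointer
    that starts at the top-left moving right and wraps around at the edges."""
    grid = [list(line) for line in src.splitlines()]
    width = max(len(row) for row in grid)
    for row in grid:                 # pad rows so the grid is rectangular
        row.extend(" " * (width - len(row)))
    x, y, dx, dy = 0, 0, 1, 0        # position and direction of the pointer
    stack, out = [], []
    while True:
        c = grid[y][x]
        if c.isdigit():
            stack.append(int(c))     # digits push their value
        elif c == "+":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif c == "*":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        elif c == ">": dx, dy = 1, 0   # steer the pointer
        elif c == "<": dx, dy = -1, 0
        elif c == "^": dx, dy = 0, -1
        elif c == "v": dx, dy = 0, 1
        elif c == ".":
            out.append(str(stack.pop()))   # pop and print a number
        elif c == "@":
            return " ".join(out)           # halt
        # anything else (including space) is a no-op
        x = (x + dx) % width         # wrap around the torus-like grid
        y = (y + dy) % len(grid)

# The pointer goes down at 'v', right at '>', computes 2 + 3, climbs at '^',
# then runs leftward along the top row, printing before it hits '@'.
print(run_befunge("v @.<\n>23+^"))   # prints 5
```

The same program written on one line, "23+.@", gives the same answer; the point of Befunge is that nothing forces you to lay it out that way.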