In partnership with Google, the Computer History Museum has released the source code to AlexNet, the neural network that in 2012 kickstarted today's prevailing approach to AI. The source code is available as open source on CHM's GitHub page.

What Is AlexNet?

AlexNet is an artificial neural network created to recognize the contents of photographic images. It was developed in 2012 by then University of Toronto graduate students Alex Krizhevsky and Ilya Sutskever and their faculty advisor, Geoffrey Hinton.

The Origins of Deep Learning

Hinton is regarded as one of the fathers of deep learning, the type of artificial intelligence that uses neural networks and is the foundation of today's mainstream AI. Simple three-layer neural networks with only one layer of adaptive weights were first built in the late 1950s—most notably by Cornell researcher Frank Rosenblatt—but they were found to have limitations. [This explainer gives more details on how neural networks work.] In particular,...
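To make the phrase "only one layer of adaptive weights" concrete, here is a minimal Rosenblatt-style perceptron sketch. This is not AlexNet's released code; the training data, learning rate, and epoch count are illustrative assumptions.

import numpy as np

# A single layer of adaptive weights, trained on the AND function (toy data).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])

w = np.zeros(X.shape[1])  # the adaptive weights
b = 0.0                   # bias term
lr = 0.1                  # assumed learning rate

for _ in range(20):  # a few passes suffice for this linearly separable toy problem
    for xi, target in zip(X, y):
        pred = 1 if xi @ w + b > 0 else 0
        # Perceptron rule: adjust weights only when the prediction is wrong.
        w += lr * (target - pred) * xi
        b += lr * (target - pred)

print(w, b)  # weights and bias that separate the AND-true input from the rest

Networks like this can only learn linearly separable functions, which is the kind of limitation the excerpt alludes to.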
a week ago

More from IEEE Spectrum

The Tiniest Flying Robot Soars Thanks to Magnets

A new prototype is laying claim to the title of smallest, lightest untethered flying robot. At less than a centimeter in wingspan, the wirelessly powered robot is currently very limited in how far it can travel away from the magnetic fields that drive its flight. However, the scientists who developed it suggest there are ways to boost its range, which could lead to potential applications such as search and rescue operations, inspecting damaged machinery in industrial settings, and even plant pollination.

One strategy to shrink flying robots involves removing their batteries and supplying them with electricity through tethers. However, tethered flying robots face problems operating freely in complex environments. This has led some researchers to explore wireless methods of powering robot flight. "The dream was to make flying robots to fly anywhere and anytime without using an electrical wire for the power source," says Liwei Lin, a professor of mechanical engineering at the University of California, Berkeley. Lin and his fellow researchers detailed their findings in Science Advances.

3D-Printed Flying Robot Design

Each flying robot has a 3D-printed body that consists of a propeller with four blades. This rotor is encircled by a ring that helps the robot stay balanced during flight. On top of each body are two tiny permanent magnets. All in all, the insect-scale prototypes have wingspans as small as 9.4 millimeters and weigh as little as 21 milligrams. Previously, the smallest reported flying robot, either tethered or untethered, was 28 millimeters wide.

When exposed to an external alternating magnetic field, the robots spin and fly without tethers. The lowest magnetic field strength needed to maintain flight is 3.1 millitesla. (In comparison, a refrigerator magnet has a strength of about 10 mT.) When the applied magnetic field alternates with a frequency of 310 hertz, the robots can hover. At 340 Hz, they accelerate upward. The researchers could steer the robots laterally by adjusting the applied magnetic fields. The robots could also right themselves after collisions to stay airborne without complex sensing or control electronics, as long as the impacts were not too large.

Experiments show the lift force the robots generate can exceed their weight by 14 percent, which helps them carry payloads. For instance, a prototype that's 20.5 millimeters wide and weighs 162.4 milligrams could carry an infrared sensor weighing 110 mg to scan its environment. The robots proved efficient at converting the energy given to them into lift force—better than nearly all other reported flying robots, tethered or untethered, and also better than fruit flies and hummingbirds.

Currently the maximum operating range of these prototypes is about 10 centimeters away from the magnetic coils. One way to extend the operating range is to increase the magnetic field strength the robots experience tenfold by adding more coils, optimizing the configuration of those coils, and using beamforming coils, Lin notes. Such developments could allow the robots to fly up to a meter away from the magnetic coils. The scientists could also miniaturize the robots even further, which would make them lighter and so reduce the magnetic field strength they need for propulsion. "It could be possible to drive micro flying robots using electromagnetic waves such as those in radio or cell phone transmission signals," Lin says.

Future research could also place devices onboard the robots that convert magnetic energy into electricity to power electronic components, the researchers add.
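As a rough back-of-the-envelope check of the payload figures above (a sketch, not from the paper; standard gravity is my assumption), the lift the 20.5-millimeter prototype must produce while carrying the sensor is simply the combined weight:

G = 9.81  # m/s^2, assumed standard gravity

robot_mass_kg = 162.4e-6   # 162.4 mg prototype
payload_mass_kg = 110e-6   # 110 mg infrared sensor

required_lift_n = (robot_mass_kg + payload_mass_kg) * G
print(f"{required_lift_n * 1e3:.2f} mN")  # ~2.67 mN of lift needed to hover with the sensor

# For scale, the smallest 21 mg robot weighs only about 0.21 mN.
print(f"{21e-6 * G * 1e3:.2f} mN")

These are tiny forces in absolute terms, which helps explain why millitesla-scale driving fields and milligram-scale magnets are workable here.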

2 days ago 5 votes
Video Friday: Watch this 3D-Printed Robot Escape

Your weekly selection of awesome robot videos

Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

RoboSoft 2025: 23–26 April 2025, LAUSANNE, SWITZERLAND
ICUAS 2025: 14–17 May 2025, CHARLOTTE, NC
ICRA 2025: 19–23 May 2025, ATLANTA, GA
London Humanoids Summit: 29–30 May 2025, LONDON
IEEE RCAR 2025: 1–6 June 2025, TOYAMA, JAPAN
2025 Energy Drone & Robotics Summit: 16–18 June 2025, HOUSTON, TX
RSS 2025: 21–25 June 2025, LOS ANGELES
ETH Robotics Summer School: 21–27 June 2025, GENEVA
IAS 2025: 30 June–4 July 2025, GENOA, ITALY
ICRES 2025: 3–4 July 2025, PORTO, PORTUGAL
IEEE World Haptics: 8–11 July 2025, SUWON, KOREA
IFAC Symposium on Robotics: 15–18 July 2025, PARIS
RoboCup 2025: 15–21 July 2025, BAHIA, BRAZIL
RO-MAN 2025: 25–29 August 2025, EINDHOVEN, NETHERLANDS

Enjoy today's videos!

This robot can walk, without electronics, and only with the addition of a cartridge of compressed gas, right off the 3D-printer. It can also be printed in one go, from one material. Researchers from the University of California San Diego and BASF describe how they developed the robot in an advance online publication in the journal Advanced Intelligent Systems. They used the simplest technology available: a desktop 3D-printer and an off-the-shelf printing material. This design approach is not only robust, it is also cheap—each robot costs about $20 to manufacture. And details! [ Paper ] via [ University of California San Diego ]

Why do you want a humanoid robot to walk like a human? So that it doesn't look weird, I guess, but it's hard to imagine that a system that doesn't have the same arrangement of joints and muscles that we do will move optimally by just trying to mimic us. [ Figure ]

I don't know how it manages it, but this little soft robotic worm somehow moves with an incredible amount of personality. Soft actuators are critical for enabling soft robots, medical devices, and haptic systems. Many soft actuators, however, require power to hold a configuration and rely on hard circuitry for control, limiting their potential applications. In this work, the first soft electromagnetic system is demonstrated for externally controlled bistable actuation or self-regulated astable oscillation. [ Paper ] via [ Georgia Tech ] Thanks, Ellen!

A 180-degree pelvis rotation would put the "break" in "breakdancing" if this were a human doing it. [ Boston Dynamics ]

My colleagues were impressed by this cooking robot, but that may be because journalists are always impressed by free food. [ Posha ]

This is our latest work about a hybrid aerial-terrestrial quadruped robot called SPIDAR, which shows unique and complex locomotion styles in both aerial and terrestrial domains, including thrust-assisted crawling motion. This work was presented at the International Symposium of Robotics Research (ISRR) 2024. [ Paper ] via [ Dragon Lab ] Thanks, Moju!

This fresh, newly captured video from Unitree's testing grounds showcases the breakneck speed of humanoid intelligence advancement. Every day brings something thrilling! [ Unitree ]

There should be more robots that you can ride around on. [ AgileX Robotics ]

There should be more robots that wear hats at work. [ Ugo ]

iRobot, which pioneered giant docks for robot vacuums, is now moving away from giant docks for robot vacuums. [ iRobot ]

There's a famous experiment where if you put a dead fish in a current, it starts swimming, just because of its biomechanical design. Somehow, you can do the same thing with an unactuated quadruped robot on a treadmill. [ Delft University of Technology ]

Mush! Narrowly! [ Hybrid Robotics ]

It's freaking me out a little bit that this couple is apparently wandering around a huge mall that is populated only by robots and zero other humans. [ MagicLab ]

I'm trying, I really am, but the yellow is just not working for me. [ Kepler ]

By having Stretch take on the physically demanding task of unloading trailers stacked floor to ceiling with boxes, Gap Inc. has reduced injuries, lowered turnover, and watched employees get excited about automation intended to keep them safe. [ Boston Dynamics ]

Since arriving at Mars in 2012, NASA's Curiosity rover has been ingesting samples of Martian rock, soil, and air to better understand the past and present habitability of the Red Planet. Of particular interest to its search are organic molecules: the building blocks of life. Now, Curiosity's onboard chemistry lab has detected long-chain hydrocarbons in a mudstone called "Cumberland," the largest organics yet discovered on Mars. [ NASA ]

This University of Toronto Robotics Institute Seminar is from Sergey Levine at UC Berkeley, on Robotics Foundation Models. General-purpose pretrained models have transformed natural language processing, computer vision, and other fields. In principle, such approaches should be ideal in robotics: since gathering large amounts of data for any given robotic platform and application is likely to be difficult, general pretrained models that provide broad capabilities present an ideal recipe to enable robotic learning at scale for real-world applications. From the perspective of general AI research, such approaches also offer a promising and intriguing approach to some of the grandest AI challenges: if large-scale training on embodied experience can provide diverse physical capabilities, this would shed light not only on the practical questions around designing broadly capable robots, but also on the foundations of situated problem-solving, physical understanding, and decision making. However, realizing this potential requires handling a number of challenging obstacles. What data shall we use to train robotic foundation models? What will be the training objective? How should alignment or post-training be done? In this talk, I will discuss how we can approach some of these challenges. [ University of Toronto ]

2 days ago 5 votes
Video Friday: Meet Mech, a Superhumanoid Robot

Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

European Robotics Forum: 25–27 March 2025, STUTTGART, GERMANY
RoboSoft 2025: 23–26 April 2025, LAUSANNE, SWITZERLAND
ICUAS 2025: 14–17 May 2025, CHARLOTTE, NC
ICRA 2025: 19–23 May 2025, ATLANTA, GA
London Humanoids Summit: 29–30 May 2025, LONDON
IEEE RCAR 2025: 1–6 June 2025, TOYAMA, JAPAN
2025 Energy Drone & Robotics Summit: 16–18 June 2025, HOUSTON, TX
RSS 2025: 21–25 June 2025, LOS ANGELES
ETH Robotics Summer School: 21–27 June 2025, GENEVA
IAS 2025: 30 June–4 July 2025, GENOA, ITALY
ICRES 2025: 3–4 July 2025, PORTO, PORTUGAL
IEEE World Haptics: 8–11 July 2025, SUWON, KOREA
IFAC Symposium on Robotics: 15–18 July 2025, PARIS
RoboCup 2025: 15–21 July 2025, BAHIA, BRAZIL

Enjoy today's videos!

Every time you see a humanoid demo in a warehouse or factory, ask yourself: Would a "superhumanoid" like this actually be a better answer? [ Dexterity ]

The only reason that this is the second video in Video Friday this week, and not the first, is because you've almost certainly already seen it. This is a collaboration between the Robotics and AI Institute and Boston Dynamics, and RAI has its own video, which is slightly different. [ Boston Dynamics ] via [ RAI ]

Well, this just looks a little bit like magic. [ University of Pennsylvania Sung Robotics Lab ]

After hours of dance battles with professional choreographers (yes, real human dancers!), PM01 now nails every iconic move from Kung Fu Hustle. [ EngineAI ]

Sanctuary AI has demonstrated industry-leading sim-to-real transfer of learned dexterous manipulation policies for our unique, high degree-of-freedom, high strength, and high speed hydraulic hands. [ Sanctuary AI ]

This video is "introducing BotQ, Figure's new high-volume manufacturing facility for humanoid robots," but I just see some injection molding and finishing of a few plastic parts. [ Figure ]

DEEP Robotics recently showcased its "One-Touch Navigation" feature, enhancing the intelligent control experience of its robotic dog. This feature offers two modes, map-based point selection and navigation and video-based point navigation, designed for open terrain and confined spaces respectively. By simply tapping on a tablet screen or selecting a point in the video feed, the robotic dog can autonomously navigate to the target point, automatically planning its path and intelligently avoiding obstacles, significantly improving traversal efficiency. What's in the bags, though? [ Deep Robotics ]

This hurts my knees to watch, in a few different ways. [ Unitree ]

Why the recent obsession with two legs when instead robots could have six? So much cuter! [ Jizai ] via [ RobotStart ]

The world must know: who killed Mini-Duck? [ Pollen ]

Seven hours of Digit robots at work at ProMat. And there are two more days of these livestreams if you need more! [ Agility ]

a week ago 9 votes
IEEE Recognizes Itaipu Dam’s Engineering Achievements

Technology should benefit humanity. One of the most remarkable examples of technology's potential to provide enduring benefits is the Itaipu Hydroelectric Dam, a massive binational energy project between Brazil and Paraguay. Built on the Paraná River, which forms part of the border between the two nations, Itaipu transformed a once-contested hydroelectric resource into a shared engine of economic progress.

The power plant has held many records. For decades, it was the world's largest hydroelectric facility; the dam spans the river's 7.9-kilometer width and reaches a height of 196 meters. Itaipu was also the first hydropower plant to generate more than 100 terawatt-hours of electricity in a year.

To acknowledge Itaipu's monumental engineering achievement, on 4 March the dam was recognized as an IEEE Milestone during a ceremony in Hernandarias, Paraguay. The ceremony commemorated the project's impact on engineering and energy production.

Itaipu's massive scale

By the late 1960s, Brazil and Paraguay recognized the Paraná River's untapped hydroelectric potential, according to the Global Infrastructure Hub. Brazil, which was undergoing rapid industrialization, sought a stable, renewable energy source to reduce its dependence on fossil fuels. Meanwhile, Paraguay, lacking the financial resources to construct a gigawatt-scale hydroelectric facility independently, entered into a treaty with Brazil in 1973. The agreement granted both countries equal ownership of the dam and its power generation.

Construction began in 1975 and was completed in 1984, costing US $19.6 billion. The scale of the project was staggering. Engineers excavated 50 million cubic meters of earth and rock, poured 12.3 million cubic meters of concrete, and used enough iron and steel to construct 380 Eiffel Towers.

Itaipu was designed for continuous expansion. It initially launched with two 700-megawatt turbine units, providing 1.4 gigawatts of capacity. By 1991, the power plant reached its planned 12.6 GW capacity. In 2006 and 2007, it was expanded to 14 GW with the addition of two more units, for a total of 20.

Although China's 22.5-GW Three Gorges Dam, on the Yangtze River near the city of Yichang, surpassed Itaipu's capacity in 2012, the South American dam remains one of the world's most productive hydroelectric facilities. On average, Itaipu generates around 90 terawatt-hours of electricity annually. It set a record by generating 103.1 TWh in 2016 (surpassed in 2020 by Three Gorges' 111.8-TWh output). To put 100 TWh into perspective, a power plant would need to burn approximately 50 million tonnes of coal to produce the same amount of energy, according to the U.S. Energy Information Administration.

By harnessing 62,200 cubic meters of river water per second, Itaipu prevents the release of nearly 100 million tonnes of carbon dioxide each year. During its 40-year lifetime, the dam has generated more than 3,000 TWh of electricity, meeting nearly 90 percent of Paraguay's energy needs and contributing roughly 10 percent of Brazil's electricity supply. Itaipu's legacy endures as a testament to the benefits of international cooperation and sustainable energy and to the power of engineering to shape the future.

IEEE recognition for Itaipu

The IEEE Milestone commemorative plaque, now displayed in the dam's visitor center, highlights Itaipu's role as a world leader in hydroelectric power generation. It reads: "Itaipu power plant construction began in 1975 as a joint Brazil-Paraguay venture. When power generation started in 1984, Itaipu set a world record for the single largest installed hydroelectric capacity (14 GW). For at least three decades, Itaipu produced more electricity annually than any other hydroelectric project. Linking power plants, substations, and transmission lines in both Brazil and Paraguay, Itaipu's system provided reliable, affordable energy to consumers and industry."

Administered by the IEEE History Center and supported by donors, the Milestone program recognizes outstanding technical developments worldwide. The IEEE Paraguay Section sponsored the nomination.
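The coal comparison above is easy to sanity-check with a rough calculation. The energy density and plant efficiency below are generic textbook-style assumptions of mine, not EIA figures, so treat this as an order-of-magnitude sketch:

# Rough check: how much electricity would 50 million tonnes of coal yield?
coal_mass_kg = 50e6 * 1000          # 50 million tonnes
energy_density_j_per_kg = 24e6      # assumed ~24 MJ/kg for typical steam coal
thermal_efficiency = 0.35           # assumed efficiency of a coal-fired plant

electricity_j = coal_mass_kg * energy_density_j_per_kg * thermal_efficiency
electricity_twh = electricity_j / 3.6e15   # 1 TWh = 3.6e15 J
print(f"{electricity_twh:.0f} TWh")        # ~117 TWh, the same ballpark as Itaipu's 103.1 TWh record

Plausible values for the assumed inputs move the result around, but the answer stays in the roughly 100 TWh range the article cites.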

a week ago 8 votes

More in science

Science updates - brief items

Here are a couple of neat papers that I came across in the last week. (Planning to write something about multiferroics as well, once I have a bit of time.)

The idea of directly extracting useful energy from the rotation of the earth sounds like something out of an H. G. Wells novel. At a rough estimate (and it's impressive to me that AI tools are now able to provide a convincing step-by-step calculation of this; I tried w/ gemini.google.com) the rotational kinetic energy of the earth is about \(2.6 \times 10^{29}\) J (a quick numerical version of that estimate is sketched at the end of this post). The tricky bit is, how do you get at it? You might imagine constructing some kind of big space-based pick-up coil and getting some inductive voltage generation as the earth rotates its magnetic field past the coil. Intuitively, though, it seems like while sitting on the (rotating) earth, you should in some sense be comoving with respect to the local magnetic field, so it shouldn't be possible to do anything clever that way. It turns out, though, that Lorentz forces still apply when moving a wire through the axially symmetric parts of the earth's field. This has some conceptual contact with Faraday's dc electric generator. With the right choice of geometry and materials, it is possible to use such an approach to extract some (tiny at the moment) power. For the theory proposal, see here. For an experimental demonstration, using thermoelectric effects as a way to measure this (and confirm that the orientation of the cylindrical shell has the expected effect), see here. I need to read this more closely to decide if I really understand the nuances of how it works.

On a completely different note, this paper came out on Friday. (Full disclosure: The PI is my former postdoc and the second author was one of my students.) It's an impressive technical achievement. We are used to the fact that usually macroscopic objects don't show signatures of quantum interference. Inelastic interactions of the object with its environment effectively suppress quantum interference effects on some time scale (and therefore some distance scale). Small molecules are expected to still show electronic quantum effects at room temperature, since they are tiny and their electronic levels are widely spaced, and here is a review of what this could do in electronic measurements. Quantum interference effects should also be possible in molecular vibrations at room temperature, and they could manifest themselves through the vibrational thermal conduction through single molecules, as considered theoretically here. This experimental paper does a bridge measurement to compare the thermal transport between a single-molecule-containing junction between a tip and a surface, and an empty (farther spaced) twin tip-surface geometry. They argue that they see differences between two kinds of molecules that originate from such quantum interference effects.

As for more global issues about the US research climate, there will be more announcements soon about reductions in force and the forthcoming presidential budget request. (Here is an online petition regarding the plan to shutter the NIST atomic spectroscopy group.) Please pay attention to these issues, and if you're a US citizen, I urge you to contact your legislators and make your voice heard.
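Here is a rough version of the \(2.6 \times 10^{29}\) J estimate mentioned above, treating the earth as a uniform-density sphere. That uniformity is my simplifying assumption; the measured moment-of-inertia factor is closer to 0.33, which brings the number down to about \(2.1 \times 10^{29}\) J.

import math

# Back-of-the-envelope rotational kinetic energy of the earth: KE = (1/2) I w^2
M = 5.972e24                 # kg, mass of the earth
R = 6.371e6                  # m, mean radius
I_uniform = 0.4 * M * R**2   # uniform sphere: I = (2/5) M R^2
omega = 2 * math.pi / 86164  # rad/s, one sidereal day

ke = 0.5 * I_uniform * omega**2
print(f"{ke:.2e} J")  # ~2.6e29 J; using the measured I ~ 0.33*M*R^2 gives ~2.1e29 J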

an hour ago 2 votes
Some Doodles I'm Proud of -- The Capping Algorithm for Embedded Graphs

This will be a really quick one! Over the last two weeks I've been finishing up a big project to make DOIs for all the papers published in TAC, and my code takes a while to run. So while testing I would hit "go" and have like 10 minutes to kill… which means it's time to start answering questions on mse again! I haven't been very active recently because I've been spending a lot of time on research and music, but it's been nice to get back into it. I'm especially proud of a few recent answers, so I think I might quickly turn them into blog posts like I did in the old days!

In this post, we'll try to understand the Capping Algorithm, which turns a graph embedded in a surface into a particularly nice embedding where the graph cuts the surface into disks. I drew some pretty pictures to explain what's going on, and I'm really pleased with how they turned out!

So, to start, what's this "capping algorithm" all about? Say you have a (finite) graph $G$ and you want to know what surfaces it embeds into. For instance, planar graphs are those which embed in $\mathbb{R}^2$ (equivalently $S^2$), and owners of this novelty mug know that even the famously nonplanar $K_{3,3}$ embeds in a torus[1].

Obviously every graph embeds into some high genus surface – just add an extra handle for every edge of the graph, and the edges can't possibly cross each other! Also, once you can embed in some surface, you can obviously embed in higher genus surfaces by just adding handles you don't use. This leads to two obvious "extremal" questions:

1. What is the smallest genus surface which $G$ embeds into?
2. What is the largest genus surface which $G$ embeds into where all the handles are necessary?

Note we can check if a handle is "necessary" or not by cutting our surface along the edges of our graph. If the handle doesn't get cut apart, then our graph $G$ must not have used it! This leads to the precise definition:

Defn: A 2-Cell Embedding of $G$ in a surface $S$ is an embedding so that all the connected components of $S \setminus G$ are 2-cells (read: homeomorphic to disks).

Then the "largest genus surface where all the handles are necessary" amounts to looking for the largest genus surface where $G$ admits a 2-cell embedding! But in fact, we can restrict attention to 2-cell embeddings in the smallest genus case too, since if we randomly embed $G$ into a surface, there's an algorithm which only ever decreases the genus and outputs a 2-cell embedding! So if $S$ is the minimal genus surface that $G$ embeds in, we can run this algorithm to get a 2-cell embedding of $G$ in $S$.

And what is that algorithm? It's called Capping; see for instance Minimal Embeddings and the Genus of a Graph by J.W.T. Youngs. The idea is to cut your surface along $G$, look for anything that isn't a disk, and "cap it off" to make it a disk. Then you repeat until everything is a disk, and you stop.

The other day somebody on mse asked about this algorithm, and I had a lot of fun drawing some pictures to show what's going on[2]! This post basically exists because I was really proud of how these drawings turned out, and wanted to share them somewhere more permanent, haha. Anyways, on with the show!

We'll start with an embedding of a graph $G$ (shown in purple) in a genus 2 surface. We'll cut it into pieces along $G$, and choose one of our non-disk pieces (call it $S$) to futz with.

Now we choose[3] a big submanifold $T \subseteq S$ which leaves behind cylinders when we remove it. Pay attention to the boundary components of $T$, called $J_1$ and $J_2$ below, since that's where we'll attach a disk to "cap off" where $T$ used to be.

We glue all our pieces back together, but remove the interior of $T$ and then, as promised, "cap off" the boundary components $J_1$ and $J_2$ with disks. Note that the genus decreased when we did this! It used to be genus 2, and now we're genus 1! Note also that $G$ still embeds into our new surface.

Let's squish it around to a homeomorphic picture, then do the same process a second time! But faster, now that we know what's going on.

At this point, we can try to do it again, but we'll find that removing $G$ cuts our surface into disks. This tells us the algorithm is done, since we've successfully produced a 2-cell embedding of $G$ ^_^.

Wow, that was a really quick one today! Start to finish in under an hour, but it makes sense since I'd already drawn the pictures and spent some time doing research for my answer the other day. Maybe I'll go play flute for a bit. Thanks for hanging out, all! Stay safe, and see you soon ^_^

[1] This photo of a solution was taken from games4life.co.uk
[2] You know it's funny, even over the course of drawing just these pictures the other day I feel like I improved a lot… I have half a mind to redraw all these pictures even better, but that would defeat the point of a quick post, so I'll stay strong!
[3] It's possible there's a unique "best" choice of $T$ and I'm just inexperienced with this algorithm. I hadn't heard of it until I wrote this answer, so there's a lot of details that I'm fuzzy on. If you happen to know a lot about this stuff, definitely let me know more!
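A quick numeric sanity check to go with the 2-cell definition above (this is just the standard Euler characteristic bookkeeping, not something from the post's pictures): for a connected graph with $V$ vertices and $E$ edges, a 2-cell embedding in the orientable surface of genus $g$ with $F$ disk faces satisfies

$$V - E + F = 2 - 2g.$$

Assuming the mug's embedding of $K_{3,3}$ in the torus is 2-cell, we have $V = 6$, $E = 9$, $g = 1$, so $F = 2 - 2 - 6 + 9 = 3$ disks. In the plane ($g = 0$) we would need $F = 5$ faces, each of length at least $4$ since $K_{3,3}$ is bipartite, but then the face lengths would sum to at least $20 > 18 = 2E$, which is impossible; that's one way to see the nonplanarity mentioned at the top of the post.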

23 hours ago 2 votes
The High Cost of Quantum Randomness Is Dropping

Randomness is essential to some research, but it's always been prohibitively complicated. Now, we can use "pseudorandomness" instead. The post The High Cost of Quantum Randomness Is Dropping first appeared on Quanta Magazine.

2 days ago 3 votes
H&M Will Use Digital Twins

The fashion retailer H&M has announced that it will start using AI-generated digital twins of models in some of its advertising. This has sparked another round of discussion about the use of AI to replace artists of various kinds. Regarding the H&M announcement specifically, the company said it will use digital twins of models that […] The post H&M Will Use Digital Twins first appeared on NeuroLogica Blog.

2 days ago 2 votes
A walk down Victoria Street

London’s mid-rise architecture

2 days ago 2 votes