More from IEEE Spectrum
A new prototype is laying claim to the title of smallest, lightest untethered flying robot. At less than a centimeter in wingspan, the wirelessly powered robot is currently very limited in how far it can travel from the magnetic fields that drive its flight. However, the scientists who developed it suggest there are ways to boost its range, which could open up applications such as search-and-rescue operations, inspecting damaged machinery in industrial settings, and even plant pollination.

One strategy for shrinking flying robots is to remove their batteries and supply them with electricity through tethers. However, tethered flying robots have trouble operating freely in complex environments, which has led some researchers to explore wireless ways of powering robot flight.

“The dream was to make flying robots to fly anywhere and anytime without using an electrical wire for the power source,” says Liwei Lin, a professor of mechanical engineering at the University of California, Berkeley. Lin and his fellow researchers detailed their findings in Science Advances.

3D-Printed Flying Robot Design

Each flying robot has a 3D-printed body that consists of a propeller with four blades. This rotor is encircled by a ring that helps the robot stay balanced during flight, and on top of each body sit two tiny permanent magnets. All in all, the insect-scale prototypes have wingspans as small as 9.4 millimeters and weigh as little as 21 milligrams. The smallest previously reported flying robot, tethered or untethered, was 28 millimeters wide.

When exposed to an external alternating magnetic field, the robots spin and fly without tethers. The lowest magnetic field strength needed to maintain flight is 3.1 millitesla. (In comparison, a refrigerator magnet has a strength of about 10 mT.) When the applied magnetic field alternates at a frequency of 310 hertz, the robots hover; at 340 Hz, they accelerate upward. The researchers could steer the robots laterally by adjusting the applied magnetic fields. As long as the impacts were not too large, the robots could also right themselves after collisions and stay airborne, without complex sensing or control electronics.

Experiments show that the lift force the robots generate can exceed their weight by 14 percent, which lets them carry payloads. For instance, a prototype 20.5 millimeters wide and weighing 162.4 milligrams could carry an infrared sensor weighing 110 mg to scan its environment. The robots proved efficient at converting the energy delivered to them into lift force—better than nearly all other reported flying robots, tethered or untethered, and also better than fruit flies and hummingbirds.

Currently the maximum operating range of these prototypes is about 10 centimeters from the magnetic coils. One way to extend that range is to increase the magnetic field strength the robots experience tenfold by adding more coils, optimizing the configuration of those coils, and using beamforming coils, Lin notes. Such developments could allow the robots to fly up to a meter away from the coils. The scientists could also miniaturize the robots further, which would make them lighter and so reduce the magnetic field strength they need for propulsion. “It could be possible to drive micro flying robots using electromagnetic waves such as those in radio or cell phone transmission signals,” Lin says.
Future research could also place devices that can convert magnetic energy to electricity onboard the robots to power electronic components, the researchers add.
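As a rough back-of-envelope illustration of that idea (my own sketch, not a design from the paper), Faraday's law sets the scale: a sinusoidal field of amplitude B0 and frequency f threading a flat coil of N turns and area A induces a peak open-circuit voltage of 2πf·N·A·B0. The snippet below plugs in the field strength and drive frequency reported for the robots, plus an assumed, purely hypothetical robot-scale pickup coil:

    import math

    # Drive-field parameters reported in the article
    B0 = 3.1e-3    # field amplitude in tesla (3.1 mT)
    f = 310.0      # drive frequency in hertz

    # Assumed (hypothetical) onboard pickup coil -- not from the paper.
    # The field is taken as uniform over the coil and aligned with its axis.
    N = 20                          # number of turns
    coil_radius = 3e-3              # meters (roughly robot-scale)
    A = math.pi * coil_radius**2    # coil area in square meters

    # Peak EMF from Faraday's law for a sinusoidal field
    emf_peak = 2 * math.pi * f * N * A * B0
    print(f"Peak open-circuit EMF: {emf_peak * 1e3:.2f} mV")

Even with these generous assumptions the result is only a few millivolts, which is one way to see why onboard energy harvesting is framed as future work rather than a solved problem.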
Your weekly selection of awesome robot videos

Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

RoboSoft 2025: 23–26 April 2025, LAUSANNE, SWITZERLAND
ICUAS 2025: 14–17 May 2025, CHARLOTTE, NC
ICRA 2025: 19–23 May 2025, ATLANTA, GA
London Humanoids Summit: 29–30 May 2025, LONDON
IEEE RCAR 2025: 1–6 June 2025, TOYAMA, JAPAN
2025 Energy Drone & Robotics Summit: 16–18 June 2025, HOUSTON, TX
RSS 2025: 21–25 June 2025, LOS ANGELES
ETH Robotics Summer School: 21–27 June 2025, GENEVA
IAS 2025: 30 June–4 July 2025, GENOA, ITALY
ICRES 2025: 3–4 July 2025, PORTO, PORTUGAL
IEEE World Haptics: 8–11 July 2025, SUWON, KOREA
IFAC Symposium on Robotics: 15–18 July 2025, PARIS
RoboCup 2025: 15–21 July 2025, BAHIA, BRAZIL
RO-MAN 2025: 25–29 August 2025, EINDHOVEN, NETHERLANDS

Enjoy today’s videos!

This robot can walk, without electronics, and only with the addition of a cartridge of compressed gas, right off the 3D-printer. It can also be printed in one go, from one material. Researchers from the University of California San Diego and BASF describe how they developed the robot in an advanced online publication in the journal Advanced Intelligent Systems. They used the simplest technology available: a desktop 3D-printer and an off-the-shelf printing material. This design approach is not only robust, it is also cheap—each robot costs about $20 to manufacture. And details!
[ Paper ] via [ University of California San Diego ]

Why do you want a humanoid robot to walk like a human? So that it doesn’t look weird, I guess, but it’s hard to imagine that a system that doesn’t have the same arrangement of joints and muscles that we do will move optimally by just trying to mimic us.
[ Figure ]

I don’t know how it manages it, but this little soft robotic worm somehow moves with an incredible amount of personality. Soft actuators are critical for enabling soft robots, medical devices, and haptic systems. Many soft actuators, however, require power to hold a configuration and rely on hard circuitry for control, limiting their potential applications. In this work, the first soft electromagnetic system is demonstrated for externally-controlled bistable actuation or self-regulated astable oscillation.
[ Paper ] via [ Georgia Tech ]
Thanks, Ellen!

A 180-degree pelvis rotation would put the “break” in “breakdancing” if this were a human doing it.
[ Boston Dynamics ]

My colleagues were impressed by this cooking robot, but that may be because journalists are always impressed by free food.
[ Posha ]

This is our latest work about a hybrid aerial-terrestrial quadruped robot called SPIDAR, which shows unique and complex locomotion styles in both aerial and terrestrial domains including thrust-assisted crawling motion. This work has been presented in the International Symposium of Robotics Research (ISRR) 2024.
[ Paper ] via [ Dragon Lab ]
Thanks, Moju!

This fresh, newly captured video from Unitree’s testing grounds showcases the breakneck speed of humanoid intelligence advancement. Every day brings something thrilling!
[ Unitree ]

There should be more robots that you can ride around on.
[ AgileX Robotics ]

There should be more robots that wear hats at work.
[ Ugo ]

iRobot, which pioneered giant docks for robot vacuums, is now moving away from giant docks for robot vacuums.
[ iRobot ]

There’s a famous experiment where if you put a dead fish in a current, it starts swimming, just because of its biomechanical design. Somehow, you can do the same thing with an unactuated quadruped robot on a treadmill.
[ Delft University of Technology ]

Mush! Narrowly!
[ Hybrid Robotics ]

It’s freaking me out a little bit that this couple is apparently wandering around a huge mall that is populated only by robots and zero other humans.
[ MagicLab ]

I’m trying, I really am, but the yellow is just not working for me.
[ Kepler ]

By having Stretch take on the physically demanding task of unloading trailers stacked floor to ceiling with boxes, Gap Inc has reduced injuries, lowered turnover, and watched employees get excited about automation intended to keep them safe.
[ Boston Dynamics ]

Since arriving at Mars in 2012, NASA’s Curiosity rover has been ingesting samples of Martian rock, soil, and air to better understand the past and present habitability of the Red Planet. Of particular interest to its search are organic molecules: the building blocks of life. Now, Curiosity’s onboard chemistry lab has detected long-chain hydrocarbons in a mudstone called “Cumberland,” the largest organics yet discovered on Mars.
[ NASA ]

This University of Toronto Robotics Institute Seminar is from Sergey Levine at UC Berkeley, on Robotics Foundation Models. General-purpose pretrained models have transformed natural language processing, computer vision, and other fields. In principle, such approaches should be ideal in robotics: since gathering large amounts of data for any given robotic platform and application is likely to be difficult, general pretrained models that provide broad capabilities present an ideal recipe to enable robotic learning at scale for real-world applications. From the perspective of general AI research, such approaches also offer a promising and intriguing approach to some of the grandest AI challenges: if large-scale training on embodied experience can provide diverse physical capabilities, this would shed light not only on the practical questions around designing broadly capable robots, but the foundations of situated problem-solving, physical understanding, and decision making. However, realizing this potential requires handling a number of challenging obstacles. What data shall we use to train robotic foundation models? What will be the training objective? How should alignment or post-training be done? In this talk, I will discuss how we can approach some of these challenges.
[ University of Toronto ]
Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

European Robotics Forum: 25–27 March 2025, STUTTGART, GERMANY
RoboSoft 2025: 23–26 April 2025, LAUSANNE, SWITZERLAND
ICUAS 2025: 14–17 May 2025, CHARLOTTE, NC
ICRA 2025: 19–23 May 2025, ATLANTA, GA
London Humanoids Summit: 29–30 May 2025, LONDON
IEEE RCAR 2025: 1–6 June 2025, TOYAMA, JAPAN
2025 Energy Drone & Robotics Summit: 16–18 June 2025, HOUSTON, TX
RSS 2025: 21–25 June 2025, LOS ANGELES
ETH Robotics Summer School: 21–27 June 2025, GENEVA
IAS 2025: 30 June–4 July 2025, GENOA, ITALY
ICRES 2025: 3–4 July 2025, PORTO, PORTUGAL
IEEE World Haptics: 8–11 July 2025, SUWON, KOREA
IFAC Symposium on Robotics: 15–18 July 2025, PARIS
RoboCup 2025: 15–21 July 2025, BAHIA, BRAZIL

Enjoy today’s videos!

Every time you see a humanoid demo in a warehouse or factory, ask yourself: Would a “superhumanoid” like this actually be a better answer?
[ Dexterity ]

The only reason that this is the second video in Video Friday this week, and not the first, is because you’ve almost certainly already seen it. This is a collaboration between the Robotics and AI Institute and Boston Dynamics, and RAI has its own video, which is slightly different.
[ Boston Dynamics ] via [ RAI ]

Well, this just looks a little bit like magic.
[ University of Pennsylvania Sung Robotics Lab ]

After hours of dance battles with professional choreographers (yes, real human dancers!), PM01 now nails every iconic move from Kung Fu Hustle.
[ EngineAI ]

Sanctuary AI has demonstrated industry-leading sim-to-real transfer of learned dexterous manipulation policies for our unique, high degree-of-freedom, high strength, and high speed hydraulic hands.
[ Sanctuary AI ]

This video is “introducing BotQ, Figure’s new high-volume manufacturing facility for humanoid robots,” but I just see some injection molding and finishing of a few plastic parts.
[ Figure ]

DEEP Robotics recently showcased its “One-Touch Navigation” feature, enhancing the intelligent control experience of its robotic dog. This feature offers two modes: map-based point selection and navigation, and video-based point navigation, designed for open terrains and confined spaces respectively. By simply tapping on a tablet screen or selecting a point in the video feed, the robotic dog can autonomously navigate to the target point, automatically planning its path and intelligently avoiding obstacles, significantly improving traversal efficiency. What’s in the bags, though?
[ Deep Robotics ]

This hurts my knees to watch, in a few different ways.
[ Unitree ]

Why the recent obsession with two legs when instead robots could have six? So much cuter!
[ Jizai ] via [ RobotStart ]

The world must know: who killed Mini-Duck?
[ Pollen ]

Seven hours of Digit robots at work at ProMat. And there are two more days of these livestreams if you need more!
[ Agility ]
In partnership with Google, the Computer History Museum has released the source code to AlexNet, the neural network that in 2012 kickstarted today’s prevailing approach to AI. The source code is available as open source on CHM’s GitHub page.

What Is AlexNet?

AlexNet is an artificial neural network created to recognize the contents of photographic images. It was developed in 2012 by then University of Toronto graduate students Alex Krizhevsky and Ilya Sutskever and their faculty advisor, Geoffrey Hinton.

The Origins of Deep Learning

Hinton is regarded as one of the fathers of deep learning, the type of artificial intelligence that uses neural networks and is the foundation of today’s mainstream AI. Simple three-layer neural networks with only one layer of adaptive weights were first built in the late 1950s—most notably by Cornell researcher Frank Rosenblatt—but they were found to have limitations. [This explainer gives more details on how neural networks work.] In particular, researchers needed networks with more than one layer of adaptive weights, but there wasn’t a good way to train them. By the early 1970s, neural networks had been largely rejected by AI researchers.

Frank Rosenblatt [left, shown with Charles W. Wightman] developed the first artificial neural network, the perceptron, in 1957. Division of Rare and Manuscript Collections/Cornell University Library

In the 1980s, neural network research was revived outside the AI community by cognitive scientists at the University of California San Diego, under the new name of “connectionism.” After finishing his Ph.D. at the University of Edinburgh in 1978, Hinton had become a postdoctoral fellow at UCSD, where he collaborated with David Rumelhart and Ronald Williams. The three rediscovered the backpropagation algorithm for training neural networks, and in 1986 they published two papers showing that it enabled neural networks to learn multiple layers of features for language and vision tasks. Backpropagation, which is foundational to deep learning today, uses the difference between the current output and the desired output of the network to adjust the weights in each layer, from the output layer backward to the input layer.

Hinton later moved to the University of Toronto. Away from the centers of traditional AI, Hinton’s work and that of his graduate students made Toronto a center of deep learning research over the coming decades. One postdoctoral student of Hinton’s was Yann LeCun, now chief scientist at Meta. While working in Toronto, LeCun showed that when backpropagation was used in “convolutional” neural networks, they became very good at recognizing handwritten numbers.

ImageNet and GPUs

Despite these advances, neural networks could not consistently outperform other types of machine learning algorithms. They needed two developments from outside of AI to pave the way. The first was the emergence of vastly larger amounts of data for training, made available through the Web. The second was enough computational power to perform this training, in the form of 3D graphics chips, known as GPUs. By 2012, the time was ripe for AlexNet.

Fei-Fei Li’s ImageNet image dataset, completed in 2009, was pivotal in training AlexNet. Here, Li [right] talks with Tom Kalil at the Computer History Museum. Douglas Fairbairn/Computer History Museum

The data needed to train AlexNet was found in ImageNet, a project started and led by Stanford professor Fei-Fei Li.
Beginning in 2006, and against conventional wisdom, Li envisioned a dataset of images covering every noun in the English language. She and her graduate students began collecting images found on the Internet and classifying them using a taxonomy provided by WordNet, a database of words and their relationships to each other. Given the scale of their task, Li and her collaborators ultimately crowdsourced the job of labeling images to gig workers, using Amazon’s Mechanical Turk platform. Li’s team then launched an ImageNet competition in 2010 to encourage research teams to improve their image recognition algorithms. But over the next two years, the best systems made only marginal improvements.

NVIDIA, cofounded by CEO Jensen Huang, had led the way in the 2000s in making GPUs more generalizable and programmable for applications beyond 3D graphics, especially with the CUDA programming system released in 2007. Both ImageNet and CUDA were, like neural networks themselves, fairly niche developments that were waiting for the right circumstances to shine. In 2012, AlexNet brought together these elements—deep neural networks, big datasets, and GPUs—for the first time, with pathbreaking results. Each of these elements needed the others.

How AlexNet Was Created

By the late 2000s, Hinton’s grad students at the University of Toronto were beginning to use GPUs to train neural networks for both image and speech recognition. Their first successes came in speech recognition, but success in image recognition would point to deep learning as a possible general-purpose solution to AI. One student, Ilya Sutskever, believed that the performance of neural networks would scale with the amount of data available, and the arrival of ImageNet provided the opportunity. In 2011, Sutskever convinced fellow grad student Alex Krizhevsky, who had a keen ability to wring maximum performance out of GPUs, to train a convolutional neural network for ImageNet, with Hinton serving as principal investigator.

AlexNet used NVIDIA GPUs running CUDA code trained on the ImageNet dataset. NVIDIA CEO Jensen Huang was named a 2024 CHM Fellow for his contributions to computer graphics chips and AI. Douglas Fairbairn/Computer History Museum

Krizhevsky had already written CUDA code for a convolutional neural network using NVIDIA GPUs, called cuda-convnet, trained on the much smaller CIFAR-10 image dataset. He extended cuda-convnet with support for multiple GPUs and other features and retrained it on ImageNet. The training was done on a computer with two NVIDIA cards in Krizhevsky’s bedroom at his parents’ house. Over the course of the next year, he constantly tweaked the network’s parameters and retrained it until it achieved performance superior to its competitors. The network would ultimately be named AlexNet, after Krizhevsky. Geoff Hinton summed up the AlexNet project this way: “Ilya thought we should do it, Alex made it work, and I got the Nobel prize.”

Krizhevsky, Sutskever, and Hinton wrote a paper on AlexNet that was published in the fall of 2012 and presented by Krizhevsky at a computer vision conference in Florence, Italy, in October. Veteran computer vision researchers weren’t convinced, but LeCun, who was at the meeting, pronounced it a turning point for AI. He was right. Before AlexNet, almost none of the leading computer vision papers used neural nets. After it, almost all of them would. In the decade that followed, neural networks would go on to synthesize believable human voices, beat champion Go players, and generate artwork, culminating with the release of ChatGPT in November 2022 by OpenAI, a company cofounded by Sutskever.
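For readers who want to poke at the architecture itself, an AlexNet-style network ships with the widely used torchvision library (a modern reimplementation, to be clear, not the original cuda-convnet code that CHM is releasing). A minimal sketch, assuming PyTorch and torchvision 0.13 or later are installed:

    import torch
    from torchvision import models

    # Build an untrained AlexNet-style network (torchvision's reimplementation,
    # not the original cuda-convnet code).
    net = models.alexnet(weights=None)
    net.eval()

    # The classifier ends in 1,000 outputs, one per ImageNet class.
    dummy_batch = torch.randn(1, 3, 224, 224)  # one fake 224x224 RGB image
    logits = net(dummy_batch)
    print(logits.shape)  # torch.Size([1, 1000])

Pretrained ImageNet weights are also available through the same call, if you want to try actual classification rather than just inspect the layers.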
Releasing the AlexNet Source Code

In 2020, I reached out to Krizhevsky to ask about the possibility of allowing CHM to release the AlexNet source code, due to its historical significance. He connected me to Hinton, who was working at Google at the time. Google owned AlexNet, having acquired DNNresearch, the company owned by Hinton, Sutskever, and Krizhevsky. Hinton got the ball rolling by connecting CHM to the right team at Google. CHM worked with the Google team for five years to negotiate the release. The team also helped us identify the specific version of the AlexNet source code to release—there have been many versions of AlexNet over the years. There are other repositories of code called AlexNet on GitHub, but many of these are re-creations based on the famous paper, not the original code. The original code is now available on CHM’s GitHub page.

This post originally appeared on the blog of the Computer History Museum.

Acknowledgments

Special thanks to Geoffrey Hinton for providing his quote and reviewing the text, to Cade Metz and Alex Krizhevsky for additional clarifications, and to David Bieber and the rest of the team at Google for their work in securing the source code release.

References

Fei-Fei Li, The Worlds I See: Curiosity, Exploration, and Discovery at the Dawn of AI. First edition, Flatiron Books, New York, 2023.

Cade Metz, Genius Makers: The Mavericks Who Brought AI to Google, Facebook, and the World. First edition, Penguin Random House, New York, 2022.
More in science
Here are a couple of neat papers that I came across in the last week. (Planning to write something about multiferroics as well, once I have a bit of time.)

The idea of directly extracting useful energy from the rotation of the earth sounds like something out of an H. G. Wells novel. At a rough estimate (and it's impressive to me that AI tools are now able to provide a convincing step-by-step calculation of this; I tried w/ gemini.google.com), the rotational kinetic energy of the earth is about \(2.6 \times 10^{29}\) J. The tricky bit is, how do you get at it? You might imagine constructing some kind of big space-based pick-up coil and getting some inductive voltage generation as the earth rotates its magnetic field past the coil. Intuitively, though, it seems like while sitting on the (rotating) earth, you should in some sense be comoving with respect to the local magnetic field, so it shouldn't be possible to do anything clever that way. It turns out, though, that Lorentz forces still apply when moving a wire through the axially symmetric parts of the earth's field. This has some conceptual contact with Faraday's dc electric generator. With the right choice of geometry and materials, it is possible to use such an approach to extract some (tiny at the moment) power. For the theory proposal, see here. For an experimental demonstration, using thermoelectric effects as a way to measure this (and confirm that the orientation of the cylindrical shell has the expected effect), see here. I need to read this more closely to decide if I really understand the nuances of how it works.

On a completely different note, this paper came out on Friday. (Full disclosure: The PI is my former postdoc and the second author was one of my students.) It's an impressive technical achievement. We are used to the fact that usually macroscopic objects don't show signatures of quantum interference. Inelastic interactions of the object with its environment effectively suppress quantum interference effects on some time scale (and therefore some distance scale). Small molecules are expected to still show electronic quantum effects at room temperature, since they are tiny and their electronic levels are widely spaced, and here is a review of what this could do in electronic measurements. Quantum interference effects should also be possible in molecular vibrations at room temperature, and they could manifest themselves through the vibrational thermal conduction through single molecules, as considered theoretically here. This experimental paper does a bridge measurement to compare the thermal transport between a single-molecule-containing junction between a tip and a surface, and an empty (farther spaced) twin tip-surface geometry. They argue that they see differences between two kinds of molecules that originate from such quantum interference effects.

As for more global issues about the US research climate, there will be more announcements soon about reductions in force and the forthcoming presidential budget request. (Here is an online petition regarding the plan to shutter the NIST atomic spectroscopy group.) Please pay attention to these issues, and if you're a US citizen, I urge you to contact your legislators and make your voice heard.
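Coming back to the earth-rotation estimate near the top of this post: as a quick sanity check on that \(2.6 \times 10^{29}\) J figure (my own back-of-envelope, not taken from the papers linked above), treating the earth as a uniform sphere with \(I = (2/5)MR^2\) and using the sidereal day reproduces the number:

    import math

    # Uniform-sphere approximation; the real earth is centrally condensed,
    # so its actual moment of inertia is roughly 20 percent smaller.
    M = 5.97e24    # mass of the earth, kg
    R = 6.37e6     # mean radius, m
    T = 86164.0    # sidereal day, s

    I = 0.4 * M * R**2        # moment of inertia of a uniform sphere
    omega = 2 * math.pi / T   # angular speed, rad/s
    KE = 0.5 * I * omega**2   # rotational kinetic energy, J

    print(f"KE ~ {KE:.1e} J")  # ~2.6e29 J

Using the earth's measured moment of inertia, about \(8.0 \times 10^{37}\) kg m\(^2\), lowers this to roughly \(2.1 \times 10^{29}\) J; either way, the reservoir is enormous, and the hard part is coupling to it.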
This will be a really quick one! Over the last two weeks I’ve been finishing up a big project to make DOIs for all the papers published in TAC, and my code takes a while to run. So while testing I would hit “go” and have like 10 minutes to kill… which means it’s time to start answering questions on mse again! I haven’t been very active recently because I’ve been spending a lot of time on research and music, but it’s been nice to get back into it. I’m especially proud of a few recent answers, so I think I might quickly turn them into blog posts like I did in the old days!

In this post, we’ll try to understand the Capping Algorithm, which turns a graph embedded in a surface into a particularly nice embedding where the graph cuts the surface into disks. I drew some pretty pictures to explain what’s going on, and I’m really pleased with how they turned out!

So, to start, what’s this “capping algorithm” all about? Say you have a (finite) graph $G$ and you want to know what surfaces it embeds into. For instance, planar graphs are those which embed in $\mathbb{R}^2$ (equivalently $S^2$), and owners of this novelty mug know that even the famously nonplanar $K_{3,3}$ embeds in a torus[1]:

Obviously every graph embeds into some high genus surface – just add an extra handle for every edge of the graph, and the edges can’t possibly cross each other! Also, once you can embed in some surface, you can obviously embed in higher genus surfaces by just adding handles you don’t use. This leads to two obvious “extremal” questions:

What is the smallest genus surface which $G$ embeds into?
What is the largest genus surface which $G$ embeds into where all the handles are necessary?

Note we can check if a handle is “necessary” or not by cutting our surface along the edges of our graph. If the handle doesn’t get cut apart, then our graph $G$ must not have used it! This leads to the precise definition:

Defn: A $2$-Cell Embedding of $G$ in a surface $S$ is an embedding so that all the connected components of $S \setminus G$ are 2-cells (read: homeomorphic to disks).

Then the “largest genus surface where all the handles are necessary” amounts to looking for the largest genus surface where $G$ admits a 2-cell embedding! But in fact, we can restrict attention to 2-cell embeddings in the smallest genus case too, since if we randomly embed $G$ into a surface, there’s an algorithm which only ever decreases the genus and outputs a 2-cell embedding! So if $S$ is the minimal genus surface that $G$ embeds in, we can run this algorithm to get a 2-cell embedding of $G$ in $S$.

And what is that algorithm? It’s called Capping; see for instance Minimal Embeddings and the Genus of a Graph by J.W.T. Youngs. The idea is to cut your surface along $G$, look for anything that isn’t a disk, and “cap it off” to make it a disk. Then you repeat until everything is a disk, and you stop. The other day somebody on mse asked about this algorithm, and I had a lot of fun drawing some pictures to show what’s going on[2]! This post basically exists because I was really proud of how these drawings turned out, and wanted to share them somewhere more permanent, haha.

Anyways, on with the show! We’ll start with an embedding of a graph $G$ (shown in purple) in a genus 2 surface:

we’ll cut it into pieces along $G$, and choose one of our non-disk pieces (call it $S$) to futz with:

Now we choose[3] a big submanifold $T \subseteq S$ which leaves behind cylinders when we remove it.
Pay attention to the boundary components of $T$, called $J_1$ and $J_2$ below, since that’s where we’ll attach a disk to “cap off” where $T$ used to be.

We glue all our pieces back together, but remove the interior of $T$, and then, as promised, “cap off” the boundary components $J_1$ and $J_2$ with disks. Note that the genus decreased when we did this! It used to be genus 2, and now we’re genus 1! Note also that $G$ still embeds into our new surface:

Let’s squish it around to a homeomorphic picture, then do the same process a second time! But faster, now that we know what’s going on:

At this point, we can try to do it again, but we’ll find that removing $G$ cuts our surface into disks:

This tells us the algorithm is done, since we’ve successfully produced a 2-cell embedding of $G$ ^_^.

Wow, that was a really quick one today! Start to finish in under an hour, but it makes sense since I’d already drawn the pictures and spent some time doing research for my answer the other day. Maybe I’ll go play flute for a bit. Thanks for hanging out, all! Stay safe, and see you soon ^_^

[1] This photo of a solution was taken from games4life.co.uk

[2] You know it’s funny, even over the course of drawing just these pictures the other day I feel like I improved a lot… I have half a mind to redraw all these pictures even better, but that would defeat the point of a quick post, so I’ll stay strong!

[3] It’s possible there’s a unique “best” choice of $T$ and I’m just inexperienced with this algorithm. I hadn’t heard of it until I wrote this answer, so there’s a lot of details that I’m fuzzy on. If you happen to know a lot about this stuff, definitely let me know more!
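One standard fact that pairs well with this post (it isn't stated above, but it is the usual reason 2-cell embeddings are so useful): once an embedding is 2-cell, Euler's formula determines the genus from a simple count,

$$V - E + F = 2 - 2g,$$

where $V$, $E$, and $F$ count the vertices, edges, and faces (the disks that $G$ cuts the surface into) and $g$ is the genus of the surface. For the mug example: $K_{3,3}$ has $V = 6$ and $E = 9$, so a 2-cell embedding in the torus ($g = 1$) must have exactly $F = 3$ faces.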
Randomness is essential to some research, but it’s always been prohibitively complicated. Now, we can use “pseudorandomness” instead. The post The High Cost of Quantum Randomness Is Dropping first appeared on Quanta Magazine
The fashion retailer H&M has announced that it will start using AI-generated digital twins of models in some of its advertising. This has sparked another round of discussion about the use of AI to replace artists of various kinds. Regarding the H&M announcement specifically, they said they will use digital twins of models that […] The post H&M Will Use Digital Twins first appeared on NeuroLogica Blog.