Finding it hard to get the perfect angle for your shot? PhotoBot can take the picture for you. Tell it what you want the photo to look like, and your robot photographer will present you with references to mimic. Pick your favorite, and PhotoBot—a robot arm with a camera—will adjust its position to match the reference and take your picture. Chances are, you’ll like it better than your own photography. “It was a really fun project,” says Oliver Limoyo, one of the creators of PhotoBot. He enjoyed working at the intersection of several fields: human-robot interaction, large language models, and classical computer vision were all necessary to create the robot. Limoyo worked on PhotoBot while at Samsung, with his manager, Jimmy Li. They were working on a project to have a robot take photographs but were struggling to find a good metric for aesthetics. Then they saw the Getty Image Challenge, in which people recreated famous artwork at home during the COVID lockdown. The challenge gave Limoyo and Li...
2 months ago

More from IEEE Spectrum

The Starting Line for Self-Driving Cars

As IEEE Spectrum reported at the time, it was “the motleyest assortment of vehicles assembled in one place since the filming of Mad Max 2: The Road Warrior.” Not a single entrant made it across the finish line. Some didn’t make it out of the parking lot. So it’s all the more remarkable that in the second DARPA Grand Challenge, just a year and a half later, five vehicles crossed the finish line. Stanley, developed by the Stanford Racing Team, eked out a first-place win to claim the $2 million purse. This modified Volkswagen Touareg [shown at top] completed the 212-kilometer course in 6 hours, 54 minutes. Carnegie Mellon’s Sandstorm and H1ghlander took second and third place, respectively, with times of 7:05 and 7:14.

So how did the Grand Challenge go from a total bust to having five robust finishers in such a short period of time? It’s definitely a testament to what can be accomplished when engineers rise to a challenge. But the outcome of this one race was preceded by a much longer path of research, and that, plus a little bit of luck, is what ultimately led to victory.

Before Stanley, there was Minerva

Let’s back up to 1998, when computer scientist Sebastian Thrun was working at Carnegie Mellon and experimenting with a very different robot: a museum tour guide. For two weeks in the summer, Minerva, which looked a bit like a Dalek from “Doctor Who,” navigated an exhibit at the Smithsonian National Museum of American History. Its main task was to roll around and dispense nuggets of information about the displays.

Minerva was a museum tour-guide robot developed by Sebastian Thrun.

In an interview at the time, Thrun acknowledged that Minerva was there to entertain. But Minerva wasn’t just a people pleaser; it was also a machine learning experiment. It had to learn where it could safely maneuver without taking out a visitor or a priceless artifact. Visitor, nonvisitor; display case, not-display case; open floor, not-open floor. It had to react to humans crossing in front of it in unpredictable ways. It had to learn to “see.”

Fast-forward five years: Thrun transferred to Stanford in July 2003. Inspired by the first Grand Challenge, he organized the Stanford Racing Team with the aim of fielding a robotic car in the second competition. (The vehicle’s development is detailed in the team’s paper.) A remote-control kill switch, which DARPA required on all vehicles, would deactivate the car before it could become a danger. About 100,000 lines of code did that and much more.

Many of the other 2004 competitors regrouped to try again, and new ones entered the fray. In all, 195 teams applied to compete in the 2005 event. Teams included students, academics, industry experts, and hobbyists.

In the early hours of 8 October, the finalists gathered for the big race. Each team had a staggered start time to help avoid congestion along the route. About two hours before a team’s start, DARPA gave them a CD containing approximately 3,000 GPS coordinates representing the course. Once the team hit go, it was hands off: The car had to drive itself without any human intervention. PBS’s NOVA produced an excellent episode on the 2004 and 2005 Grand Challenges that I highly recommend if you want to get a feel for the excitement, anticipation, disappointment, and triumph.

In the 2005 Grand Challenge, Carnegie Mellon University’s H1ghlander was one of five autonomous cars to finish the race. Damian Dovarganes/AP

H1ghlander held the pole position, having placed first in the qualifying rounds, followed by Stanley and Sandstorm. H1ghlander pulled ahead early and soon had a substantial lead. That’s where luck, or rather the lack of it, came in. What went wrong with H1ghlander remained a mystery, even after extensive postrace analysis. It wasn’t until 12 years after the race—and once again with a bit of luck—that CMU discovered the problem: Pressing on a small electronic filter between the engine control module and the fuel injector caused the engine to lose power and even turn off. Team members speculated that an accident a few weeks before the competition had damaged the filter. (To learn more about how CMU finally figured this out, see Spectrum Senior Editor Evan Ackerman’s 2017 story.)

The Legacy of the DARPA Grand Challenge

Regardless of who won the Grand Challenge, many success stories came out of the contest. A year and a half after the race, Thrun had already made great progress on adaptive cruise control and lane-keeping assistance, which are now readily available on many commercial vehicles. He then worked on Google’s Street View and its initial self-driving cars. CMU’s Red Team worked with NASA to develop rovers for potentially exploring the moon or distant planets. Closer to home, they helped develop self-propelled harvesters for the agricultural sector.

Stanford team leader Sebastian Thrun holds a $2 million check, the prize for winning the 2005 Grand Challenge. Damian Dovarganes/AP

Of course, there was also a lot of hype, which tended to overshadow the race’s militaristic origins—remember, the “D” in DARPA stands for “defense.” Back in 2000, a defense authorization bill had stipulated that one-third of U.S. ground combat vehicles be “unmanned” by 2015, and DARPA conceived of the Grand Challenge to spur development of these autonomous vehicles. The U.S. military was still fighting in the Middle East, and DARPA promoters believed self-driving vehicles would help minimize casualties, particularly those caused by improvised explosive devices. DARPA followed with the 2007 Urban Challenge, in which vehicles navigated a simulated city and suburban environment; the 2012 Robotics Challenge for disaster-response robots; and the Subterranean Challenge, which concluded in 2021, for—you guessed it—robots that could get around underground.

Despite the competitions, continued military conflicts, and hefty government contracts, actual advances in autonomous military vehicles and robots did not take off to the extent desired. As of 2023, robotic ground vehicles made up only 3 percent of the global armored-vehicle market. Much of the contemporary reporting on the Grand Challenge predicted that self-driving cars would take us closer to a “Jetsons” future, with a self-driving vehicle to ferry you around. But two decades after Stanley, the rollout of civilian autonomous cars has been confined to specific applications, such as Waymo robotaxis transporting people around San Francisco or the GrubHub Starships struggling to deliver food across my campus at the University of South Carolina.

A Tale of Two Stanleys

Not long after the 2005 race, Stanley was ready to retire. Recalling his experience testing Minerva at the National Museum of American History, Thrun thought the museum would make a nice home. He loaned it to the museum in 2006, and since 2008 it has resided permanently in the museum’s collections, alongside other remarkable specimens in robotics and automobiles. In fact, it isn’t even the first Stanley in the collection.

Stanley now resides in the collections of the Smithsonian Institution’s National Museum of American History, which also houses another Stanley—this 1910 Stanley Runabout. Behring Center/National Museum of American History/Smithsonian Institution

That distinction belongs to a 1910 Stanley Runabout, an early steam-powered car introduced at a time when it wasn’t yet clear that the internal-combustion engine was the way to go. Despite clear drawbacks—steam engines had a nasty tendency to explode—“Stanley steamers” were known for their fine craftsmanship. Fred Marriott set the land speed record while driving a Stanley in 1906. It clocked in at 205.5 kilometers per hour, significantly faster than the 21st-century Stanley’s average speed of 30.7 km/h. To be fair, Marriott’s Stanley was racing over a flat, straight course rather than the off-road terrain navigated by Thrun’s Stanley.

Part of a continuing series looking at historical artifacts that embrace the boundless potential of technology. An abridged version of this article appears in the February 2025 print issue as “Slow and Steady Wins the Race.”

References

Sebastian Thrun and his colleagues at the Stanford Artificial Intelligence Laboratory, along with members of the other groups that sponsored Stanley, published “Stanley: The Robot That Won the DARPA Grand Challenge.” This paper, from the Journal of Field Robotics, explains the vehicle’s development.

The NOVA PBS episode “The Great Robot Race” provides interviews and video footage from both the failed first Grand Challenge and the successful second one. I personally liked the side story of GhostRider, an autonomous motorcycle that competed in both competitions but didn’t quite cut it. (GhostRider also now resides in the Smithsonian’s collection.)

Smithsonian curator Carlene Stephens kindly talked with me about how she collected Stanley for the National Museum of American History and where she sees artifacts like this fitting into the stream of history.

8 hours ago 2 votes
Video Friday: Aibo Foster Parents

Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

RoboCup German Open: 12–16 March 2025, NUREMBERG, GERMANY
German Robotics Conference: 13–15 March 2025, NUREMBERG, GERMANY
European Robotics Forum: 25–27 March 2025, STUTTGART, GERMANY
RoboSoft 2025: 23–26 April 2025, LAUSANNE, SWITZERLAND
ICUAS 2025: 14–17 May 2025, CHARLOTTE, NC
ICRA 2025: 19–23 May 2025, ATLANTA, GA
London Humanoids Summit: 29–30 May 2025, LONDON
IEEE RCAR 2025: 1–6 June 2025, TOYAMA, JAPAN
2025 Energy Drone & Robotics Summit: 16–18 June 2025, HOUSTON, TX
RSS 2025: 21–25 June 2025, LOS ANGELES

Enjoy today’s videos!

This video about ‘foster’ Aibos helping kids at a children’s hospital is well worth turning on auto-translated subtitles for. [ Aibo Foster Program ]

Hello everyone, let me introduce myself again. I am Unitree H1 “Fuxi”. I am now a comedian at the Spring Festival Gala, hoping to bring joy to everyone. Let’s push boundaries every day and shape the future together. [ Unitree ]

Happy Chinese New Year from PNDbotics! [ PNDbotics ]

In celebration of the upcoming Year of the Snake, TRON 1 swishes into three little lions, eager to spread hope, courage, and strength to everyone in 2025. Wishing you a Happy Chinese New Year and all the best, TRON TRON TRON! [ LimX Dynamics ]

Designing planners and controllers for contact-rich manipulation is extremely challenging, as contact violates the smoothness conditions that many gradient-based controller-synthesis tools assume. We introduce natural baselines for leveraging contact smoothing to compute (a) open-loop plans robust to uncertain conditions and/or dynamics, and (b) feedback gains to stabilize around open-loop plans. Mr. Bucket is my favorite. [ Mitsubishi Electric Research Laboratories ] Thanks, Yuki!

What do you get when you put three aliens in a robotaxi? The first-ever Zoox commercial! We hope you have as much fun watching it as we had creating it and can’t wait for you to experience your first ride in the not-too-distant future. [ Zoox ]

The Humanoids Summit at the Computer History Museum in December was successful enough (either because of or in spite of my active participation) that it’s not only happening again in 2025, there’s also going to be a spring version of the conference in London in May! [ Humanoids Summit ]

I’m not sure it’ll ever be practical at scale, but I do really like JSK’s musculoskeletal humanoid work. [ Paper ]

In November 2024, as part of the CRS-31 mission, flight controllers remotely maneuvered Canadarm2 and Dextre to extract a payload from the SpaceX Dragon cargo ship’s trunk and install it on the International Space Station. This animation was developed in preparation for the operation and shows just how complex robotic tasks can be. [ Canadian Space Agency ]

Staci Americas, a third-party logistics provider, addressed its inventory challenges by implementing the Corvus One™ Autonomous Inventory Management System in its Georgia and New Jersey facilities. The system uses autonomous drones for nightly, lights-out inventory scans, identifying discrepancies and improving workflow efficiency. [ Corvus Robotics ] Thanks, Joan!

I would have said that this controller was too small to be manipulated with a pinch grasp. I would be wrong. [ Pollen ]

How does NASA plan to use resources on the surface of the Moon? One method is the ISRU Pilot Excavator, or IPEx! Designed by Kennedy Space Center’s Swamp Works team, the primary goal of IPEx is to dig up lunar soil, known as regolith, and transport it across the Moon’s surface. [ NASA ]

The TBS Mojito is an advanced forward-swept FPV flying-wing platform that delivers unmatched efficiency and flight endurance. By focusing relentlessly on minimizing drag, the wing reaches speeds upwards of 200 km/h (125 mph) while cruising at 90–120 km/h (60–75 mph) with minimal power consumption. [ Team BlackSheep ]

At Zoox, safety is more than a priority—it’s foundational to our mission and one of the core reasons we exist. Our System Design & Mission Assurance (SDMA) team is responsible for building the framework for safe autonomous driving. Our co-founder and CTO, Jesse Levinson, and our senior director of SDMA, Qi Hommes, hosted a LinkedIn Live to provide an insider’s overview of the teams responsible for developing the metrics that ensure our technology is safe for deployment on public roads. [ Zoox ]

yesterday 2 votes
AIs and Robots Should Sound Robotic

AI-generated voices can now mimic every vocal nuance and tic of human speech, down to specific regional accents. And with just a few seconds of audio, AI can clone someone’s specific voice. AI agents will make calls on our behalf, conversing with others in natural language. All of that is happening, and will be commonplace soon.

You can’t just label AI-generated speech. It will come in many different forms. So we need a way to recognize AI that works no matter the modality. It needs to work for long or short snippets of audio, even just a second long. It needs to work for any language, and in any cultural context. At the same time, we shouldn’t constrain the underlying system’s sophistication or language complexity.

We have a simple proposal: All talking AIs and robots should use a ring modulator. In the mid-twentieth century, before it was easy to create actual robotic-sounding speech synthetically, ring modulators were used to make actors’ voices sound robotic. Over the last few decades, we have become accustomed to robotic voices, simply because text-to-speech systems were good enough to produce intelligible speech that was not human-like in its sound. Now we can use that same technology to make AI speech that is indistinguishable from human speech sound robotic again.

Responsible AI companies that provide voice synthesis or AI voice assistants in any form should add a ring modulator of some standard frequency (say, between 30 and 80 Hz) and of a minimum amplitude (say, 20 percent). That’s it. People will catch on quickly.

Here are a couple of clips you can listen to for examples of what we’re suggesting. The first is an AI-generated “podcast” of this article made by Google’s NotebookLM, featuring two AI “hosts.” Google’s NotebookLM created the podcast script and audio given only the text of this article. The next two clips feature that same podcast with the AIs’ voices modulated, more and less subtly, by a ring modulator:

Raw audio sample generated by Google’s NotebookLM
Audio sample with added ring modulator (30 Hz, 25 percent)
Audio sample with added ring modulator (30 Hz, 40 percent)

We were able to generate the audio effect with a 50-line Python script generated by Anthropic’s Claude; a stripped-down sketch appears at the end of this article. Some of the most well-known robot voices were those of the Daleks from Doctor Who in the 1960s. Back then, robot voices were difficult to synthesize, so the audio was actually an actor’s voice run through a ring modulator. It was set to around 30 Hz, as we did in our examples, with different modulation depth (amplitude) depending on how strong the robotic effect was meant to be. Our expectation is that the AI industry will test and converge on a good balance of such parameters and settings, and will use better tools than a 50-line Python script, but this highlights how simple the effect is to achieve.

We don’t expect scammers to follow our proposal: They’ll find a way no matter what. But that’s always true of security standards, and a rising tide lifts all boats. We think the bulk of the uses will be with popular voice APIs from major companies, and everyone should know that they’re talking with a robot.
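For readers who want to experiment, here is a minimal sketch of such a ring modulator. This is not the authors’ Claude-generated script; it assumes a mono, 16-bit PCM WAV file, and the file names and default parameters are illustrative.

# ring_mod.py -- minimal ring-modulator sketch (not the authors' script).
# Assumes a mono, 16-bit PCM WAV; file names and defaults are illustrative.
import wave

import numpy as np

def ring_modulate(samples: np.ndarray, rate: int,
                  freq: float = 30.0, depth: float = 0.25) -> np.ndarray:
    """Mix the signal with a low-frequency sine carrier.

    depth=0.0 leaves the audio unchanged; depth=1.0 is full ring modulation.
    """
    t = np.arange(len(samples)) / rate
    carrier = np.sin(2 * np.pi * freq * t)
    return samples * ((1.0 - depth) + depth * carrier)

with wave.open("voice.wav", "rb") as f:      # hypothetical input file
    rate = f.getframerate()
    raw = f.readframes(f.getnframes())
audio = np.frombuffer(raw, dtype=np.int16).astype(np.float64)

out = ring_modulate(audio, rate, freq=30.0, depth=0.25)

with wave.open("voice_robotic.wav", "wb") as f:
    f.setnchannels(1)    # mono, matching the input assumption
    f.setsampwidth(2)    # 16-bit samples
    f.setframerate(rate)
    f.writeframes(np.clip(out, -32768, 32767).astype(np.int16).tobytes())

The core effect is a single multiply per sample; a production version would also handle stereo files and other sample widths.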

2 days ago 6 votes
Sony Kills Recordable Blu-ray and Other Vintage Media

Physical media fans need not panic yet—you’ll still be able to buy new Blu-ray movies for your collection. But for those who like to save copies of their own data onto the discs, the remaining options just became more limited: Sony announced last week that it’s ending all production of several recordable media formats—including Blu-ray discs, MiniDiscs, and MiniDV cassettes—with no successor models.

“Considering the market environment and future growth potential of the market, we have decided to discontinue production,” a representative of Sony said in a brief statement to IEEE Spectrum.

Though availability is dwindling, most Blu-ray discs are unaffected. The discs being discontinued are currently only available to consumers in Japan and some professional markets elsewhere, according to Sony. Many consumers in Japan use blank Blu-ray discs to save TV programs, Sony separately told Gizmodo.

Sony, which prototyped the first Blu-ray discs in 2000, has been selling commercial Blu-ray products since 2006. Development of Blu-ray was started by Philips and Sony in 1995, shortly after Toshiba’s DVD was crowned the winner of the battle to replace the VCR, notes engineer Kees Immink, whose coding work was instrumental in developing optical formats such as CDs, DVDs, and Blu-ray discs. “Philips [and] Sony were so frustrated by that loss that they started a new disc format, using a blue laser,” Immink says.

Blu-ray’s Short-Lived Media Dominance

The development took longer than expected, but when Blu-ray was finally introduced a decade later, it was on its way to becoming the medium for distributing video, as DVD discs and VHS tapes had done in their heydays. In 2008, Spectrum covered the moment when Blu-ray’s major competitor, HD-DVD, surrendered. But the timing was unfortunate: The rise of streaming made it an empty victory. Still, Blu-rays continue to have value as collector’s items for many film buffs who want high-quality recordings not subject to the compression artifacts that can arise with streaming, not to mention those wary of losing access to movies due to the vagaries of streaming services’ licensing deals.

Sony’s recent announcement does, however, cement the death of the MiniDV cassette and MiniDisc. MiniDV, a magnetic cassette meant to replace VHS tapes at one-fifth the size, was once a popular format for digital video. The MiniDisc, an erasable magneto-optical disc that can hold up to 80 minutes of digitized audio, still has a small following. The 64-millimeter (2.5-inch) discs, held in a plastic cartridge similar to a floppy disk, were developed in the mid-1980s as a replacement for analog cassette tapes. Sony finally released the product in 1992, and it was popular in Japan into the 2000s.

To record data onto optical storage like CDs and Blu-rays, lasers etch microscopic pits into the surface of the disc to represent ones and zeros. Lasers are also used to record data onto MiniDiscs, but instead of making indentations, they are used to change the magnetization of the material: The laser heats up one spot on the disc, making the material susceptible to a magnetic field that sets the magnetization of the heated area. In playback, that magnetization rotates the polarization of the reflected laser light, which translates to a one or a zero.

When the technology behind media storage formats like the MiniDisc and Blu-ray was first being developed, the engineers involved believed the technology would be used well into the future, says optics engineer Joseph Braat. His research at Philips with Immink served as the basis of the MiniDisc. Despite that optimism, “the density of information in optical storage was limited from the very beginning,” Braat says. Even with the short wavelength of blue light, Blu-ray soon hit a limit on how much data could be stored. Even dual-layer Blu-ray discs can hold only 50 gigabytes per side; that amount of data will give you 50 hours of standard-definition streaming on Netflix, or about seven hours of 4K video content.

MiniDiscs still have a small, dedicated niche of enthusiasts, with active social media communities and in-person disc swaps. But since Sony stopped production of MiniDisc devices in 2013, the retro format has effectively been on technological hospice care, with the company offering only blank discs and repair services. Now, it seems, it’s officially over.

2 days ago 3 votes
Just How Many Robots Can One Person Control at Once?

This article is part of our exclusive IEEE Journal Watch series in partnership with IEEE Xplore.

Swarms of autonomous robots are increasingly being tested and deployed in complex missions, yet a certain level of human oversight during these missions is still required. Which means a major question remains: How many robots—and how complex a mission—can a single human manage before becoming overwhelmed?

In a study funded by the U.S. Defense Advanced Research Projects Agency (DARPA), researchers show that a single human can effectively manage a heterogeneous swarm of more than 100 autonomous ground and aerial vehicles, feeling overwhelmed only for brief periods during a small portion of the mission. For instance, in a particularly challenging, multiday experiment in an urban setting, human controllers were overloaded with stress and workload only 3 percent of the time. The results were published 19 November in IEEE Transactions on Field Robotics.

Julie A. Adams, the associate director of research at Oregon State University’s Collaborative Robotics and Intelligent Systems Institute, has been studying human interactions with robots and other complex systems, such as aircraft cockpits and nuclear power plant control rooms, for 35 years. She notes that robot swarms can be used to support missions where the work may be particularly dangerous and hazardous for humans, such as monitoring wildfires.

“Swarms can be used to provide persistent coverage of an area, such as monitoring for new fires or looters in the recently burned areas of Los Angeles,” Adams says. “The information can be used to direct limited assets, such as firefighting units or water tankers, to new fires and hotspots, or to locations at which fires were thought to have been extinguished.”

These kinds of missions can involve a mix of many different kinds of unmanned ground vehicles (such as the Aion Robotics R1 wheeled robot) and aerial autonomous vehicles (like the ModalAI VOXL M500 quadcopter), and a human controller may need to reassign individual robots to different tasks as the mission unfolds.

Notably, some theories over the past few decades—and even Adams’ early thesis work—suggest that a single human has a limited capacity to deploy very large numbers of robots. “These historical theories and the associated empirical results showed that as the number of ground robots increased, so did the human’s workload, which often resulted in reduced overall performance,” says Adams. She notes that, although earlier research focused on unmanned ground vehicles (UGVs), which must deal with curbs and other physical barriers, unmanned aerial vehicles (UAVs) often encounter fewer physical barriers.

Human controllers managed their swarms of autonomous vehicles with a virtual display. The fuchsia ring represents the area the person could see within their head-mounted display. DARPA

As part of DARPA’s OFFensive Swarm-Enabled Tactics (OFFSET) program, Adams and her colleagues sought to explore whether these theories applied to very complex missions involving a mix of unmanned ground and air vehicles. In November 2021, at Fort Campbell in Kentucky, two human controllers took turns engaging in a series of missions over the course of three weeks, with the objective of neutralizing an adversarial target. Both human controllers had significant experience controlling swarms, and they participated in alternating shifts that ranged from 1.5 to 3 hours per day.

Testing How Big of a Swarm Humans Can Manage

During the tests, the human controllers were positioned in a designated area on the edge of the testing site and used a virtual reconstruction of the environment to keep tabs on where vehicles were and what tasks they were assigned. The largest mission shift involved 110 drones, 30 ground vehicles, and up to 50 virtual vehicles representing additional real-world vehicles. The robots had to navigate through the physical urban environment, as well as a series of virtual hazards represented using AprilTags—simplified QR codes that could represent imaginary hazards—scattered throughout the mission site. DARPA made the final field exercise exceptionally challenging by providing thousands of hazards and pieces of information to inform the search.

“The complexity of the hazards was significant,” Adams says, noting that some hazards required multiple robots to interact with them simultaneously, and some hazards moved around the environment.

Throughout each mission shift, the human controller’s physiological responses to the tasks at hand were monitored. For example, sensors collected data on heart-rate variability, posture, and even speech rate. The data were fed into an established algorithm that estimates workload levels, which was used to determine when the controller was reaching a workload level that exceeded the normal range, called an “overload state.”

Adams notes that, despite the complexity and the large number of robots to manage in this field exercise, the overload-state instances were relatively few and short—a handful of minutes during a mission shift. “The total percentage of estimated overload states was 3 percent of all workload estimates across all shifts for which we collected data,” she says.

The most common reason for a controller to reach an overload state was having to generate multiple new tactics or inspect which vehicles in the launch zone were available for deployment. Adams notes that these findings suggest that—counter to past theories—the number of robots may be less influential on human swarm-control performance than previously thought. Her team is exploring other factors that may impact swarm-control missions, such as other human limitations, system designs, and UAS designs; the results will potentially inform U.S. Federal Aviation Administration drone regulations, she says.
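To make the thresholding idea concrete, here is a toy sketch. The study used an established, more sophisticated workload model; every feature name, weight, and the two-standard-deviation “normal range” below are assumptions for illustration only.

# workload_sketch.py -- toy illustration only; NOT the study's algorithm.
# Feature names, weights, and the 2-sigma "normal range" are assumptions.
import numpy as np

# Assumed physiological features, normalized to [0, 1]. Lower heart-rate
# variability is treated here as a sign of higher workload (negative weight).
FEATURE_WEIGHTS = {"hrv": -0.5, "speech_rate": 0.3, "posture_motion": 0.2}

def workload_index(sample: dict) -> float:
    """Collapse one time step's features into a single workload score."""
    return sum(w * sample[name] for name, w in FEATURE_WEIGHTS.items())

def overload_flags(mission: list, baseline: list) -> list:
    """Flag mission samples whose score leaves the baseline's normal range,
    defined here as mean +/- 2 standard deviations of low-stress data."""
    base = np.array([workload_index(s) for s in baseline])
    lo, hi = base.mean() - 2 * base.std(), base.mean() + 2 * base.std()
    return [not (lo <= workload_index(s) <= hi) for s in mission]

# Example: estimate the fraction of a shift spent in an overload state.
baseline = [{"hrv": 1.0 - 0.02 * i, "speech_rate": 0.2, "posture_motion": 0.1}
            for i in range(20)]
mission = [{"hrv": 0.2, "speech_rate": 0.9, "posture_motion": 0.8},  # stressed
           {"hrv": 0.9, "speech_rate": 0.3, "posture_motion": 0.2}]  # calm
flags = overload_flags(mission, baseline)
print(f"overloaded during {100 * sum(flags) / len(flags):.0f}% of samples")

Counting the flagged samples over a shift yields a percentage directly comparable to the 3 percent figure Adams reports.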

6 days ago 10 votes

More in AI

Why does AI slop feel so bad to read?

I don’t like reading obviously AI-generated content on Twitter. There’s a derogatory term for it: AI “slop”, which means something like “AI…

22 hours ago 3 votes
AI Roundup 103: The DeepSeek edition

January 31, 2025.

yesterday 3 votes