This article is part of our exclusive IEEE Journal Watch series in partnership with IEEE Xplore.

Swarms of autonomous robots are increasingly being tested and deployed in complex missions, yet these missions still require a certain level of human oversight. That means a major question remains: How many robots, and how complex a mission, can a single human manage before becoming overwhelmed?

In a study funded by the U.S. Defense Advanced Research Projects Agency (DARPA), experts show that humans can single-handedly and effectively manage a heterogeneous swarm of more than 100 autonomous ground and aerial vehicles, while feeling overwhelmed only for brief periods, during a small portion of the overall mission. For instance, in a particularly challenging, multi-day experiment in an urban setting, human controllers were overloaded with stress and workload only 3 percent of the time. The results were published 19 November in IEEE Transactions on Field Robotics.

Julie A. Adams, the associate director of research at Oregon State University's Collaborative Robotics and Intelligent Systems Institute, has been studying human interactions with robots and other complex systems, such as aircraft cockpits and nuclear power plant control rooms, for 35 years. She notes that robot swarms can support missions where the work is particularly dangerous for humans, such as monitoring wildfires.

"Swarms can be used to provide persistent coverage of an area, such as monitoring for new fires or looters in the recently burned areas of Los Angeles," Adams says. "The information can be used to direct limited assets, such as firefighting units or water tankers, to new fires and hotspots, or to locations at which fires were thought to have been extinguished."

These kinds of missions can involve a mix of many different kinds of unmanned ground vehicles (such as the Aion Robotics R1 wheeled robot) and autonomous aerial vehicles (like the Modal AI VOXL M500 quadcopter), and a human controller may need to reassign individual robots to different tasks as the mission unfolds.

Notably, some theories over the past few decades (and even Adams' early thesis work) suggest that a single human has limited capacity to deploy very large numbers of robots. "These historical theories and the associated empirical results showed that as the number of ground robots increased, so did the human's workload, which often resulted in reduced overall performance," says Adams. She notes that although earlier research focused on unmanned ground vehicles (UGVs), which must deal with curbs and other physical barriers, unmanned aerial vehicles (UAVs) often encounter fewer physical barriers.

Human controllers managed their swarms of autonomous vehicles with a virtual display. The fuchsia ring represents the area the person could see within their head-mounted display. Credit: DARPA

As part of DARPA's OFFensive Swarm-Enabled Tactics (OFFSET) program, Adams and her colleagues sought to explore whether these theories applied to very complex missions involving a mix of unmanned ground and air vehicles. In November 2021, at Fort Campbell in Kentucky, two human controllers took turns engaging in a series of missions over the course of three weeks, with the objective of neutralizing an adversarial target. Both human controllers had significant experience controlling swarms, and they worked in alternating shifts that ranged from 1.5 to 3 hours per day.
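The retasking workflow described above (one operator, many heterogeneous vehicles, tasks changing mid-mission) can be made concrete with a toy data structure. Everything in the sketch below, including the type names and task strings, is invented for illustration and is not OFFSET code:

```python
from dataclasses import dataclass, field

@dataclass
class Vehicle:
    vid: str           # e.g., "uav-017" or "ugv-003" (hypothetical IDs)
    kind: str          # "UAV" or "UGV"
    task: str = "idle"

@dataclass
class Swarm:
    vehicles: dict = field(default_factory=dict)

    def add(self, v: Vehicle) -> None:
        self.vehicles[v.vid] = v

    def reassign(self, vid: str, task: str) -> None:
        # One operator retasks a single vehicle; the rest of the
        # swarm keeps executing its current tactics autonomously.
        self.vehicles[vid].task = task

# Example: a heterogeneous swarm on the scale of the study's largest shift.
swarm = Swarm()
for i in range(110):
    swarm.add(Vehicle(f"uav-{i:03d}", "UAV", task="survey"))
for i in range(30):
    swarm.add(Vehicle(f"ugv-{i:03d}", "UGV", task="patrol"))
swarm.reassign("uav-007", "inspect-hazard")
```

The point of the study is precisely how much of this per-vehicle retasking a single person can sustain before their workload spikes.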
Testing How Big of a Swarm Humans Can Manage

During the tests, the human controllers were positioned in a designated area on the edge of the testing site and used a virtual reconstruction of the environment to keep tabs on where vehicles were and what tasks they were assigned to. The largest mission shift involved 110 drones, 30 ground vehicles, and up to 50 virtual vehicles representing additional real-world vehicles. The robots had to navigate the physical urban environment, as well as a series of virtual hazards, represented using AprilTags (simplified QR codes that could represent imaginary hazards) scattered throughout the mission site.

DARPA made the final field exercise exceptionally challenging by providing thousands of hazards and pieces of information to inform the search. "The complexity of the hazards was significant," Adams says, noting that some hazards required multiple robots to interact with them simultaneously, and some hazards moved around the environment.

Throughout each mission shift, the human controller's physiological responses to the tasks at hand were monitored. For example, sensors collected data on their heart-rate variability, posture, and even their speech rate. The data were fed into an established algorithm that estimates workload levels, and were used to determine when the controller was exceeding a normal workload range, called an "overload state."

Adams notes that, despite the complexity of the field exercise and the large number of robots to manage, the overload states were relatively few and short: a handful of minutes during a mission shift. "The total percentage of estimated overload states was 3 percent of all workload estimates across all shifts for which we collected data," she says.

The most common reason for a human controller to reach an overload state was having to generate multiple new tactics or inspect which vehicles in the launch zone were available for deployment.

Adams notes that these findings suggest that, counter to past theories, the number of robots may be less influential on human swarm-control performance than previously thought. Her team is exploring other factors that may affect swarm-control missions, such as human limitations, system designs, and UAS designs; the results could potentially inform U.S. Federal Aviation Administration drone regulations, she says.
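The paper's actual workload algorithm isn't detailed above, but the pipeline it describes (physiological features in, a stream of workload estimates and overload flags out) can be sketched roughly. The feature names, weights, and threshold below are all made up for illustration:

```python
import statistics

# Hypothetical per-window features; names, weights, and threshold are
# illustrative only. Lower heart-rate variability typically accompanies
# higher workload, hence the negative weight.
WEIGHTS = {"hrv": -0.5, "posture_motion": 0.2, "speech_rate": 0.3}
OVERLOAD_THRESHOLD = 1.5  # std. deviations above the operator's baseline

def zscores(values):
    """Normalize a feature against the operator's own baseline."""
    mu = statistics.mean(values)
    sd = statistics.stdev(values) or 1.0
    return [(v - mu) / sd for v in values]

def workload_series(windows):
    """windows: list of dicts keyed by the feature names in WEIGHTS."""
    feats = {k: zscores([w[k] for w in windows]) for k in WEIGHTS}
    return [sum(WEIGHTS[k] * feats[k][i] for k in WEIGHTS)
            for i in range(len(windows))]

def overload_fraction(windows):
    """Fraction of workload estimates flagged as overload states."""
    scores = workload_series(windows)
    return sum(s > OVERLOAD_THRESHOLD for s in scores) / len(scores)
```

In the field exercise, that final fraction is what came out to roughly 0.03 across all shifts for which data were collected.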
Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

RoboCup German Open: 12–16 March 2025, Nuremberg, Germany
German Robotics Conference: 13–15 March 2025, Nuremberg, Germany
RoboSoft 2025: 23–26 April 2025, Lausanne, Switzerland
ICUAS 2025: 14–17 May 2025, Charlotte, N.C.
ICRA 2025: 19–23 May 2025, Atlanta, Ga.
IEEE RCAR 2025: 1–6 June 2025, Toyama, Japan
RSS 2025: 21–25 June 2025, Los Angeles
IAS 2025: 30 June–4 July 2025, Genoa, Italy
ICRES 2025: 3–4 July 2025, Porto, Portugal
IEEE World Haptics: 8–11 July 2025, Suwon, Korea
IFAC Symposium on Robotics: 15–18 July 2025, Paris
RoboCup 2025: 15–21 July 2025, Bahia, Brazil

Enjoy today's videos!

Are wheeled quadrupeds going to run out of crazy new ways to move anytime soon? Looks like maybe not.
[ DEEP Robotics ]

A giant eye and tiny feet make this pipe inspection robot exceptionally cute, I think.
[ tmsuk ] via [ Robotstart ]

Agility seems to be one of the few humanoid companies talking seriously about safety.
[ Agility Robotics ]

A brain-computer interface, surgically placed in a research participant with tetraplegia (paralysis in all four limbs), provided an unprecedented level of control over a virtual quadcopter, just by thinking about moving their unresponsive fingers. In this video, you'll see how the study participant controlled the virtual quadcopter using their brain's signals to move a virtual hand controller.
[ University of Michigan ]

Hair styling is a crucial aspect of personal grooming, significantly influenced by the appearance of front hair. While brushing is commonly used both to detangle hair and for styling purposes, existing research primarily focuses on robotic systems for detangling hair, with limited exploration into robotic hair styling. This research presents a novel robotic system designed to automatically adjust front hairstyles, with an emphasis on path planning for root-centric strand adjustment.
[ Paper ]
Thanks, Kento!

If I'm understanding this correctly, it's possible, if you're careful, to introduce chaos into a blind juggling robot to switch it from synced juggling to alternate juggling.
[ ETH Zurich ]

Drones with beaks? Sure, why not.
[ GRVC ]

Check out this amazing demo preview video we shot in our offices here at OLogic prior to CES 2025. OLogic built this demo robot for MediaTek to show off all kinds of cool things running on a MediaTek Genio 700 processor. The robot is a Create 3 base with a custom tower (similar to a TurtleBot) using a Pumpkin Genio 700 EVK, plus a lidar and an Orbbec Gemini 335 camera. The robot is running ROS 2 Nav and finds colored balls on the floor using an NVIDIA TAO model running on the Genio 700, adding them to the map so the robot can find them. You can direct the robot through RViz to go pick up a ball and move it to wherever you want on the map.
[ OLogic ]

We explore the potential of multimodal large language models (LLMs) for enabling autonomous trash pickup robots to identify objects characterized as trash in complex, context-dependent scenarios. By constructing evaluation datasets with human agreement annotations, we demonstrate that LLMs excel in visually clear cases with high human consensus, while performance is lower in ambiguous cases, reflecting human uncertainty. To validate real-world applicability, we integrate GPT-4o with an open-vocabulary object detector and deploy it on a quadruped with a manipulator arm, using ROS 2, showing that it is possible to use this information for autonomous trash pickup in practical settings.
[ University of Texas at Austin ]
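The abstract doesn't reproduce the authors' prompt or detector integration, but the described pipeline (an open-vocabulary detector proposes objects, GPT-4o judges whether each one is trash in context) might look roughly like the sketch below. The prompt wording and the helper function are assumptions, not the paper's code; only the OpenAI chat-completions call itself is standard API usage:

```python
import base64
from openai import OpenAI  # assumes the openai v1.x SDK and OPENAI_API_KEY set

client = OpenAI()

def is_trash(crop_jpeg: bytes, context: str) -> bool:
    """Ask GPT-4o whether a detected object should be treated as trash.
    The JPEG crop would come from an upstream open-vocabulary detector
    (not shown here); only the LLM query is sketched."""
    b64 = base64.b64encode(crop_jpeg).decode()
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": f"Scene context: {context}. Is the object shown "
                         "trash that a cleanup robot should pick up? "
                         "Answer YES or NO."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
            ],
        }],
        max_tokens=3,
    )
    return resp.choices[0].message.content.strip().upper().startswith("YES")
```

On the robot, a ROS 2 node would feed detector crops through a function like this and publish pick targets for the manipulator; the context string is what lets the model handle the ambiguous, context-dependent cases the paper highlights.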