This article is part of our exclusive IEEE Journal Watch series in partnership with IEEE Xplore.

Swarms of autonomous robots are increasingly being tested and deployed in complex missions, yet they still require a certain level of human oversight. That leaves a major question: How many robots—and how complex a mission—can a single human manage before becoming overwhelmed?

In a study funded by the U.S. Defense Advanced Research Projects Agency (DARPA), researchers show that a single human can effectively manage a heterogeneous swarm of more than 100 autonomous ground and aerial vehicles, feeling overwhelmed only for brief periods during a small portion of the mission. For instance, in a particularly challenging, multiday experiment in an urban setting, human controllers were overloaded by stress and workload only 3 percent of the time. The results were published 19 November in IEEE Transactions on Field Robotics.

Julie A. Adams, the associate director of research at Oregon State University’s Collaborative Robotics and Intelligent Systems Institute, has been studying human interactions with robots and other complex systems, such as aircraft cockpits and nuclear power plant control rooms, for 35 years. She notes that robot swarms can support missions where the work is particularly dangerous and hazardous for humans, such as monitoring wildfires. “Swarms can be used to provide persistent coverage of an area, such as monitoring for new fires or looters in the recently burned areas of Los Angeles,” Adams says. “The information can be used to direct limited assets, such as firefighting units or water tankers to new fires and hotspots, or to locations at which fires were thought to have been extinguished.”

These kinds of missions can involve a mix of many different kinds of unmanned ground vehicles (such as the Aion Robotics R1 wheeled robot) and autonomous aerial vehicles (like the Modal AI VOXL M500 quadcopter), and a human controller may need to reassign individual robots to different tasks as the mission unfolds.

Notably, some theories over the past few decades—and even Adams’ early thesis work—suggest that a single human has limited capacity to deploy very large numbers of robots. “These historical theories and the associated empirical results showed that as the number of ground robots increased, so did the human’s workload, which often resulted in reduced overall performance,” says Adams. She notes that, although earlier research focused on unmanned ground vehicles (UGVs), which must deal with curbs and other physical barriers, unmanned aerial vehicles (UAVs) often encounter fewer physical obstacles.

Human controllers managed their swarms of autonomous vehicles with a virtual display. The fuchsia ring represents the area the person could see within their head-mounted display. [DARPA]

As part of DARPA’s OFFensive Swarm-Enabled Tactics (OFFSET) program, Adams and her colleagues sought to explore whether these theories applied to very complex missions involving a mix of unmanned ground and air vehicles. In November 2021, at Fort Campbell in Kentucky, two human controllers took turns engaging in a series of missions over the course of three weeks, with the objective of neutralizing an adversarial target. Both controllers had significant experience controlling swarms, and they worked alternating shifts that ranged from 1.5 to 3 hours per day.
Testing How Big of a Swarm Humans Can Manage

During the tests, the human controllers were positioned in a designated area on the edge of the testing site and used a virtual reconstruction of the environment to keep tabs on where vehicles were and what tasks they were assigned to. The largest mission shift involved 110 drones, 30 ground vehicles, and up to 50 virtual vehicles representing additional real-world vehicles. The robots had to navigate the physical urban environment, as well as a series of virtual hazards, represented using AprilTags (simplified QR codes), that were scattered throughout the mission site. DARPA made the final field exercise exceptionally challenging by providing thousands of hazards and pieces of information to inform the search. “The complexity of the hazards was significant,” Adams says, noting that some hazards required multiple robots to interact with them simultaneously, and some moved around the environment.

Throughout each mission shift, the human controller’s physiological responses to the tasks at hand were monitored. For example, sensors collected data on heart-rate variability, posture, and even speech rate. The data were fed into an established algorithm that estimates workload levels, which was used to determine when the controller was exceeding a normal workload range, called an “overload state.” (A simplified sketch of this kind of threshold-based flagging appears at the end of this article.)

Adams notes that, despite the complexity and the large number of robots to manage in this field exercise, the overload states were few and brief, totaling only a handful of minutes during a mission shift. “The total percentage of estimated overload states was 3 percent of all workload estimates across all shifts for which we collected data,” she says.

The most common reasons for a controller to reach an overload state were having to generate multiple new tactics or having to inspect which vehicles in the launch zone were available for deployment. Adams notes that these findings suggest that, counter to past theories, the number of robots may be less influential on human swarm-control performance than previously thought. Her team is exploring other factors that may affect swarm-control missions, such as other human limitations, system designs, and unmanned-aircraft-system designs; the results could inform U.S. Federal Aviation Administration drone regulations, she says.
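The article doesn’t spell out the workload-estimation algorithm, but the thresholding logic it describes (flagging moments when estimated workload exceeds the controller’s normal range) can be sketched in a few lines of Python. Everything below, including the flag_overload function, the two-standard-deviation threshold, and the synthetic numbers, is illustrative rather than the study’s actual method.

```python
# A minimal sketch of overload-state detection, assuming a simple
# model: a workload index is computed from physiological features
# (heart-rate variability, posture, speech rate), and any estimate
# well above the controller's baseline is flagged as an "overload
# state." The actual algorithm used in the study is not described
# in this article.
import numpy as np

def flag_overload(workload: np.ndarray, baseline: np.ndarray,
                  threshold_sd: float = 2.0) -> np.ndarray:
    """Return a boolean mask of workload estimates that exceed the
    controller's normal range (baseline mean + threshold_sd * sd)."""
    mu, sd = baseline.mean(), baseline.std()
    return workload > mu + threshold_sd * sd

# Synthetic data, scaled so that roughly 3 percent of mission-shift
# estimates land above the threshold, echoing the study's finding.
rng = np.random.default_rng(seed=1)
baseline = rng.normal(0.40, 0.05, size=500)     # calm pre-mission period
mission = rng.normal(0.40, 0.053, size=10_000)  # estimates during a shift
overload = flag_overload(mission, baseline)
print(f"overload states: {100 * overload.mean():.1f}% of estimates")
```

In a real pipeline the workload index would itself be fused from the multiple sensor streams the article mentions; here it is simply treated as a given time series.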
In the early 1970s, the Cold War had reached a particularly frigid moment, and U.S. military and intelligence officials had a problem. The Soviet Navy was becoming a global maritime threat—and the United States did not have a global ocean-surveillance capability. Adding to the alarm was the emergence of a new Kirov class of nuclear-powered guided-missile battle cruisers, the largest Soviet vessels yet. For the United States, this situation meant that the perilous equilibrium of mutual assured destruction, MAD, which so far had dissuaded either side from launching a nuclear strike, could tilt in the wrong direction.

The answer was a satellite program called Parcae, meant to help keep the Cold War from suddenly toggling to hot. The engineers working on Parcae would have to build the most capable orbiting electronic intelligence (ELINT) system ever.

A Parcae satellite was just a few meters long, but it had four solar panels that extended several meters out from the body of the satellite. The rod emerging from the satellite was a gravity boom, which kept the orbiter’s signal antennas oriented toward Earth. [NRO]

According to Dwayne Day, a historian of space technology for the National Academy of Sciences, the United States conducted large naval exercises in 1971, with U.S. ships broadcasting signals and several types of ELINT satellites attempting to detect them. The tests revealed worrisome weaknesses in the country’s intelligence-gathering satellite systems.

One of the big advances of the Parcae program was a dispenser that could loft three satellites, which then functioned together in orbit as a group. Seen here are three Parcae satellites on the dispenser. [Arthur Collier]

Even the mere existence of the satellites, which would be built by a band of veteran engineers at the U.S. Naval Research Laboratory (NRL) in Washington, D.C., would remain officially secret until July 2023. That’s when the National Reconnaissance Office declassified a one-page acknowledgment about Parcae. Since its establishment in 1961, the NRO has directed and overseen the nation’s spy-satellite programs, including ones for photoreconnaissance, communications interception, signals intelligence, and radar. With this scant declassification, the Parcae program could at least be celebrated by name, and its overall mission revealed, during the NRL’s centennial celebration that year.

Aspects of the Parcae program had been unofficially outed over the years by a few enterprising journalists in such venues as Aviation Week & Space Technology and The Space Review, by historians like Day, and even by a Russian military advisor in a Ministry of Defense journal. This article is based on these sources, along with additional interviews and written input from Navy engineers who designed, built, operated, and managed Parcae and its precursor satellite systems. They confirm a commonly held but nevertheless profound understanding about the United States during that era. Simply put, there was nothing quite like the paranoia and high stakes of the Cold War to spur engineers into creative frenzies that rapidly produced brilliant national-security technologies, including surveillance systems like Parcae.

A Spy Satellite with a Cosmic Cover Name

Although the NRO authorized and paid for Parcae, the responsibility to actually design and build it fell to the cold-warrior engineers at NRL and their contractor-partners at such places as Systems Engineering Laboratories and HRB Singer, a signal-analysis and -processing firm in State College, Pa.
NRL’s first ELINT satellite flew as GRAB, the Galactic Radiation and Background experiment, which was a cover name for the satellite’s secret payload; it also had a bona fide solar-science payload housed in the same shell [see sidebar, “From Quartz-Crystal Detectors to Eavesdropping Satellites”]. On 22 June 1960, GRAB made it into orbit to become the world’s first spy satellite, though there was no opportunity to brag about it. The existence of GRAB’s classified mission was an official secret until 1998.

A second GRAB satellite launched in 1961, and the pair of satellites monitored Soviet radar systems for the National Security Agency and the Strategic Air Command. The NSA, headquartered at Fort Meade, Md., is responsible for many aspects of U.S. signals intelligence, notably intercepting and decrypting sensitive communications all over the world and devising machines and algorithms that protect U.S. official communications. The SAC was, until 1992, in charge of the country’s strategic bombers and intercontinental ballistic missiles.

The Poppy Block II satellites, which had a diameter of 61 centimeters, were outfitted with antennas to pick up signals from Soviet radars [top]. The signals were recorded and retransmitted to ground stations, such as this receiving console photographed in 1965, designated A-GR-2800. [NRO]

The GRAB satellites tracked several thousand Soviet air-defense radars scattered across the vast Soviet landmass, picking up the radars’ pulses and transmitting them to ground stations in friendly countries around the world. It could take months to eke out useful intelligence from the data, which was hand-delivered to NSA and SAC. There, analysts would examine the data for “signals of interest,” like the proverbial needle in a haystack, interpret their significance, and package the results into reports. All this took days if not weeks, so GRAB data was mostly relevant for overall situational awareness and longer-term strategic planning.

GRAB’s successor program, Poppy, was declassified in 2004. With multiple satellites in orbit, Poppy could geolocate emission sources, at least roughly. During the Poppy program, the NRL satellite team showed it was even possible, in principle, to get this information to end users within hours or even less by relaying it directly to ground stations, rather than recording the data first. These first instances of rapidly delivered intelligence fired the imaginations, and expectations, of U.S. national-security leaders and offered a glimpse of the ocean-surveillance capabilities they wanted Parcae to provide.

How Parcae Inspired Modern Satellite Signals Intelligence

The first of the 12 Parcae missions launched in 1976 and the last, 20 years later. Over its long lifetime, the program had other cryptic cover names, among them White Cloud and Classic Wizard. According to the NRO’s declassification memo, the agency stopped using the Parcae satellites in May 2008.

Originally designed as an intercontinental ballistic missile (ICBM), the Atlas F was later repurposed to launch satellites, including Parcae. [Peter Hunter Photo Collections]

Early Parcae missions used an Atlas F rocket to deliver three satellites in precise orbital formations, which were essential for their geolocation and tracking functions. (Later launches used the larger Titan IV-A rocket.) This triple launching capability was achieved with a satellite dispenser designed and built by an NRL team led by Peter Wilhelm. As chief engineer for NRL’s satellite-building efforts for some 60 years until his retirement in 2015, Wilhelm directed the development of more than 100 satellites, some of them still classified.
The satellites generally worked in clusters of three (the name Parcae comes from the three Fates of Roman mythology), each detecting the radar and radio emissions from Soviet ships. To pinpoint a ship, the satellites were equipped with highly precise, synchronized clocks. Tiny differences in the time at which each satellite received the radar signals emitted from the ship were then used to triangulate the ship’s location (a simplified version of this multilateration calculation is sketched in code at the end of this section). The calculated location was updated each time the satellites passed over.

A GRAB satellite was prepared for launch in 1960. Peter Wilhelm is standing, at right, in a patterned shirt. [NRO]

Transmissions from the GRAB satellites were received in “huts” [left], likely in a country just outside Soviet borders. In between the two banks of receivers in this photo is the wheel used for manually steering the antennas. These yagi antennas [right] were linearly polarized. [NRO]

The satellites’ raw intercepts were downlinked to the Naval Security Group Command, which performed encryption and data-security functions for the Navy. The data was then relayed via communications satellites to naval facilities worldwide, where it was correlated and turned into intelligence. That intelligence, in the form of Ships Emitter Locating Reports, went out to watch officers and commanders aboard ships at sea and other users. A report might include information about, for example, a newly detected radar signal—the type of radar, its frequencies, pulse, scan rates, and location.

Early Minicomputers Spotted Signals of Interest

To scour the otherwise overwhelming torrents of raw ELINT data for signals of interest, the Parcae program included an intelligence-analysis data-processing system built around then-high-end computers. These were likely produced by Systems Engineering Laboratories, in Fort Lauderdale, Fla. SEL had produced the SEL-810 and SEL-86 minicomputers used in the Poppy program. These machines included a “real-time interrupt capability,” which enabled the computers to halt data processing to accept and store new data and then resume the processing where it had left off. That feature was useful for a system like Parcae, which continually harvested data. Also crucial to ferreting out important signals was the data-processing software, supplied by vendors whose identities remain classified.

The SEL-810 minicomputer was the heart of a data-processing system built to scour the torrents of raw data from the Poppy satellites for signals of interest. [Computer History Museum]

Over time, the Ships Emitter Locating Reports evolved from crude teletype printouts derived from raw intercept data to more user-friendly forms such as automatically displayed maps. The reports delivered the intelligence, security, or military meaning of the intercepts in formats that naval commanders and other end users on the ground and in the air could grasp quickly and put to use.

Parcae Tech and the 2-Minute Warning

Harvesting and pinpointing radar signatures, though difficult to pull off, wasn’t even the most sobering tech challenge. Even more daunting was Parcae’s requirement to deliver “sensor-to-shooter” intelligence—from a satellite to a ship commander or weapons control station—within minutes. According to Navy Captain James “Mel” Stephenson, who was the first director of the NRO’s Operational Support Office, achieving this goal required advances all along the technology chain. That included the satellites, computer hardware, data-processing algorithms, communications and encryption protocols, broadcast channels, and end-user terminals.
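To make the triangulation idea concrete, here is a minimal sketch of time-difference-of-arrival (TDOA) geolocation, the principle described above. The satellite geometry, the surface constraint, and the least-squares solver are all illustrative assumptions; Parcae’s actual processing remains classified.

```python
# A minimal TDOA geolocation sketch. With three receivers there are
# only two independent time differences, so we add the constraint
# that the emitter (a ship) sits on Earth's surface. Positions are
# in meters, Earth-centered coordinates; all numbers are synthetic.
import numpy as np
from scipy.optimize import least_squares

C = 299_792_458.0   # speed of light, m/s
R_EARTH = 6.371e6   # mean Earth radius, m

def locate_emitter(sat_pos, t_arrival, guess):
    """Estimate an emitter's position from signal arrival times at
    three satellites carrying synchronized clocks."""
    def residuals(x):
        d = np.linalg.norm(sat_pos - x, axis=1)          # ranges to x
        tdoa = (d[1:] - d[0]) - C * (t_arrival[1:] - t_arrival[0])
        surface = np.linalg.norm(x) - R_EARTH            # ship at sea level
        return np.append(tdoa, surface)
    return least_squares(residuals, guess).x

# Synthetic check: a satellite cluster ~700 km up, a ship on the surface.
sats = np.array([[7.1e6, 0.0,   0.0],
                 [7.1e6, 6.0e4, 0.0],
                 [7.1e6, 0.0,   6.0e4]])
u = np.array([0.999, 0.03, 0.03]); u /= np.linalg.norm(u)
ship = R_EARTH * u
t = np.linalg.norm(sats - ship, axis=1) / C   # true arrival times
print(locate_emitter(sats, t, guess=np.array([6.3e6, 1e4, 1e4])))
```

With only two time differences the 3D problem is underdetermined, which is why the surface constraint (or more satellites, or repeated passes) is needed; as the article notes, the fix was refined each time the cluster passed overhead.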
From Quartz-Crystal Detectors to Eavesdropping Satellites

The seed technology for the U.S. Navy’s entire ELINT-satellite story goes back to World War II, when the Naval Research Laboratory (NRL) became a leading developer in the then-new business of electronic warfare and countermeasures. Think of monitoring an enemy’s radio-control signals, fooling its electronic reconnaissance probes, and evading its radar-detection system.

NRL’s foray into satellite-based signals intelligence emerged from a quartz-crystal-based radio-wave detector designed by NRL engineer Reid Mayo, who sometimes personally installed it on the periscopes of U.S. submarines. This device helped commanders save their submarines and the lives of those aboard by specifying when and from what direction enemy radars were probing their vessels.

In the late 1950s, as the Space Age was lifting off, Mayo and his boss, Howard Lorenzen (who would later hire Lee M. Hammarstrom), were perhaps the first to realize that the same technology should be able to “see” much larger landscapes of enemy radar activity if the detectors could be placed in orbit. Lorenzen was an influential, larger-than-life technology visionary often known as the father of electronic warfare. In 2008, the United States named a missile-range instrumentation ship, which supports and tracks missile launches, after him.

Lorenzen’s and Mayo’s engineering concept of “raising the periscope” for the purpose of ELINT gathering was implemented on the first GRAB satellite. The satellite was a secret payload that piggybacked on a publicly announced scientific payload, Solrad, which collected first-of-its-kind data on the sun’s ultraviolet and X-ray radiation. That data would prove useful for modeling and predicting the behavior of the planet’s ionosphere, which influenced the far-flung radio communication near and dear to the Navy. Though the United States couldn’t brag about the GRAB mission even as the Soviet Union was scoring first after first in the space race, it was the world’s first successful spy payload in orbit, beating by a few months the first successful launch of Corona, the CIA’s maiden space-based photoreconnaissance program.

A key figure in the development of those user terminals was Ed Mashman, an engineer who worked as a contractor on Parcae. The terminals had to be tailored according to where they would be used and who would be using them. One early series was known as Prototype Analysis Display Systems, even though the “prototypes” ended up deployed as operational units. Before these display systems became available, Mashman recalled in an interview for IEEE Spectrum, “Much of the data that had been coming in from Classic Wizard just went into the burn bag, because they could not keep up with the high volume.”

The intelligence analysts were still relying on an arduous process to determine whether the information in the reports was alarming enough to require some kind of action, such as positioning U.S. naval vessels close enough to a Soviet vessel to launch an attack. To make such assessments, the analysts had to screen a huge number of teletype reports coming in from the satellites, manually plotting the data on a map to discern which ones might indicate a high-priority threat from the majority that did not. When the “prototype” display systems became available, Mashman recalls, the analysts could “all of a sudden, see it automatically plotted on a map and get useful information out of it….
When some really important thing came from Classic Wizard, it would [alert] the watch officer and show where it was and what it was.”

These capabilities were developed during shoulder-to-shoulder work sessions between end users and engineers like Mashman. Those sessions led to an iterative process by which the ELINT system could deliver and package data in user-friendly ways and with a swiftness that was tactically useful. Parcae’s rapid-dissemination model flourished well beyond the end of the program and is one of its most enduring legacies. For example, to rapidly distribute intelligence globally, Parcae’s engineering teams built a secure communications channel based on a complex mix of protocols, data-processing algorithms, and tailored transmission waveforms, among other elements. The communications network connecting these pieces became known as the Tactical Receive Equipment and Related Applications Broadcast. As recently as Operation Desert Storm, it was still in use. “During Desert Storm, we added imagery to the…broadcast, enabling it to reach the forces as soon as it was generated,” says Stephenson.

Over the course of a 40-year career in national-security technologies, Lee M. Hammarstrom rose to the position of chief scientist of the National Reconnaissance Office. [U.S. Naval Research Laboratory]

According to Hammarstrom, Parcae’s communications challenges had to be solved concurrently with the core challenge of managing and parsing the vast amounts of raw data into useful intelligence. Coping with this data deluge began with the satellites themselves, which some participants came to think of as “orbiting peripherals.” The term reflected the fact that the gathering of raw electronic signals was just the beginning of a complex system of complex systems. Even in the late 1960s, when Parcae’s predecessor Poppy was operational, the NRL team and its contractors had totally reconfigured the satellites, data-collection system, ground stations, computers, and other system elements for the task.

Collier notes that in addition to supporting military operations, Parcae “was available to help provide maritime-domain awareness for tracking drug, arms and human trafficking as well as general commercial shipping.”
Seabed observation plays a major role in safeguarding marine ecosystems by keeping tabs on the species and habitats on the ocean floor at different depths. This work is primarily done by underwater robots that use optical imaging to collect high-quality data, which can be fed into environmental models and complement the data obtained through sonar in large-scale ocean observations. Different underwater robots have been trialed over the years, but many have struggled to perform near-seabed observations without disturbing the local seabed, destroying coral and stirring up sediment.

Gang Wang, from Harbin Engineering University in China, and his research team have recently developed a maneuverable underwater vehicle that is better suited to seabed operations: It floats above the seabed and uses a specially engineered propeller arrangement to maneuver, so it doesn’t disturb the local environment. These robots could be used to better protect the seabed while studying it, and to improve efforts to preserve marine biodiversity and explore for underwater resources such as minerals for EV batteries.

Many underwater robots are wheeled or legged, but “these robots face substantial challenges in rugged terrains where obstacles and slopes can impede their functionality,” says Wang. They can also damage coral reefs. Floating robots don’t have this issue, but existing options disturb the sediment on the seabed because their thrusters create a downward current during ascension. In most floating robots, the propeller’s wake directly hits the seafloor, which moves sediment in the immediate vicinity. Much like dust blowing in front of a digital or smartphone camera, the particles moving through the water can obscure the robot’s cameras and reduce the quality of the images it captures. “Addressing this issue was crucial for the functional success of our prototype and for increasing its acceptance among engineers,” says Wang.

Designing a Better Underwater Robot

After further investigation, Wang and the rest of the team found that the robot’s shape influences the local water resistance, or drag, even at low speeds. “During the design process, we configured the robot with two planes exhibiting significant differences in water resistance,” says Wang. This led the researchers to develop a robot with a flattened body and a thruster angled relative to the central axis. “We found that the robot’s shape and the thruster layout significantly influence its ascent speed,” says Wang. (A rough sense of how strongly shape affects drag appears in the sketch at the end of this section.)

Clockwise from left: relationship between rotational speed of the thruster and the resultant force and torque in the airframe coordinate system; overall structure of the robot; side view of the thruster arrangement and main electronics components. [Gang Wang, Kaixin Liu et al.]

The researchers created a propulsion scheme in which the thrusters generate a combined force that slants downward but still allows the robot to ascend, changing the wake distribution during ascent so that it doesn’t disturb the sediment on the seafloor. “Flattening the robot’s body and angling the thruster relative to the central axis is a straightforward approach for most engineers, enhancing the potential for broader application of this design” in seabed monitoring, says Wang. “By addressing the navigational concerns of floating robots, we aim to enhance the observational capabilities of underwater robots in near-seafloor environments.”
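As a rough illustration of why the flattened body matters, quadratic drag grows with both the drag coefficient and the frontal area a body presents to the flow. The coefficients, areas, and speed below are hypothetical placeholders, not values from Wang’s paper.

```python
# Illustrative quadratic-drag comparison for a flattened hull. The
# numbers are hypothetical: the point is only that the broad face
# of a flattened body resists motion far more than its slim edge,
# which is what lets the angled-thruster layout shape the wake.
RHO_SEAWATER = 1025.0  # density of seawater, kg/m^3

def drag_force(cd: float, area_m2: float, speed_m_s: float) -> float:
    """Quadratic drag: F = 0.5 * rho * Cd * A * v^2 (newtons)."""
    return 0.5 * RHO_SEAWATER * cd * area_m2 * speed_m_s ** 2

v = 0.3  # m/s, a slow survey speed
print(drag_force(cd=1.1, area_m2=0.30, speed_m_s=v))  # broad-face first
print(drag_force(cd=0.3, area_m2=0.05, speed_m_s=v))  # slim edge first
```

With these placeholder numbers the broad face sees roughly 20 times the drag of the slim profile, which is the asymmetry the design exploits.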
The vehicle was tested in a range of marine environments, including sandy areas, coral reefs, and sheer rock, to show that it disturbs sediment only minimally across multiple potential environments. Alongside the structural design advancements, the team incorporated angular-acceleration feedback control to keep the robot as close to the seafloor as possible without actually hitting it, called bottoming out (a simplified control loop is sketched below). The researchers also developed external-disturbance observation algorithms and designed a sensor layout that enables the robot to quickly recognize and resist external disturbances, as well as plot a path in real time. This approach allowed the new vehicle to travel just 20 centimeters above the seafloor without bottoming out. By implementing this control, the robot was able to get close to the seafloor and improve the quality of the images it took, by reducing the light refraction and scattering caused by the water column. “Given the robot’s proximity to the seafloor, even brief periods of instability can lead to collisions with the bottom, and we have verified that the robot shows excellent resistance to strong disturbances,” says Wang.

With the new robot able to approach the seafloor closely without disturbing the seabed or crashing, Wang says the team plans to use it to observe coral reefs. Coral reef monitoring currently relies on inefficient manual methods, so the robots could widen the areas that are observed, and survey them more quickly. Wang adds that “effective detection methods are lacking in deeper waters, particularly in the mid-light layer. We plan to improve the autonomy of the detection process to substitute divers in image collection, and facilitate the automatic identification and classification of coral reef species density to provide a more accurate and timely feedback on the health status of coral reefs.”
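To show what a 20-centimeter standoff loop might look like, here is a minimal sketch. Wang’s team used angular-acceleration feedback with disturbance observers; since those details aren’t in the article, this stand-in substitutes a plain PD controller with hypothetical gains and a hypothetical 12-kilogram, neutrally buoyant robot.

```python
# A stand-in altitude-hold loop for the 20-centimeter seafloor
# standoff described above. This is NOT the paper's controller;
# it is a plain PD loop with made-up gains, shown only to make
# the "hover close without bottoming out" idea concrete.
def altitude_hold_step(alt_m: float, vel_m_s: float,
                       target_m: float = 0.20,
                       kp: float = 10.0, kd: float = 15.0) -> float:
    """Return an upward thrust command (N) that drives the robot
    toward the target height while damping vertical motion."""
    return kp * (target_m - alt_m) - kd * vel_m_s

# Tiny Euler simulation: start 0.5 m up, settle near 0.20 m.
alt, vel, mass, dt = 0.50, 0.0, 12.0, 0.02
for _ in range(600):  # 12 seconds of simulated time
    thrust = altitude_hold_step(alt, vel)
    vel += (thrust / mass) * dt   # buoyancy assumed to cancel gravity
    alt += vel * dt
print(f"settled at {alt:.2f} m above the seafloor")
```

The damping term is what keeps the transient from undershooting into the seabed; the real system additionally estimates and cancels external disturbances such as currents.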
Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

RoboCup German Open: 12–16 March 2025, NUREMBERG, GERMANY
German Robotics Conference: 13–15 March 2025, NUREMBERG, GERMANY
RoboSoft 2025: 23–26 April 2025, LAUSANNE, SWITZERLAND
ICUAS 2025: 14–17 May 2025, CHARLOTTE, NC
ICRA 2025: 19–23 May 2025, ATLANTA, GA
IEEE RCAR 2025: 1–6 June 2025, TOYAMA, JAPAN
RSS 2025: 21–25 June 2025, LOS ANGELES
IAS 2025: 30 June–4 July 2025, GENOA, ITALY
ICRES 2025: 3–4 July 2025, PORTO, PORTUGAL
IEEE World Haptics: 8–11 July 2025, SUWON, KOREA
IFAC Symposium on Robotics: 15–18 July 2025, PARIS
RoboCup 2025: 15–21 July 2025, BAHIA, BRAZIL

Enjoy today's videos!

Unitree rolls out frequent updates nearly every month. This time, we present to you the smoothest walking and humanoid running in the world. We hope you like it.

[ Unitree ]

This is just lovely.

[ Mimus CNK ]

There’s a lot to like about Grain Weevil as an effective unitasking robot, but what I really appreciate here is that the control system is just a remote and a camera slapped onto the top of the bin.

[ Grain Weevil ]

This video, “Robot arm picking your groceries like a real person,” has taught me that I am not a real person.

[ Extend Robotics ]

A robot walking like a human walking like what humans think a robot walking like a robot walks like. And that was my favorite sentence of the week.

[ Engineai ]

For us, robots are tools to simplify life. But they should look friendly too, right? That’s why we added motorized antennas to Reachy, so it can show simple emotions—without a full personality. Plus, they match those expressive eyes O_o!

[ Pollen Robotics ]

So a thing that I have come to understand about ships with sails (thanks, Jack Aubrey!) is that sailing in the direction that the wind is coming from can be tricky. Turns out that having a boat with two fronts and no back makes this a lot easier.

[ Paper ] from [ 2023 IEEE/ASME International Conference on Advanced Intelligent Mechatronics ] via [ IEEE Xplore ]

I’m Kento Kawaharazuka from JSK Robotics Laboratory at the University of Tokyo. I’m writing to introduce our human-mimetic binaural hearing system on the musculoskeletal humanoid Musashi. The robot can perform 3D sound source localization using a human-like outer ear structure and an FPGA-based hearing system embedded within it.

[ Paper ]

Thanks, Kento!

The third CYBATHLON took place in Zurich on 25–27 October 2024. The CYBATHLON is a competition for people with impairments using novel robotic technologies to perform activities of daily living. It was invented and initiated by Prof. Robert Riener at ETH Zurich, Switzerland. Races were held in eight disciplines, including arm and leg prostheses, exoskeletons, powered wheelchairs, brain-computer interfaces, robot assistance, vision assistance, and functional electrical stimulation bikes.

[ Cybathlon ]

Thanks, Robert!

If you’re going to work on robot dogs, I’m honestly not sure whether Purina would be the most or least appropriate place to do that.

[ Michigan Robotics ]