Human-swarm teaming
Human observers struggle to detect problems in a robot swarm’s behaviour and are poor at diagnosing most types of problem.
Research team: Elliott Hogg, Wenwen Gao, Chris Bennett, Sophie Hart, Victoria Steane, Seth Bullock and Jan Noyes
When robot swarms go wrong
Swarms of independent but interacting robots have the potential to explore areas effectively and efficiently. In theory, this makes such swarms ideal for locating objects or people.
In practice, though, swarm robots might sometimes misbehave, whether by developing faults or by acting maliciously, perhaps following a security breach. For operational reliability, it is vital to understand how such undesired behaviours in individual robots manifest within a swarm and whether they can be detected.
Human operators can generally improve the performance of robot swarms by providing occasional interventions. These operators might also offer a first line of defence in detecting undesired robot behaviours.
Our research investigated whether humans can:
- Identify when robots within a swarm are behaving incorrectly
- Diagnose the underlying issue
Observing behaviours
We simulated a swarm of 20 robots exploring a 2D virtual environment. The robots continually made movement decisions based on their own previous movement and status reports from nearby robots. Human observers could track the positions of the robots on a live map.
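To make the setup concrete, the sketch below shows one way such a decision rule could work: each robot blends its previous heading with the average heading reported by nearby robots, plus some random exploration noise. The blending weight, noise level, and function names are illustrative assumptions rather than the study’s actual implementation.

```python
import math
import random

TURN_NOISE = 0.3  # assumed random steering noise (illustrative)

def next_heading(own_heading, neighbour_reports):
    """Blend a robot's previous heading with the headings reported by
    nearby robots, then add noise so the swarm keeps exploring."""
    if neighbour_reports:
        # Circular mean of the headings reported by nearby robots.
        sin_sum = sum(math.sin(r["heading"]) for r in neighbour_reports)
        cos_sum = sum(math.cos(r["heading"]) for r in neighbour_reports)
        neighbour_heading = math.atan2(sin_sum, cos_sum)
        # Steer halfway towards the neighbours' average heading.
        own_heading += 0.5 * math.atan2(
            math.sin(neighbour_heading - own_heading),
            math.cos(neighbour_heading - own_heading),
        )
    return own_heading + random.uniform(-TURN_NOISE, TURN_NOISE)
```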
The simulations could run with all robots behaving properly, or with some robots showing faults or malicious behaviour. Faults took two forms: faulty motors, which meant robots travelled at half their normal speed, and faulty sensors, which meant robots often collided with walls or other robots. Malicious behaviours also took two forms: blockers, which prevented other robots from passing through doorways, and broadcasters, which sent false status reports to misdirect other robots.
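These four undesired behaviours could be modelled along the following lines. The halving of motor speed matches the description above; the sensor miss probability and the false heading chosen by a broadcaster are illustrative assumptions.

```python
import math
import random
from enum import Enum, auto

class Mode(Enum):
    NORMAL = auto()
    FAULTY_MOTOR = auto()           # travels at half normal speed
    FAULTY_SENSOR = auto()          # often fails to detect obstacles
    MALICIOUS_BLOCKER = auto()      # parks in doorways to block others
    MALICIOUS_BROADCASTER = auto()  # sends false status reports

def effective_speed(mode, base_speed):
    """A faulty motor halves the robot's speed."""
    return base_speed * 0.5 if mode is Mode.FAULTY_MOTOR else base_speed

def senses_obstacle(mode, obstacle_present, p_miss=0.8):
    """A faulty sensor frequently misses obstacles, so the robot
    collides with walls and other robots. p_miss is an assumed rate."""
    if mode is Mode.FAULTY_SENSOR and random.random() < p_miss:
        return False
    return obstacle_present

def broadcast_heading(mode, true_heading):
    """A malicious broadcaster reports a misleading heading so that
    neighbours relying on it are misdirected."""
    if mode is Mode.MALICIOUS_BROADCASTER:
        return true_heading + math.pi  # report the opposite direction
    return true_heading
```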
The human factor
Volunteer observers first watched simulations in which all robots behaved correctly, to become familiar with normal, ‘healthy’ swarm behaviour. They then observed simulations that were either normal or contained faulty or malicious robots. Finally, they were asked to identify whether all robots had acted normally and, if not, whether the misbehaving robots were faulty or malicious.
The observers could also interact with some of the simulations by overriding the robots’ usual independent decision-making and giving direction commands to the whole swarm.
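A whole-swarm override of this kind might look like the sketch below, in which an operator command, when present, supersedes each robot’s independent choice; the robot interface (`decide_heading`, `move`) is hypothetical.

```python
def swarm_step(robots, operator_command=None):
    """Advance the swarm one simulation tick. A direction command from
    the operator is broadcast to the whole swarm and overrides each
    robot's own independent movement decision."""
    for robot in robots:
        if operator_command is not None:
            robot.heading = operator_command        # whole-swarm override
        else:
            robot.heading = robot.decide_heading()  # usual autonomous choice
        robot.move()
```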
People provide unreliable warnings and poor diagnoses
The results showed that human observers were unreliable at detecting whether a problem was present:
- Observers often reported a healthy swarm as behaving problematically;
- Observers struggled to identify a problem when faulty robots were present;
- Observers were, however, generally able to report a problem caused by a malicious robot.
Observers’ diagnoses were similarly mixed:
- Observers were generally good at identifying the presence of malicious blocker robots;
- Malicious broadcasting was generally misreported as a fault;
- Observers struggled to identify that a fault was the underlying issue.
Observers who actively directed swarm movements were no better at spotting or diagnosing a problem than passive observers.
The future of swarm error detection
This work shows that identifying and isolating problems in robot swarms is a difficult task for humans.
This might be addressed by having robots self-report additional information to observers, such as their own movement and their communication with other robots, or by having them report on the behaviour of nearby robots.
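As an illustration, such a self-report might bundle a robot’s movement and communication activity into a single message like the sketch below; every field here is an assumption about what might help a supervisor, not something this study tested.

```python
from dataclasses import dataclass, field

@dataclass
class SelfReport:
    """A richer status message a robot could surface to its human
    supervisor, beyond its position alone."""
    robot_id: int
    position: tuple[float, float]  # (x, y) shown on the live map
    heading: float                 # current direction of travel (radians)
    speed: float                   # would expose a half-speed faulty motor
    messages_sent: int             # unusual traffic volume could hint at
    messages_received: int         #   a malicious broadcaster
    neighbour_ids: list[int] = field(default_factory=list)  # peers observed
```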
Human operators will continue to be an important element of robot swarm search tasks, but it is vital that they are supported with appropriate information to help them guide a swarm effectively and identify any problems.