Sense of agency
Team building is a common activity across many sectors, but it assumes the team is composed of humans. What happens when the newest team member is an autonomous system?
Project team: Debora Zanatto, Mark Chattington, Jan Noyes and Vicky Steane
A sense of agency
Automated systems are common in safety-critical environments, including the transport, maritime, and nuclear sectors. However, any automated system requires human monitoring, in part so that safety measures can be taken if the system fails.
It is essential that human supervisors act appropriately when this happens, which means they must retain a ‘sense of agency’ in their work task, i.e. they must perceive that they are in control of their task outcomes.
Humans, however, are poor at monitoring tasks and often become distracted when in these roles. A reduced sense of agency increases the risk of a human supervisor responding inappropriately if a system malfunctions.
Our research has shown how levels of automation and cognitive workload affect a human worker’s focus, and how the appropriate design of human-machine interactions can keep workers engaged.
We measured the sense of agency of people performing tasks that involved different levels of automation while also performing an additional task of greater or lesser mental workload.
We found that participants’ sense of agency reduced as the level of task automation increased, and disappeared entirely when the task was almost fully automated. Those undertaking non-automated tasks remained engaged even when required to perform a second task with a high mental workload. The same secondary task, however, caused those performing even semi-automated tasks to lose their sense of agency.
This multilevel and multifactor approach gives a nuanced understanding of how complex tasks may affect a worker’s ability to remain engaged with an automated system.
Intriguingly, sense of agency could be recovered through careful design of the human input requirements to a hybrid human-robot system. The key here is timing. In a process in which a human responds to an automated prompt, introducing a sufficient delay before the required human response can restore the sense of agency.
We have gained even greater insight into these effects by automating different types of decisions within a process:
- what action the human should take;
- whether the action should be taken;
- when the action should take place.
We found that the loss of sense of agency was most significant with automation of ‘what’ and ‘whether’ decisions.
In practice, hybrid human-machine systems could be designed to maintain the engagement of the human worker, and so maintain system safety and trustworthiness.
This requires that sense of agency be considered at the early design stage of hybrid human-machine teaming systems, with input from psychologists and user-group representatives.
Future research in more applied situations will help build a more robust approach to designing safer, more reliable human-machine teaming systems.
Our experiments required people to estimate the length of a computer-generated tone by pressing keys on a computer keyboard.
Participants who were instructed by the computer when to respond lost a little sense of agency compared with those whose actions were fully voluntary.
Those who were instructed which keys to press, or whether to run the test at all, showed greater disengagement from the task.