The human factor in AI-assisted image analysis

We studied the response of human operators performing image analysis to different levels of artificial intelligence (AI) assistance. High levels of AI that delivered outcomes quickly and accurately were viewed positively but also resulted in users feeling disengaged and changing their behaviour in ways that could reduce operational effectiveness. Users found minimal AI difficult to work with at times, due to the number of errors it generated. Addressing these factors is essential to improve how human-machine teams perform image analysis.

White drone (also known as a UAV: unmanned aerial vehicle) with four rotor blades flying in a blue sky

Project team: Mark Chattington, Ciara Beattie and Angus Johnson

AI in image analysis

Object identification in images can be a painstaking task for people, with some tasks requiring hours of repetitive work in front of a screen. Artificial intelligence (AI) tools now offer powerful assistance in image analysis through automatic recognition and labelling of objects. However, the impact of AI image analysis assistance on human operators is poorly understood, despite this factor being vital to the overall effectiveness of a human-machine team.

We have responded to this by investigating how image analysts operate with and respond to AI assistance. Our findings provide insight into how to design human-machine teams for reliable and efficient image analysis tasks.

Observing the analysts

Our Human-Machine Teaming research is guided by user testing, not technical feasibility alone, so that the systems we develop perform well in operation.

We used Thales’s Training and Semi-Automatic Labelling Tool (T-SALT) to study how twelve image analysts responded to three levels of AI assistance:

  1. Manual: Human user labels all objects.
  2. Medium AI: Automatic object identification after limited (single pass) learning. User then checks and corrects labels and adds any missed objects.
  3. High AI: All labelling is automatic. User checks and corrects labels.

Objects included people and different types of vehicles. We used eye-tracking technology to monitor which parts of an image users looked at and surveyed users after the tasks to learn how they responded to the AI assistance.
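The three assistance levels above can be expressed as a simple workflow: at each level the human reviewer starts either from a blank image or from AI-proposed labels. The sketch below is a hypothetical Python model of that workflow, not code from T-SALT; the `Label` structure and `human_review` callback are illustrative assumptions.

```python
from enum import Enum
from dataclasses import dataclass

class AssistanceLevel(Enum):
    MANUAL = "manual"   # human labels every object
    MEDIUM = "medium"   # AI proposes labels after limited learning; human corrects
    HIGH = "high"       # AI labels everything; human checks and corrects

@dataclass
class Label:
    object_id: int
    category: str
    source: str  # "human" or "ai"

def run_labelling(level, ai_proposals, human_review):
    """Sketch of the labelling workflow at each assistance level.

    ai_proposals: labels proposed by the AI (ignored at MANUAL).
    human_review: callable taking the starting labels and returning the
    corrected, final set of labels (the human's contribution).
    """
    if level is AssistanceLevel.MANUAL:
        initial = []                  # human starts from a blank image
    else:
        initial = list(ai_proposals)  # human starts from the AI's output
    return human_review(initial)
```

The key design point the study highlights is the last line: at Medium and High AI, the human's task shifts from generating labels to verifying them, which changes where attention is spent.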


High AI assistance gave the best results in terms of speed, probability of detection, and labelling accuracy. However, High AI also led users to overlook large areas of images and to develop a confirmation bias towards the AI's identifications. For example, users began relabelling objects they had previously analysed to match the AI's response. These behavioural changes could leave areas of images unchecked by a human operator and introduce incorrect identifications through manual relabelling.

Due to the number of misclassifications that needed correcting manually, Medium AI was reported as the least usable level of support and worse than no AI support. This shows the perils of introducing technology that isn’t robust. In the long term, users are likely to find workarounds that diverge from agreed protocols.

Increasing levels of AI support decreased users’ ‘sense of agency’, i.e., the extent to which they felt part of the human-machine team and responsible for the outcomes. This could also lead to reduced performance, lower satisfaction, and more workarounds over time.

Future AI-assisted image analysis

AI assistance has great potential for improving image analysis but is often designed and specified based purely on the performance metrics of the available technology. This overlooks the significant impact the technology has on user experience. Indeed, this human factor means introduction of poorly considered or limited AI support might deliver no long-term benefit compared to manual image identification.

Future AI image analysis tools need to consider the user experience, including the design of appropriate interfaces and improved understanding of the tasks of an image analyst. Technology might also improve the human response to AI. For example, the eye tracking technology used in our study could be used to alert an operator to image areas they have overlooked.
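One way such an alert could work is to divide each image into a grid and flag cells that received no gaze fixations. The function below is a minimal sketch of that idea under assumed inputs (pixel-coordinate fixation points and a fixed grid); a real system would need to account for fixation duration and the eye tracker's accuracy.

```python
def overlooked_cells(fixations, image_size, grid=(4, 4)):
    """Return grid cells that received no gaze fixation.

    fixations: iterable of (x, y) gaze points in pixel coordinates.
    image_size: (width, height) of the analysed image in pixels.
    grid: (columns, rows) to divide the image into.
    """
    width, height = image_size
    cols, rows = grid
    visited = set()
    for x, y in fixations:
        # Map each fixation to its grid cell, clamping points on the far edge.
        col = min(int(x * cols / width), cols - 1)
        row = min(int(y * rows / height), rows - 1)
        visited.add((col, row))
    # Any cell never visited is a candidate for an "overlooked area" alert.
    return [(c, r) for r in range(rows) for c in range(cols)
            if (c, r) not in visited]
```

An operator interface could highlight the returned cells before the analyst signs off on an image, directly countering the overlooking behaviour observed under High AI assistance.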

The outcome of our work is clear: rather than relying on the technology alone, improved object identification through AI assistance requires considering the response of human co-workers in the system design process.