
Safety and quality control of algorithmic systems in clinical decision making
Partners: Delft University of Technology & Utrecht University
Type: PhD
Duration: 2023–2026
Hospitals worldwide are facing increasing demand for healthcare services, driven by population growth and aging demographics. Limitations of existing technology and processes place an administrative burden on doctors and clinical staff, a burden that is exacerbated by growing service demand combined with staff shortages. Both doctors and patients are optimistic that artificial intelligence (AI) tools can help streamline and automate processes to keep healthcare affordable and accessible.
However, once implemented, AI tools often fall short of the desired clinical objectives. Many AI models introduced into hospitals for clinical decision support have not led to measurable improvements in patient outcomes, and in some cases they have contributed to negative consequences such as denial of care or misdiagnosis. In other instances, the introduction of these technologies has disrupted established clinical processes, increasing the administrative load on clinical staff.
Although many such cases have been documented, there has been little examination of why tools with these limitations are developed and selected for implementation. To understand this, it is necessary to explore the values and value trade-offs behind the decisions made in designing and selecting AI tools for clinical applications.
This research examines the types of problems and objectives that clinical AI tools are intended to address. It investigates which problems and objectives are prioritised over others, how this prioritisation reflects the values and value trade-offs negotiated by different stakeholders, and what information is shared between them. These avenues of investigation aim to build an understanding of the conditions that lead to the development and implementation of AI tools that fail to deliver the desired benefits, and of the values that compete with quality and safety to produce those conditions.
To perform this analysis, a sociotechnical lens will be applied to the systems in which clinical AI applications are designed and implemented. In the first project, a critical analysis of the published literature will determine how the problems to be addressed by clinical AI are prioritised and selected, and will examine the values and conditions underpinning these selections. The following projects will apply mixed-methods approaches to investigate how safety and quality risks are considered by clinical AI developers and to characterise how the interactions of different institutional influences affect safety considerations in clinical AI design. Analytic techniques from systems safety will be used to examine how institutional influences form control structures that enable or prevent safety considerations in clinical AI design. This analysis will be applied to case studies covering both private industry vendors and hospital in-house design contexts.
The aim of this research is to develop an understanding of how institutional influences and value prioritisation shape the functional abilities and limitations of clinical AI applications and, in turn, their ability to achieve the desired benefits. This understanding will inform the design and development of governance mechanisms at the level of developers, hospitals, and policy makers to ensure the creation and implementation of clinical AI applications that benefit hospitals, clinicians, and patients.