The past decade has seen efforts to develop new forms of autonomous systems with applications in domains ranging from underwater search and rescue to clinical diagnosis. All of these applications require risk analyses, but such analyses often focus on technical sources of risk without acknowledging its wider systemic and organizational dimensions. In this article, we illustrate this deficit, and a way of redressing it, by offering a more systematic analysis of the sociotechnical sources of risk in an autonomous system.
With the introduction of artificial intelligence (AI) to healthcare, there is also a need for professional guidance to support its use. New (2022) reports from the National Health Service AI Lab & Health Education England focus on healthcare workers' understanding of, and confidence in, AI clinical decision support systems (AI-CDSSs), and are concerned with developing trust in, and the trustworthiness of, these systems. While they offer guidance to aid developers and purchasers of such systems, they offer little specific guidance for the clinical users who will be required to use them in patient care.
Publics and policymakers increasingly have to contend with the risks of complex, safety-critical technologies, such as airframes and reactors. As such, 'technological risk' has become an important object of modern governance, with state regulators as core agents and 'reliability assessment' as the most essential metric. The Science and Technology Studies (STS) literature casts doubt on whether we should place our faith in these assessments, because predictively calculating the ultra-high reliability required of such systems poses seemingly insurmountable epistemological problems.
This paper looks at the dilemmas posed by 'expertise' in high-technology regulation by examining the US Federal Aviation Administration's (FAA) 'type-certification' process, through which it evaluates new designs of civil aircraft. It observes that the FAA delegates a large amount of this work to the manufacturers themselves, and discusses why it does so by invoking arguments from the sociology of science and technology. It suggests that, contrary to popular portrayal, regulators of high technologies face an inevitable epistemic barrier when making technological assessments, which forces them to delegate technical questions to people with more tacit knowledge, and hence to 'regulate' at a distance by evaluating 'trust' rather than 'technology'.