When machines make decisions, who is responsible?
Robots making decisions on social benefits, driverless cars causing traffic accidents, search engines presenting a selectively narrow picture of the world – the rapid development of AI technology gives rise to machines that make their own decisions, without direct human influence. But who is responsible for what a machine does? Can the machine itself be responsible? The aim of this article is to discuss and problematize responsibility relations when machines make decisions. The overarching question is whether machines can be responsible and, if so, under which circumstances. Drawing on theories of responsibility, machine ethics, robot philosophy, and recent AI development, the article demonstrates how functionalist arguments can lead to the conclusion that machines are responsible for their actions, while approaches building on philosophical understandings of autonomy and agency rule out machine responsibility. Unless the machine is conscious, human actors always need to be responsible for decisions taken by machines. However, as self-improving systems increase machine autonomy and decrease human control, the question arises whether we are witnessing an emerging responsibility gap, or whether this development rather describes a situation of blurred responsibility, in which responsibility needs to be distributed among many different actors – AI developers, programmers, distributors, users, and policy makers.