PhD researcher: Joris Krijger
Supervisors: Prof. Jos de Mul; Prof. Evert Stamhuis (ESL)
Funding: Volksbank, Trust fund
Artificial Intelligence (AI) systems are widely used in today’s society, informing and sometimes executing decisions in domains as diverse as finance and criminal justice. Recent years have seen a rapid advent of these systems in public and private spheres, making AI a disruptive technology of global importance. Despite this dramatic increase in use, however, some vital aspects of AI remain undetermined. The most prominent of these issues concerns the meaning and exercise of responsibility in AI-driven systems (e.g. O’Reilly, 2017). The possibility of exercising responsibility seems to be compromised in a dynamic where decisions are delegated to intelligent data systems, as the financial crisis of 2008 exemplified. There, AI-based innovations such as high-frequency trading were deployed by banks to perform millisecond transactions that dramatically increased volatility. In the aftermath of the crisis it became apparent that these systems contributed considerably both to the judicial inability to impute liability and to the diminished sense of responsibility among bankers. My main research question is: how can we deploy AI in a way that fosters inclusive prosperity? Since this broad question can be answered in various ways, I will focus on the pertinent question of responsibility in relation to AI and explore two aspects of it: (1) the kind of agency that can be attributed to autonomous artificial systems, and (2) the consequences for the meaning of responsibility when we attribute agency to AI.