In an era where artificial intelligence (AI) is becoming increasingly prominent in sectors such as finance, the use of this technology raises complex ethical issues. In 2019, Joris Krijger started a hybrid research project: on the one hand, he is writing a PhD dissertation at EUR on AI and ethics; on the other, he was appointed Ethics & AI Officer at De Volksbank to help navigate these issues in practice. As Krijger emphasizes, AI models can be both promising and problematic. They can, for example, effectively detect fraud, but also wrongly identify innocent people as fraudsters. The question then arises: what do we consider an appropriate balance when 'adjusting' these models, and how do we justify those choices?
In response to an earlier warning from Aleid Wolfsen (chairman of the Dutch Data Protection Authority) about the widespread use of discriminatory algorithms by government agencies, Krijger argued in an op-ed in de Volkskrant last week that the discussion surrounding AI models needs more nuance. According to him, the problem lies not so much with incompetence or insufficient ethical awareness within institutions, but with a fundamental misunderstanding of how algorithms work: these models are designed precisely to distinguish between groups as sharply as possible. Consider, for instance, models that flag people at risk of financial problems at an early stage, or that identify people with a potentially high risk of fraud. In the latter case, the more sensitively you tune the algorithm, the more potential fraudsters you can catch; but a wider net also sweeps up more innocent people, with all the risks that entails.
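The tradeoff Krijger describes can be made concrete with a small sketch. The numbers below are entirely hypothetical and only illustrate the general point: lowering the risk threshold of a fraud model catches more actual fraudsters, but also flags more innocent people.

```python
# Toy dataset of (is_actually_fraud, model_risk_score) pairs.
# All values are invented for illustration; higher score = more fraud-like.
cases = [
    (True, 0.95), (True, 0.80), (True, 0.62), (True, 0.40),
    (False, 0.90), (False, 0.55), (False, 0.30), (False, 0.20),
    (False, 0.10), (False, 0.05),
]

def flag_stats(threshold):
    """Count fraudsters caught and innocents flagged at a given threshold."""
    flagged = [is_fraud for is_fraud, score in cases if score >= threshold]
    caught = sum(flagged)                # true positives
    innocent = len(flagged) - caught     # false positives
    return caught, innocent

for t in (0.9, 0.6, 0.3):
    caught, innocent = flag_stats(t)
    print(f"threshold {t}: {caught} fraudsters caught, {innocent} innocents flagged")
```

Running this shows that each stricter flagging regime catches more fraud at the cost of more wrongly flagged people; where to set the threshold is exactly the kind of value-laden choice, not a purely technical one, that Krijger points to.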
Krijger therefore argues that we should not view algorithms as neutral technologies, but as systems imbued with subjective choices, such as which risk factors count as important. These technologies are essentially "opinions dressed up in code," as mathematician Cathy O'Neil puts it, and they require an ongoing critical attitude toward the underlying assumptions and ethical implications of AI. Proper use of AI models is therefore not just a matter of technical precision; it also demands an in-depth social discussion about which values should guide our digital future.
The growing use of AI requires what Krijger calls a 'mature' approach to ethical assessment within organizations. Both in his work and in his research, he emphasizes that discriminatory algorithms are not only a technological problem, but also an organizational and social issue. A broader discussion is needed about which values should be central to the use of AI and how these can be translated into justified forms of distinction. Only by having these uncomfortable discussions, Krijger argues, can we ensure a fair and ethical application of artificial intelligence.
This article is a summary of two recently published articles.
Read the entire article on the website of de Volkskrant (Dutch).
Read the entire article on the website of Het Financieel Dagblad (Dutch).
- More information
For more information or press requests, please contact faculty press officer Eddie Adelmund (Adelmund@esphil.eur.nl)