In December 2024, the book 'Digital Governance: Confronting the Challenges Posed by Artificial Intelligence' was released. The book is edited by PhD candidates Kostina Prifti, Esra Demir, and Julia Krämer, together with Klaus Heine, Professor of Law and Economics, and Evert Stamhuis, Professor of Law and Innovation, all affiliated with Erasmus School of Law. The book discusses various potential societal impacts of Artificial Intelligence (AI) and explores how laws and regulations can address these challenges. In this article, Prifti, Demir and Krämer provide a preview of the book.
Prifti explains: "The application of artificial intelligence (AI) technologies is becoming increasingly prevalent across a diverse range of fields, giving rise to a multitude of implications for our lives. These effects may present, on the one hand, opportunities. On the other hand, they may give rise to challenges that directly impact our fundamental rights and freedoms." He points out that the consequences of AI can be seen across different sectors: "For example, the book discusses the impact of AI in healthcare, education, public administration, legal certainty, the rule of law, and much more."
AI and healthcare: Does AI disrupt medical liability?
An example discussed is healthcare. According to Demir, the impact of AI on society is particularly visible in the healthcare sector. She explains that AI technologies, such as clinical decision support systems, are designed to assist doctors in diagnosing and treating diseases, thereby improving the quality of healthcare. However, legal concerns also arise with the use of clinical decision support systems. One of the chapters in the book addresses the question of whether the use of AI in these systems disrupts the necessary causal link in liability law — that is, whether it becomes more difficult to determine whether the harm a patient has suffered is actually a result of an error in the AI-assisted decision, or if the responsibility lies with the doctor or the healthcare institution. This raises questions about who is liable if an AI system makes an incorrect diagnosis or recommends inappropriate treatment and whether the patient is entitled to compensation if the mistake was caused by AI rather than a human error. "One possible societal consequence of this debate is, inevitably, whether the integration of these technologies deprives injured patients of seeking redress. The chapter points out that potential redress options that should normally be available to injured patients or victims may be excluded," says Demir.
Legislation and AI: How do they work together?
Prifti states that implementing AI technologies can significantly impact existing legislation because their use may expose gaps in current laws. He explains: "This could result in calls for amendments to existing legislation or the introduction of new regulatory frameworks." Prifti clarifies that AI applications often require new laws or guidelines, but AI can also help enforce existing laws more effectively when addressing new challenges.
The book discusses how current legislation can assist in solving problems in different situations and how new legislation can respond to this. Prifti mentions: "One example is the use of facial recognition technologies in education." He explains that one of the book's chapters discusses how the General Data Protection Regulation (GDPR) and principles such as proportionality and children's rights could be applied in this case. Another example is the workplace. According to Prifti, existing legislation could be used more effectively in this area: "The current provisions of the GDPR are underused in the protection of employee rights. The book assesses how current data protection law can reduce the risks involved and improve conditions for employees."
When is digital governance good governance?
The book also explores the relationship between AI and good digital governance. Digital governance refers to how governments, organisations, and other institutions use technology, in this case AI, to develop, regulate, and implement policies. Krämer says: "Interdisciplinarity plays a crucial role in digital governance." She explains that this is an important point that recurs throughout the book. Combining insights is critical for effective digital governance. Interdisciplinarity ensures a nuanced approach, which, according to Krämer, is impossible if digital governance is examined from a single discipline only. "When a researcher brings together insights from law, ethics, computer science, and even sociology, they are able to address digital governance in a way that's both comprehensive and practical. It helps avoid oversights that occur when we focus too narrowly. This approach ensures that the conversation around digital governance remains balanced. Whether it is one person or a team, drawing from multiple fields helps craft solutions that are more robust, grounded, and considerate of both the technical realities and human impact. In this way, interdisciplinary thinking enriches the discussion and leads to better, more sustainable governance frameworks."
Challenges of AI
Finally, we asked Krämer what practical challenges policymakers and legislators face when regulating AI. Krämer explains that the book discusses not just one but many such challenges, more than can be addressed in this article. She shares: "One of the main challenges is that AI systems, while not inherently intelligent, are incredibly adaptable. They process vast amounts of data and make predictions or decisions based on patterns that are not always obvious to humans. This adaptability complicates regulation because policymakers are trying to anticipate and manage risks that evolve as the technology itself evolves. So, it is not just about regulating a static tool but something that continuously changes, often in ways that are hard to predict."
More information
The book will be launched during the SSH Breed Annual Conference. For more information and registration, click here.