In two earlier blog posts (please see the ‘Related content’ section elsewhere on this page), we talked about ways to integrate Ethical, Legal, and Societal Aspects (ELSA) in the development and deployment of AI systems, and about ways to organize and promote transdisciplinary collaboration, i.e. collaboration between people with different expertise, backgrounds, and roles. In this blog, we share some of our findings from trying out these approaches in practice.
Blog by Dr. Marc Steen
ELSTA: Ethical, Legal, Societal, and Technological Aspects
Regarding ELSA, we chose to add Technology to the mix of perspectives. So now we have ELSTA: Ethical, Legal, Societal, and Technological Aspects. Technology is not put centre stage, as is done rather often in, e.g., ‘impact assessments’, where researchers look at the impacts of technology on society (in an ethical, legal, or societal sense). Such assessments, however, fail to appreciate that the relationship between technology and society is a reciprocal one: society (ethical, legal, and societal aspects) also influences the development and deployment of technology. Therefore, we put Technology in the same category as Ethics, Law, and Society. What we put centre stage, instead, is some slice of the real world, i.e. some specific phenomenon that happens. We then look at that slice of the real world through these four lenses: Ethics, Law, Society, and Technology.
In the course of 2023, we, and especially the PhD students, carried out empirical studies (fieldwork) as well as theoretical studies, focusing on a series of climate protests in The Hague. (Please have a look at the work done by Nanou van Iersel or Marlon Kruizinga.) These protests are interesting because they involve a wide range of ethical, legal, and societal questions, and also involve all sorts of technology, both on the side of the protesters and on the side of the city and the police.
Our ELSTA approach enabled us to look at such a complex phenomenon through different lenses. This brings us to another element of our approach: transdisciplinary collaboration.
Collaboration between people with different backgrounds and roles
As the term suggests, transdisciplinary collaboration refers to promoting and facilitating collaboration between people with different disciplinary backgrounds. We feel, however, that we need to understand this sufficiently broadly: we were looking for diversity not only in terms of discipline, but also in terms of professional background and role.
In our fieldwork, we brought together not only people from different disciplines (in our case: ethics, law, social science, and technology) but also people with different backgrounds and roles. For example, our fieldwork involved collaboration with people from the municipalities of The Hague and Rotterdam, people from different departments of the police, and people from large and small companies. Some people work in offices and contribute to local policy making, e.g., in a municipality; others work as police officers, walk the beat, and contribute to keeping streets safe. Some people need to work within boundaries and use all sorts of improvisation and discretion in their work, e.g., as project managers; others are required to think ‘outside the box’ and conduct experiments to see how technologies might work in practice, e.g., in a start-up company.
People from such different disciplines, with different backgrounds, and with different roles can all contribute to the responsible design and deployment of AI systems. We will continue to invite diverse people around the table, even though we know and appreciate that this is not an easy task.
These two elements (ELSTA and transdisciplinary collaboration) come together, for example, in our fieldwork, which is at the heart of the AI-MAPS project: diverse people going into the field, having new experiences, trying to make sense of them, and trying to make the world a better place. This, we propose, is what responsible innovation is about!
- Related content