In a world where technological innovations are increasingly becoming the norm, the question arises of how to involve all stakeholders in decision-making around these developments. From social media to artificial intelligence, these technologies transform how we live, work, and do business, but they also have a profound social impact. In this article, Marlon Kruizinga, PhD student at Erasmus School of Philosophy, talks about the role of participation and expertise in socially disruptive technologies and argues why 'ordinary citizens' need to be involved.
AI-MAPS
Kruizinga focuses his research on the Ethics of AI in Public Safety as part of the AI-MAPS (AI in Multi-Agency Public Safety) research project. The research is carried out in collaboration with various stakeholder groups, including the national police, municipalities, private companies, local residents and NGOs.
Socially disruptive innovations
Sometimes, a new technology is introduced that turns the world upside down and completely transforms how we work, live, and do business. At the same time, it causes major changes in social structures, behaviours, and interactions within society. Think, for instance, of social media or Artificial Intelligence (AI). Such technologies are known as socially disruptive innovations.
The literature on socially disruptive technologies often discusses principles such as democratisation, co-design and co-creation. "The idea behind this, roughly, is that all stakeholders who will develop, use, or be affected by a technology should have a say in what the technology will look like", Kruizinga says. Participation can range from determining which technology is appropriate for a given situation to designing a technology or shaping its implementation. "Participation can help ensure that all interests in a new technology application are represented and considered and that the eventual implementation of the technology is successful. However, there is a potential bottleneck in the expertise factor", Kruizinga says.
Bottleneck
A common issue in discussions, both academic and societal, is whether all stakeholders involved have enough expertise to participate meaningfully in discussions about new technologies. Kruizinga: "The most dominant issue in this respect tends to be technical expertise: do the various groups have the necessary technical knowledge to make intelligent, conscious choices about the design or use of a new technological application - for example, an AI system? After all, if one does not know what a machine does, how it is put together, and how it can malfunction, it seems difficult to assess the benefits, drawbacks, and risks of using it."
Stakeholder group 'the citizen'
Kruizinga further explains this problem using his experience and insights from the AI-MAPS research: "In the context of the AI-MAPS research project, current and potential AI applications for public safety in the Netherlands are being investigated. Here, the main stakeholder groups are usually the police, the government (often municipalities), citizens of the Netherlands, and private companies developing or using AI." AI applications for public safety can be developed and implemented nationally. "But it is often specific environments, neighbourhoods, or other security contexts that not everyone in the Netherlands deals with that turn out to be the main focus of particular AI applications", Kruizinga explains. He continues: "For example, the stakeholder group of 'citizens' is often made up of 'local citizens', or another section of the Dutch population, and their participation is focused on a specific AI application in their own environment or context of life. Think, for example, of football fans and possible facial recognition in stadiums, or residents of a neighbourhood where risk-analysis software could be deployed."
Clearly, these groups have a stake in the decision-making process on new technologies. However, when it comes to involving all stakeholders, it is often said that 'ordinary' citizens do not know enough about the technology to make meaningful contributions to its design or to judgements about its desirability. While it is possible to explain the technology to citizens or to train them, this can slow down the development process. "In this way, in the debate around participation, including citizens quickly becomes an option rather than the norm", Kruizinga explains. "Technicians are taken to have the most meaningful input, and they work - depending on the AI project in question - in the private sector or under the umbrella of the police or the municipality. The participation of these parties is thus immediately more assured." He adds that police and government - as respectively the enforcers and shapers of public safety - often automatically already play a role in the decision-making and design process around public safety technology. Thus, the factor of expertise and the social roles of police and government seem to combine to ensure that the participation of 'ordinary' citizens is seen as optional in this context.
Everyone's expertise
What is often overlooked here, according to Kruizinga, is that all the parties involved - from the government to citizens to the private sector - lack some of the expertise needed to have a meaningful say about new technology in a given context. And this is not just about technical knowledge: each stakeholder group has its own gaps in knowledge. Take, for example, the case of AI for public safety in a particular neighbourhood. "Technicians know most about the AI itself but relatively little about what goes on in the neighbourhood," Kruizinga argues. "Policymakers may know a lot about what is legal or practical for municipal administration but may lack technical and contextual expertise. Citizens lack technical expertise but contribute contextual expertise: knowledge of what the safety problems in the neighbourhood are and what solutions might work."
The case for participation in new technologies is thus not only ethical (all stakeholders deserve a voice) but also epistemological: all parties bring their own knowledge and complement one another, together forming a complete picture of problems and solutions. Participation thus helps ensure that technology developed or used in a specific situation is actually successful. "Successful not only because no one wants to oppose the technology, but also because everyone has contributed to ensuring that the tool serves its intended purpose: improving our shared lives."
According to Kruizinga, the conversation about participation in technological applications could shift, both in the case of AI for public safety and in other situations. "It does not make sense to single out citizens' expertise for doubt and to treat their participation as optional", Kruizinga argues. "This wrongly subordinates citizens' expertise about, say, their living environment or their problems to the expertise of policymakers, technicians, enforcers and so on. Given that each of these groups has gaps in its expertise, we should, from both an ethical and an epistemological perspective, consider all stakeholders equally when deciding on new technologies."