On Wednesday 12 June, the Technical Session on the use of future technologies in the field of security took place at the City Hall of the Municipality of Rotterdam. The aim of the session was to provide councillors with knowledge about and the opportunity to ask questions regarding the use of AI and data-driven tools in the context of security. Marc Schuilenburg, Professor of Digital Surveillance at Erasmus School of Law, was invited as a guest speaker and advisor. He stressed the importance of a balanced approach that includes both technological and ethical considerations.
Schuilenburg started his advice by highlighting the complexity of using AI and data-driven tools in the context of security issues. He referred to the childcare benefits scandal and other discriminatory algorithms to illustrate that technological knowledge alone is insufficient. "There is a need for legal and ethical knowledge, as well as experiential knowledge, to prevent technology from leading to negative effects such as discrimination and a lack of transparency," said Schuilenburg.
There is more than efficiency
Schuilenburg identified three sets of public values that are essential to consider when using AI in security applications: driving values, legal principles and procedural values. Safety, efficiency and effectiveness fall under the driving values and are essential for organisations such as the police. Legal principles include the non-discriminatory use of technology and respect for privacy. Finally, procedural values such as accountability and transparency are also necessary for the proper deployment of AI in security applications.
From negative impacts to robust frameworks
Schuilenburg pointed to several adverse effects that have arisen in the deployment of AI in recent years. Among others, he mentioned the dangers of 'over-policing' and stigmatisation due to predictive systems. These examples highlight the need not only to rely on technological solutions but also to think more carefully about the social, legal and ethical implications of AI tools.
To address these issues, Schuilenburg introduced the "four T's": Target, Tracked, Talked, and Tested. These four questions provide a framework for the design and evaluation of AI systems:
- Target: Does this problem need AI?
- Tracked: How do we ensure that AI systems comply with laws and regulations, both national and European?
- Talked: How do we integrate experiential knowledge into the design of AI?
- Tested: Is there evidence that AI tools are effective?
Smart lampposts in Lombardijen proved unsuccessful
Schuilenburg also gave practical examples to illustrate his arguments. For example, he mentioned the smart lampposts in Lombardijen, a neighbourhood in Rotterdam, which were supposed to reduce residential burglaries. However, they were quickly removed at the request of the neighbourhood's residents. This example highlights the importance of early community involvement and testing before large-scale deployment.
Moreover, Schuilenburg mentioned the danger of buying AI and data tools from private tech companies: "Not only do you buy equipment, but the data analysis is often left to private parties. You then face the danger of so-called vendor lock-in, where a public organisation becomes bound to a private organisation that ultimately has only one goal: to make more profit."
Councillors questioned Schuilenburg about the role of ethics in the deployment of AI applications. He stressed that ethical questions have not been left behind by new technology and remain as relevant as ever: "Proportionality, subsidiarity, transparency and accountability are classic issues that are still of great importance when using new technology."
What form of AI should we deploy in Rotterdam?
In response, Schuilenburg pointed out the importance of timely evaluation. The professor was amazed that many data-driven and AI tools are deployed in practice without any evidence that they work. "We are quite knowledgeable about the effectiveness of new technological applications. The least you can do as a public party is check whether the tool is effective."
However, evaluating is easier said than done, according to Schuilenburg. After all, a self-learning algorithm is constantly evolving. Schuilenburg: "The classic way of evaluating the output therefore no longer suffices. You must start evaluating at the input stage, in the datasets."
"AI must remain equitable and transparent"
Schuilenburg's advice is a call for a more integrated approach to technology in the public sector. He stressed that AI and data-driven tools should not only be introduced based on technological 'progress' but also meet legal and ethical standards. His arguments provided the city council with a framework to better assess future decisions regarding the implementation of AI and data-driven tools in the context of security.
"This balance between technological progress and ethical and legal accountability is crucial to ensure that AI not only contributes to greater safety and efficiency but also remains fair and transparent," according to Schuilenburg.