Blogpost for the AI-MAPS project by Kanan Dhru, Senior Lecturer LegalTech, The Hague University of Applied Sciences - With input from Sylvia I. Bergh, Senior Researcher, The Hague University of Applied Sciences
Artificial intelligence is becoming a key disruptor across industries, and its use is proliferating fast. For the field of public safety, AI technology has several important implications.
In its publication “Urban future with a purpose”, Deloitte identified 12 trends shaping the future of cities by 2030. One of these trends concerns surveillance and predictive policing through AI. The report examines how cities across the world are using AI technology to ensure safety and security for their residents while also safeguarding privacy and fundamental human rights. It presents this trend more as an ideal, arguing that successful implementation of AI means maintaining a critical balance between security on the one hand and privacy and freedom on the other.
It quotes a 2018 study by the McKinsey Global Institute, “Smart cities: Emerging tech that can make smart cities safer”, which shows how smart technologies such as AI can help cities reduce crime by 20 to 35 per cent. A series of other reports (such as those by IDC and ESI) point to the rapidly increasing use of AI in real-time crime mapping. In addition, facial recognition, biometrics and police body cameras are being used to address and prevent criminal incidents and activities.
Big data can be of critical importance in recognizing patterns and predicting certain behaviours in people. AI systems provide new and deeper insights to enforcement agencies, which increasingly rely on these tools to deliver on their mandates. While the reduction in crime statistics can be considered good news from many standpoints, important questions must be asked about the larger price being paid for these outcomes, and by whom.
The application of AI tools in surveillance and predictive policing requires a thorough investigation of its ethical, legal and social dimensions. The invasive nature of these technologies — in collecting people’s private data, in nudging their behaviour towards certain thoughts and decisions, and in shaping social interactions — is without precedent. While McKinsey’s aforementioned report discusses responsible application of AI with an emphasis on regulation, a more substantial policy focus is needed at national and international levels, especially given the fast uptake and growing number of use cases. The report highlights use cases of surveillance technology, including the Japanese police’s use of AI for predictive policing during the last Olympics and the Singapore police force’s use of wearable technology, such as smart glasses with video feeds, during Covid-19. Rio de Janeiro’s experiments with an app that maps the location-based frequency of crimes have shown promising results in addressing crime rates.
When implementing surveillance tools, city authorities must constantly weigh whether the public safety benefits to the community outweigh the individual freedoms that are compromised. The report emphasises that as deployment of these tools grows, it is crucial to communicate clearly and effectively with users about the different facets of this technology, especially about how their data is collected and used.
There are no easy answers to striking the right balance between public safety and protecting people’s privacy. But bringing different voices together around the table is key. The AI MAPS project focuses on precisely these facets of public safety, especially the ethical, legal and social aspects (ELSA). Through research, discussions and the development of knowledge tools, the consortium works towards creating tools and insights that further human well-being.