The Limits of Predictive Policing: “Human behaviour is not as predictable as people think”

Predictive policing, risk assessment, and artificial intelligence in law enforcement: Marc Schuilenburg, Professor of Digital Surveillance at Erasmus School of Law, has been researching the role of technology in crime detection and prevention for years. In this two-part series, we look both forward and backward with him: how new are AI applications in policing, what makes them ethically and practically problematic, and is a positive use also possible? In this first part, we explore the question: Where are the dangers of predictive policing?

The UK Ministry of Justice is currently running a pilot project to explore how AI can be used to improve public safety. According to The Guardian, various types of personal data are being analysed in this project to predict who might be at higher risk of committing serious violent crimes. Strikingly, mental health data is also included, such as information about susceptibility to addiction and previous episodes of self-harm or suicidal tendencies.

“Here we go again,” was Marc Schuilenburg’s first thought when he read the report. As Professor of Digital Surveillance, he has long warned about the risks and limitations of data-driven risk predictions in policing: “Human behaviour is not as predictable as people think. There is rarely an objective way to use these models responsibly.”

From classic algorithms to self-learning AI

Predicting crime using algorithms is not a new phenomenon. In 2012, the Los Angeles Police Department introduced the PredPol system, which predicts where crimes are likely to occur within the city. Since 2014, the Netherlands has used a similar system, the Criminality Anticipation System (CAS), to identify crime patterns. The effectiveness of such systems remains a topic of academic debate: an evaluation of PredPol reported positive results, while evaluations of CAS found no measurable impact on crime reduction.

Visualisation: Criminality Anticipation System (CAS)

However, the UK government’s plans to use AI represent a technical step forward, Schuilenburg says. “Traditional risk assessment tools are still somewhat manageable; they work with verifiable if/then rules and a limited number of variables. AI models, on the other hand, are self-learning, dynamic, and opaque. It’s often impossible to trace how the system reached a particular outcome. That makes them far more complex.”
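The contrast Schuilenburg describes can be illustrated with a minimal sketch. The rules, variable names, and thresholds below are hypothetical and not taken from any real police system; the point is only to show why an if/then tool is traceable while a self-learning model is not.

```python
# Minimal sketch; the variables and thresholds are invented for illustration.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

def rule_based_risk(prior_offences: int, age: int) -> str:
    """Classic if/then tool: every outcome can be traced back to one rule."""
    if prior_offences >= 3 and age < 25:
        return "high"
    if prior_offences >= 1:
        return "medium"
    return "low"

print(rule_based_risk(prior_offences=4, age=22))  # 'high', and we know exactly which rule fired

# A self-learning model induces its decision logic from data instead.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))                           # 10 anonymous feature columns
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(size=500)) > 1  # synthetic labels

model = GradientBoostingClassifier().fit(X, y)
score = model.predict_proba(X[:1])[0, 1]
print(f"risk score: {score:.2f}")  # a number, but no human-readable 'because':
                                   # the logic is spread over hundreds of learned trees
```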

Visualisation: 'if/then' algorithm vs predictive AI

From locations to individuals

In addition to the opaque nature of AI, the UK project goes further in another way. While systems like CAS focus on predicting risk areas, the UK pilot targets individuals. Sensitive personal data is used to determine who is most likely to commit a crime.

According to Schuilenburg, this shift fundamentally changes the risks involved. “Most concerns arise when predictions are made about individuals – in principle, this can apply to anyone, including people not suspected of any crime. Think about the right to privacy, stigmatisation, and the processing of special categories of personal data, such as ethnic origin, medical information, or political beliefs. These risks are less present at the area level, although critical questions still remain.”

Another issue is data quality. “Which variables do you need, how do you measure them, and what is their impact? These questions often go unanswered. Some citizens are also overrepresented in datasets, often people in vulnerable positions. This creates a self-reinforcing effect: the more data there is on them, the more likely they are to be flagged again.”
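The self-reinforcing effect Schuilenburg describes can be made concrete with a toy simulation. The numbers below are invented assumptions, not real data: two groups behave identically, but group B starts out overrepresented in the historical records.

```python
# Toy simulation under invented assumptions: identical behaviour in both groups,
# but group B begins with twice as many existing records.
records = {"A": 50, "B": 100}          # hypothetical record counts per group

for year in range(1, 6):
    for group, count in list(records.items()):
        flagged = count // 10          # flagging scales with how much data already exists
        records[group] += flagged      # every flag triggers checks, and checks add
                                       # new records, regardless of what they find
    print(f"year {year}: {records}")
```

Running this, the gap between A and B widens every year even though the underlying behaviour of both groups is identical: more data leads to more flags, and more flags lead to more data.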

The forgotten issue: feasibility

Besides ethical and legal concerns and data quality, Schuilenburg points to a frequently overlooked aspect: practical feasibility. “Imagine an AI system identifies sixty people in Rotterdam as potentially high-risk. What then? Do you place sixty officers at their doorsteps? For how long? A week, three months? It’s simply unworkable.”

A key lesson, Schuilenburg emphasises, is that technology never stands alone. It also requires an organisation to work with it. According to the professor, practical feasibility is structurally ignored. “Technology is presented as something neutral or autonomous, something that naturally leads to results. But in reality, it is always embedded in a social system. The citizens targeted by predictions, the professionals who must act on them, and the police organisation that changes because of this – they are often missing from the public debate.”

Ultimately, the police themselves are not helped by introducing these kinds of AI systems, Schuilenburg concludes. “It’s the frontline organisations that face the harshest criticism when they fail to respond to alarming signals. But you can’t hold people accountable for something that is practically impossible.”

Using AI for positive change

Although Schuilenburg raises critical concerns about using AI to predict crime, he also sees possibilities for a fundamentally different approach. As a member of a European research consortium, he is working on developing AI tools not focused on control, but on strengthening the connection between police and society. How AI can play a positive role in policing will be explored next week in part 2 of this series.

