Stakeholder meeting – ELSA at work

By Gabriele Jacobs.

Our first use case triggered some questions, which we put forward to our wider consortium and sounding board — in this case a quite diverse group of 20 people (AI scientists, start-ups, and Dutch and German police representatives). We had worried whether a hybrid setting would work well for discussions, and even though we missed the informal interactions of our first consortium meeting, we did not fall short of lively debate.

Our PhD candidates had prepared questions from their ELSA (ethical, legal, and social aspects) perspective in the context of:

  • Extensive data collection through the BRP (Basisregistratie Personen, the Dutch personal records database)
  • Sentiment analysis of social media/OSINT
  • Drone surveillance of police at work
  • Decision-making in the meldkamer (control room)

Several points emerged from the discussion that we will take to heart in our future work.

We reflected on the difference between “surveillance” and “monitoring”, and how AI might affect these concepts. Is neutral monitoring still possible with AI, or does AI typically drift into surveillance by identifying seemingly deviant behavior? Another issue we discussed was the meaning of accountability for AI, and whether accountability might increasingly be mistaken for a replacement of trust. Might the strong focus on accountability in AI become a kind of religious belief in “transparency”, disregarding its possible shortcomings and drawbacks? Is transparency the most important ingredient for trust, or is more needed? We also wondered how strong and independent the “human in the loop” actually is, and to what extent we as humans are already influenced by AI in our control of AI.

Another point was a reminder that the systematic consideration of different legal frameworks is important for understanding AI in public safety. More specifically, we discussed the German emphasis on data protection and its strict rules on the surveillance of protests and public demonstrations.

For me it was interesting to see that the more specific and concrete the questions our PhD candidates put forward, the livelier and more controversial the discussion became. Concrete settings and contexts are thus needed to unpack the values embedded in AI applications.

Wonderful to be part of such heated debates. To me this is ELSA at work.
