Ethical Take-aways from Lombardijen Workshop on Visualizing Positive Safety

Blogpost for the AI-MAPS project by Marlon Kruizinga

On 21 and 22 October 2024, AI-MAPS organized two workshops in Lombardijen, Rotterdam, where researchers worked together with residents to visualize a more subjectively safe and livable version of their neighborhood. The workshops were roughly divided into two segments. In the first, residents worked together to classify positive and negative images related to safety, which had been taken by (other) residents of the neighborhood themselves (a participatory technique called ‘photovision’). In the second, residents worked with researchers to generate ChatGPT image prompts by summarizing, and then building on, an unstructured discussion about wishes for a ‘better’ Lombardijen. Marlon Kruizinga, PhD researcher on ethical aspects of AI in public safety, took part in these workshops and noted some of the major ethical aspects of the use of both imagery and generative AI in this kind of participatory setting.

Ethical Aspects of Imagery in Participatory Workshops

The use of images in co-creative workshops, as either a focal point or a framing device, seems both to spark new discussion points among participants and to steer conversation toward more concrete territory. Conversations about the "greenery" in the neighborhood, for instance, can more easily be directed toward questions such as "what greenery should be placed where, and how should it look and be maintained?" when actual images of the neighborhood and its greenery are in play. This is primarily a pragmatic benefit.

Ethically speaking, using imagery may also help improve inclusivity in participatory workshops, since imagery is a language that anyone can use to transmit ideas and sentiments to others with potentially minimal loss of context from person to person, regardless of mastery of a shared verbal or written language. This is especially true of the use of photovision among neighborhood residents: all residents can participate, and all have access to the real-world context of what the pictures represent, because they live in roughly the same place.

Another ethical aspect, and possible downside, of using imagery is that it also inherently directs and frames the discussion based on the limits of what is represented in the image: what can we see in the images, and what do we not see? Both may draw our attention and lead the discussion. For instance, as we found during the workshops, top-down aerial images of cities and neighborhoods mostly represent the architectural, aesthetic and planning-based aspects of pleasant and safe living in the neighborhood, which means that discussions centered on this imagery will tend to focus on those topics. If not accounted for, this could come at the expense of other aspects, such as when we neglect to talk as much about social cohesion, public and private responsibilities, and neighborhood initiatives.

Ethical Aspects of Generative AI in Participatory Workshops

The generative AI used in the aforementioned workshops was ChatGPT, a large language model (LLM) created by OpenAI. As such, the ethical aspects of generative AI in participatory workshops must be clearly divided into ethical aspects of using generative AI in general, and ethical aspects of using the product known as ChatGPT. The use of ChatGPT during the Lombardijen workshops came in two forms: the use of ChatGPT for summarizing long, unstructured discussions in text, and the use of ChatGPT to generate prompts and images based on said discussion summaries.

ChatGPT proved quite efficient during our workshops at summarizing long, unstructured discussions on positive public safety, which were transcribed in real time by one of the researchers (one per group of participants). This could have positive implications for the discursive democratic values behind participatory and/or co-creative workshops focused on pleasant living, safety and policy. During such discursive processes, being able to summarize many different, unstructured points may very efficiently improve the focus of the discussion before moving on to the next phase. It may also help ensure that fewer opinions and comments from the course of a discussion are left behind when it comes time to move from discussion to decisions. The text-based functions of ChatGPT, and potentially of LLMs more generally, could therefore be an ethically beneficial tool for participatory processes by promoting discursive democracy.

The use of AI for image generation carries, in general, the same potential inclusivity benefits, and the same risk of framing discussions, that apply to using imagery in participatory workshops. AI image generation such as ChatGPT's may also offer efficiency gains similar to its text-based use. However, both ChatGPT and generative AI in general raise their own unique ethical issues. The bias of training data may bleed through into generative AI's output, both text and imagery. In terms of AI imagery, we noted during the workshops that ChatGPT may have a US-centric bias in how it represents architecture and the general aesthetic of city life. It also tends to be quite utopian and idealistic unless specifically asked otherwise. This kind of bias can become ethically problematic in myriad ways, but in the context of the Lombardijen workshops the main issue was that residents did not feel that most of the images were a realistic representation of their everyday context and culture. More pragmatically, it was clear that ChatGPT's image outputs can still be relatively imprecise. It was not always possible to amend a specific part of an image, and requesting a specific addition through new prompts could lead to unintended changes, or fall short of the intended change (compared to making the change by hand). This imprecision may work to the detriment of clarity in discussions, or of the formation of plans and solutions in policy discussions.

Finally, the use of ChatGPT, and more generally of generative AI provided by private companies, for co-creative policy workshops needs to be ethically questioned in terms of how it makes users (such as municipalities, police, academia, etc.) dependent on private, international tech companies such as OpenAI. Especially if this way of using LLMs and image-generation AI is adopted for citizen co-creation of policy in some systematic fashion, e.g. within a municipality, this dependence would seem to compromise the process in terms of responsibility and accountability. The municipality in question, for instance, could not answer for potential biases in the AI models that inadvertently enter the process, as we have already seen is possible. Private companies can also be expected to change the AI model, raise the price, or make any number of other decisions based on a profit motive, without any regard for (and likely counter to) the discursive democratic needs and principles of policy-making and participatory workshops.

Conclusion

The use of ChatGPT and photovision during the Lombardijen workshops points to several ethically positive and negative aspects of using imagery, generative AI, and specific software such as ChatGPT in participatory workshops. In light of these ethical insights, it may be fruitful to further pursue the use of imagery, LLMs and other generative AI in participatory workshops and other co-creative processes focused on matters of public (safety) policy. However, it also seems that experimentation with generative AI other than ChatGPT, as well as a shift away from privately owned generative AI in general, may be warranted, due to significant ethical drawbacks and risks. It will be highly interesting to see how the positive potential, as well as the risks, of generative AI in participatory processes further crystallizes in future trials.

