Reflection, Deliberation and Conversational Agents

Blogpost for the AI-MAPS project by Michaël Grauwde

Reflection is a loaded term. As soon as we hear it, we imagine many different things, which makes reflection hard to define. It can be seen from many different perspectives, but from ours, reflection is a form of thinking. This makes creating an agent that supports reflection a fascinating but difficult task.

But it is important to explore, as conversational agents already hold sway over people's choices and preferences, and there has been research on their ability to affect people's behaviour. We have long known about the impact of agents on people. With Joseph Weizenbaum's ELIZA, for example, the first chatbot of its kind, people felt they were interacting with an intelligent system. In reality, ELIZA simply used a pattern-matching technique: it identified keywords in the user's input and substituted them into response templates. In its most famous script, the system played a Rogerian psychotherapist and simply "reflected" the person's statements back onto them. So reflection, in some sense, has long been intertwined with chatbot design. Since then, many conversational agents have been used to influence behavioural change in some way.
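Weizenbaum's actual implementation was considerably more elaborate, but the core trick, matching a keyword pattern, swapping pronouns, and reflecting the phrase back as a question, can be sketched in a few lines of Python. The rules below are illustrative inventions, not ELIZA's real script:

```python
import re

# Pronoun swaps so the user's phrase can be "reflected" back at them.
PRONOUN_SWAPS = {"i": "you", "my": "your", "am": "are", "me": "you"}

def swap_pronouns(phrase):
    return " ".join(PRONOUN_SWAPS.get(w, w) for w in phrase.lower().split())

# Keyword patterns paired with response templates (illustrative examples).
RULES = [
    (re.compile(r"i need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

def respond(text):
    # Return the first matching rule's template, filled with the
    # pronoun-swapped capture; otherwise fall back to a stock prompt.
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(swap_pronouns(match.group(1)))
    return "Please go on."
```

For instance, `respond("I am feeling anxious")` yields "How long have you been feeling anxious?". No understanding is involved anywhere, which is exactly why people's strong reactions to ELIZA were so striking.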

Reflection

Reflection, however, remains difficult to break down. What is reflection? One definition comes from the American philosopher and psychologist John Dewey, who described reflection as "active, persistent and careful consideration of any belief or supposed form of knowledge in the light of the grounds that support it and the further conclusions to which it tends." This definition treats reflection as concentrated attention to a particular matter in the light of new information. Our work is particularly interesting, then, because we analyse how a technology like a conversational agent can help people carefully reconsider ideas they already hold and assess them in a new light. In a psychological sense, reflection also appears in dual-system theories of thinking, which distinguish an intuitive mode from a more reflective one.

What type of agent are we building?

In our work, we aim to build a conversational agent that supports reflection in deliberative processes. This is interesting because not only is reflection part of deliberation as a process, but deliberation is also part of reflection. In a deliberative process, reflection is necessary to consider the positions of the various stakeholders around the table; conversely, deliberation is vital to reflection, as it requires taking a step back and considering one's thoughts in a slower, more concentrated manner. Deliberation can be defined as "communication that induces reflection on preferences, values, and interests in a non-coercive fashion". We are therefore aiming to build a conversational agent that helps the various stakeholders involved deliberate in a reflective capacity, fostering valuable reflection in deliberative processes in the public safety domain.

