Value Alignment as a Sustainable Co-Learning Process: To Be Supported by a Conversational Agent

[Image: crowd on a street]

By Mark Neerincx, 28 June 2023

AI-MAPS studies the implications of AI development and deployment in the public safety domain, addressing the ethical, legal and societal aspects (ELSA) in a comprehensive way. Humans are adaptive, learning beings who continuously attune their decisions and actions to their situated needs and goals, both implicitly and explicitly. The introduction of AI technology, with its own capabilities to adapt, learn, act and show emergent behaviors, makes it harder to predict the ethical, legal and societal consequences. AI applications will not operate in isolation, but will be used by humans for specific objectives in certain environments (society, community, team, nature, Internet, …). The components of this socio-technical system, that is, the human, the AI and the environment, have their own self-regulatory processes and adapt to each other. Moreover, it is in practice an ensemble of different types of AI algorithms and models (each with its specific grounding) that acts, or may act, in an organization and in our society, with internal and external interdependencies.

The specific interdependencies between these components, and the mutually adaptive behaviors of AI applications, humans and environment, can give rise to specific risks: (implicit) discrimination, opaque or uncontrollable influence on opinions and behavior, or ambiguity about who is responsible for decisions and information processing. An ongoing critical attitude, systematic reflection, experience-based learning and opportunities for modification are required to establish a socio-technical system that acts and evolves in line with the human values at stake.

In different use cases, we apply this evolutionary approach in co-creation with all stakeholders. As a specific example, a conversational agent will be developed that supports democratic deliberation for value identification and weighting. The agent will help stakeholders map out their values and the weights they assign to them, and relate these to the values of other stakeholders (a minimal illustration of this value-mapping step follows below). Note that the agent is an AI application in itself, so we also take ELSA aspects into account during its development, such as privacy and achievement: What are people willing or unwilling to share in a workshop setting, and what does the deliberation actually yield for them? The collaboration between the different disciplines and stakeholders is fundamental and unique to this approach. For example, knowledge from practice helps to make the technology (such as the conversational agent) inclusive: if we want to involve security officers and citizens, we need to know how power relations work, so that everyone can have an equal say in consultations or deliberations. And we need to know how groups communicate, so that the conversational agent's communication fits in with that. To develop this transdisciplinary knowledge, researchers from all “AI-and-ELSA disciplines” are involved: legal scholars, sociologists, philosophers and computer scientists. We need all those disciplines to establish, in a responsible way, a socio-technical system that includes AI.
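To make the value-mapping step concrete, the sketch below shows one way such an agent could represent elicited value weights and flag the largest disagreements between stakeholders as topics for further deliberation. This is a minimal illustration only: the names (`ValueProfile`, `overlap`) and the example weights are hypothetical and do not describe the actual AI-MAPS agent.

```python
from dataclasses import dataclass, field

@dataclass
class ValueProfile:
    """A stakeholder's elicited values (hypothetical structure, for illustration)."""
    stakeholder: str
    # value name -> weight in [0, 1], elicited during the deliberation
    weights: dict[str, float] = field(default_factory=dict)

def overlap(a: ValueProfile, b: ValueProfile) -> dict[str, float]:
    """Per-value weight difference between two stakeholders (0 = full agreement)."""
    values = set(a.weights) | set(b.weights)
    return {v: abs(a.weights.get(v, 0.0) - b.weights.get(v, 0.0)) for v in values}

# Example weights are invented, purely to show the mechanics.
officer = ValueProfile("security officer", {"safety": 0.9, "privacy": 0.4})
citizen = ValueProfile("citizen", {"safety": 0.6, "privacy": 0.9, "autonomy": 0.7})

# The agent could surface the largest gaps first as topics for deliberation.
gaps = sorted(overlap(officer, citizen).items(), key=lambda kv: -kv[1])
print(gaps)  # here: autonomy and privacy show the largest gaps
```

In the actual agent, such profiles would be built up through dialogue rather than hard-coded numbers, and the weights themselves would be co-constructed and revisited over time, in line with the co-learning approach described above.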

Related content
Stakeholder meeting – ELSA at work, by Gabriele Jacobs
ELSA 2.0
Related links
Overview blogposts | AI Maps
