The Council of Europe has invited Alberto Quintavalla, co-Director of the Erasmus Center of Law and Digitalization and Assistant Professor of Innovation of Public Law, for an exchange of views on the risks artificial intelligence poses to human rights, on the occasion of the inaugural meeting of the Drafting Group on Human Rights and Artificial Intelligence (CDDH-IA). The CDDH, the Council of Europe's Steering Committee for Human Rights, is preparing a Handbook on Human Rights and Artificial Intelligence, with the drafting carried out by the CDDH-IA. The Handbook aims to provide further guidance to government officials and policymakers of Council of Europe Member States in aligning the development and deployment of AI systems with human rights.
The Committee of Ministers of the Council of Europe adopted the Framework Convention on Artificial Intelligence and Human Rights, Democracy, and the Rule of Law in May 2024, the first international treaty addressing the ethical and legal aspects of AI. Because it is a global treaty, its regulatory provisions are broadly framed. This prompted the Council of Europe to commission a Handbook on Human Rights and Artificial Intelligence. The Handbook is intended as a practical tool for applying human rights standards in the context of AI, offering guidance that helps government officials and policymakers of Council of Europe Member States apply existing legal frameworks. The working group responsible for drafting the Handbook is expected to finalise its work by December 2025.
The most pressing human rights risks in AI
“Certain human rights take centre stage in contemporary discussions on the impact of AI technology on human rights”, Quintavalla explained. Among these, the right to privacy - referred to broadly in the European Convention on Human Rights as the right to respect for private and family life (Article 8) - stands out. Equally significant are protection from discrimination and the right to a fair trial. However, the influence of AI extends far beyond these ‘usual suspects’.
“AI can impact several human rights of different generations”, the researcher emphasized. This was shown in a book Quintavalla co-edited with Jeroen Temperman, Professor of International Law at Erasmus School of Law, which also features contributions from several colleagues. He explained: “Temperman showed the impact that AI can have on religious freedom, Assistant Professor Enrique Santamaria Echeverria discussed the implications of AI on the right to health, and I outlined the relationship between AI and the right to a healthy environment.”
Which rights should carry more weight?
While some human rights receive greater attention in the context of AI, Quintavalla argued for a broader perspective: “AI impacts most, if not all, human rights.” He explained that certain rights, such as data protection, benefit from established legal frameworks like the Council of Europe’s Convention 108+, but this should not overshadow the complex interplay of human rights affected by AI.
“For instance, the use of remote biometric identification, usually associated with the right to privacy, can have manifold impacts on different human rights such as protection from discrimination, freedom of assembly, and security, as well as the right to asylum”, he noted. Moreover, AI’s capacity to influence multiple rights simultaneously means these rights can sometimes compete with one another. “Policymakers should focus on finding a way to deal with such complexity”, Quintavalla explained.
The role of the Handbook
The researcher emphasized that the Handbook primarily targets public officials in Council of Europe member states who work at the intersection of AI and human rights. “These actors may use the Handbook as a practical tool for applying relevant guidance in the interpretation of existing legal frameworks”, he said.
However, the Handbook’s relevance extends beyond policymakers. “Lawyers and other legal professionals could also benefit from having a comprehensive reference for human rights and AI”, Quintavalla added. Other interested parties, such as human rights organizations and AI developers, might also find the Handbook relevant. Businesses, in particular, have a critical role to play. “Businesses are key actors in the development of AI and can play an essential role in ensuring that AI technologies respect human rights.” By complementing the Framework Convention on Artificial Intelligence and Human Rights, Democracy, and the Rule of Law with additional guidance, the Handbook seeks to fill gaps where the Convention does not delve into specific human rights issues.
Quintavalla explained why, given the pace of AI developments, adaptability is essential for regulatory tools. “The Framework Convention is broadly phrased not only to ensure global outreach but also to provide sufficient flexibility for adapting to rapid developments in AI technology.” Similarly, the Handbook aims to strike a balance between offering concrete guidance and maintaining long-term relevance.
Balancing innovation and rights protection
The researcher acknowledged the challenging nature of balancing innovation with the protection of fundamental rights. “Finding a fair balance requires good knowledge of the technical components and a more comprehensive view of affected human rights - both positively and negatively.” He stressed that legal and ethical experts play an indispensable role in this process. “They can help create a regulatory framework in which AI technology can be developed and implemented responsibly while respecting human rights and values.”
Ultimately, decisions about this balance are societal ones. “These decisions will be shaped by the worldview that we as a society hold”, he concluded.
More information
For more information about the book by Quintavalla and Temperman, click here.