At AI-MAPS, we’re fortunate to be surrounded by a highly active advisory and sounding board. With bi-weekly operational reflections alongside the Impact Coalition of Safety and Security, bi-annual AI-MAPS advisory board meetings, and annual supervision meetings with the NWO, these interactions have become institutionalized multi-stakeholder exchanges that we deeply value. I often leave these gatherings inspired, armed with fresh insights on what matters most, how we communicate, and how we approach the challenges and opportunities ahead. This year was no exception.
A Growing Learning Community
In September, we held our third AI-MAPS Sounding and Advisory Board meeting, followed by our third NWO Supervisory Committee meeting in November. The growth of our “learning community” is both exciting and humbling. But with this growth comes the need to rethink traditional meeting formats.
Classic academic settings—while useful in certain spaces—don’t always fit the needs of such a diverse and dynamic group of stakeholders. That’s a challenge we’re ready to embrace as we innovate future formats for more effective collaboration and exchange.
What stood out this time was the praise and admiration for our incredible team of four PhD researchers. One board member captured it beautifully, describing their contributions as powered by “mature enthusiasm.” It fills me with pride and gratitude to see how these bright minds embody the diversity and energy of AI-MAPS. Their work helps us live up to our ELSA (Ethical, Legal, and Societal Aspects) methodology—where rich debates, diverse perspectives, and critical friendships push us forward.
Reflecting on Our Assumptions: Lessons Learned
A highlight of our recent discussions was revisiting the assumptions and theory of change we crafted at the project’s outset. Naturally, we’ve learned along the way, and with new insights comes the need to revise some of those early beliefs.
- Trust and Societal Tensions
Initially, we focused on building trust in AI technology itself. However, our journey has revealed that concerns about AI mirror broader societal tensions, such as trust in governmental and private institutions. AI doesn’t exist in isolation; it reflects, and sometimes amplifies, these challenges.
- Beyond Policy Checklists
At the start, we anticipated the need for comprehensive policy checklists for AI implementation. But the landscape has evolved significantly. Today, robust guidelines exist at regional, national, and European levels. What stakeholders truly need now is contextualized guidance: practical tools for applying these policies to specific, real-world cases.
- More-Than-Human AI
One of the most profound shifts has been expanding our focus from “human-centered AI” to a more-than-human perspective. This insight first emerged from our work in the Living Lab Scheveningen, where we realized we had overlooked nature’s role in public safety dynamics.
A planetary perspective on AI ethics is no longer optional—it’s essential. At a basic level, this means critically reflecting on the proportionality of energy consumption in AI experimentation. Conceptually, it demands we consider inter-species justice and shared needs within urban ecosystems. AI’s ethical evaluation cannot remain anthropocentric if we truly want to align with our ELSA approach.
Looking Ahead
These lessons—and the thoughtful feedback from our boards—are driving us toward more impactful, inclusive, and ethically sound outcomes. Our learning community has grown stronger, and so has our collective understanding of what AI means for society, safety, and beyond.
We’re ready to keep refining, rethinking, and reimagining AI’s role in a complex world—always guided by the critical friendships and fresh perspectives that our stakeholders so generously share.
Here’s to embracing growth, diversity, and a vision that extends far beyond technology itself.