Watch the full recording here: https://youtu.be/-SEOeEfxXJc?si=PELnP_d3Q0bU_j7v
1. Introduction and Purpose
Artificial intelligence is increasingly embedded in business models, public administration, financial systems, and digital platforms. While adoption is accelerating, governance, accountability, and transparency mechanisms are struggling to keep pace. In response, EWMD hosted a panel discussion on AI governance and algorithmic bias, bringing together interdisciplinary expertise to examine risks, responsibilities, and pathways for systemic change.
The discussion was moderated by Nadine Nembach, former EWMD International Co-President, Project Leader of the Brussels Declaration, and CEO of N² Consulting & Training. With extensive experience as an international trainer, coach, and facilitator, Nadine framed the conversation through the lens of inclusive leadership, policy engagement, and societal impact.
2. Event Objectives
The event aimed to move beyond awareness-raising toward concrete empowerment. Its objectives were to enable participants to:
- Understand their rights under the EU AI Act and emerging US regulations
- Strategically document algorithmic harm for accountability and potential legal action
- Develop tactical defenses while advocating for systemic regulatory change
- Understand the technical mechanisms behind algorithmic bias across platforms
- Quantify the true business impact of AI bias, moving beyond engagement metrics to revenue loss
3. Speaker Profiles and Perspectives
Ruth M. Dasarry Sierra is the Founder of Genesis Coaching Company, holds a PhD in Engineering, and works as an AI Execution Architect. She brings over 13 years of experience in engineering systems and manufacturing, applying deep technical expertise to business strategy.
As a business leader and coach, Ruth supports women entrepreneurs in building and scaling their businesses through systems and frameworks that responsibly integrate AI. She is particularly known for her data-driven documentation of platform discrimination, translating complex AI governance failures into actionable insights. Her work focuses on algorithmic transparency, regulatory compliance, and revenue protection for women-led businesses operating within biased digital infrastructures.
Johannes Gottschalk is a PhD researcher at the Department of Islamic-Religious Studies (DIRS) at Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU). With a background in linguistics and IT, his research explores how artificial intelligence shapes human fear, meaning, and responsibility, especially in ethical and religious contexts. His work critically examines whether AI systems can remain accountable to human values, morality, and societal norms.
4. AI Governance: From Principles to Enforcement
Ruth emphasized that AI governance cannot remain at the level of ethical declarations. Transparency, accountability, fairness, and human oversight must be operationalized and enforced throughout the entire lifecycle of AI systems.
She illustrated this through case studies:
- A biased worker-ranking algorithm in France that penalized workers without contextual understanding
- A UK visa application algorithm that discriminated based on nationality
- A Scandinavian financial compliance failure, where AI systems failed to detect large-scale money laundering
These cases demonstrated how algorithmic systems, when poorly governed, institutionalize discrimination while obscuring responsibility.
5. Transparency, Power, and Platform Accountability
Johannes raised critical questions about the feasibility of transparency in proprietary systems. Using platforms such as LinkedIn as an example, he highlighted how opaque algorithms concentrate power while denying users meaningful insight or recourse.
The panel discussed the role of open-source technologies and regulatory enforcement as potential counterbalances, emphasizing that transparency is not about revealing proprietary code, but about making governance logic, accountability mechanisms, and impact visible.
6. Algorithmic Bias and Lived Experience
A Mentimeter poll revealed that a significant number of participants suspected algorithmic suppression of their content across platforms. When platforms reported “no issues,” participants expressed feelings of frustration, anger, distrust, exclusion, and powerlessness.
This emotional response highlighted that algorithmic bias is not only technical, but also psychological, economic, and societal.
7. Economic Impact and Tactical Responses
Ruth quantified the business impact of algorithmic bias, estimating monthly revenue losses of €10,000–15,000 for affected entrepreneurs and businesses.
She recommended:
- Systematic documentation of platform performance
- A/B testing of content formats and language
- Reducing dependency on single platforms
These strategies allow users to regain agency within opaque systems while pushing for broader reform.
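For readers who want to put the first two recommendations into practice, the sketch below shows one simple way to keep a running log of post performance and compare two A/B content variants. It is a minimal illustration in Python, not a method presented at the event; the field names, platform, and numbers are hypothetical.

```python
from datetime import date
from statistics import mean

# Hypothetical performance log: each entry records one post's reach.
# Variant "A" and "B" stand for two content formats being A/B tested.

def log_post(rows, day, platform, variant, impressions):
    """Append one observation to the in-memory log."""
    rows.append({"day": day, "platform": platform,
                 "variant": variant, "impressions": impressions})

def variant_average(rows, variant):
    """Mean impressions for a given A/B variant (0.0 if no data)."""
    values = [r["impressions"] for r in rows if r["variant"] == variant]
    return mean(values) if values else 0.0

rows = []
log_post(rows, date(2025, 5, 1), "LinkedIn", "A", 1200)
log_post(rows, date(2025, 5, 2), "LinkedIn", "B", 450)
log_post(rows, date(2025, 5, 3), "LinkedIn", "A", 1100)
log_post(rows, date(2025, 5, 4), "LinkedIn", "B", 500)

print(variant_average(rows, "A"))  # 1150.0
print(variant_average(rows, "B"))  # 475.0
```

Even a lightweight record like this turns a vague suspicion of suppression into a dated, quantified account that can support a complaint or legal documentation.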
8. AI Literacy and Readiness
Ruth outlined three levels of AI readiness:
- Foundational literacy – understanding what AI is and how it operates
- Technical integration – embedding AI responsibly into workflows
- Strategic governance – aligning AI use with values, regulation, and long-term goals
Johannes reinforced the risks of uncritical reliance on large language models, particularly their limitations in reasoning and ethical judgment.
9. Conclusion and Call to Action
The event concluded with a strong call to action: AI governance is a shared responsibility. Users, businesses, policymakers, and civil society must actively shape systems that are fair, transparent, and accountable.
EWMD reaffirmed its commitment to fostering informed dialogue, empowering women leaders, and influencing European-level governance to ensure that AI serves society — not the other way around.