Artificial intelligence may be borderless, but ethics are not. That tension will take center stage in Doha this fall when Hamad Bin Khalifa University (HBKU) convenes one of the year’s most ambitious gatherings on AI ethics.
From September 28–29, 2025, the Qatar National Convention Center (QNCC) will host AI Ethics: The Convergence of Technology and Diverse Moral Traditions, a two-day conference aiming to tackle a question that Silicon Valley often sidesteps: how should globally diverse values and traditions shape the rules governing AI?
Why It Matters
AI is being deployed at a faster pace than most governments can regulate. Whether in healthcare, finance, security, or education, algorithmic decision-making is now baked into daily life. But with cultural norms and ethical frameworks varying worldwide, whose values guide the code?
That’s where HBKU is stepping in. The event promises to be a convergence point for academics, policymakers, tech executives, and ethicists to debate how global standards can evolve without flattening cultural nuance.
The stakes aren’t just philosophical. Without consensus, companies face legal and reputational risk, from lawsuits to public backlash, while communities risk harm if AI systems reflect only narrow worldviews.
Six Frontlines of Ethical AI
The conference agenda will span six thematic areas where AI is already raising real-world stakes:
- Healthcare: From diagnosis bias to patient privacy.
- Urban Design: Smart cities that balance innovation with inclusivity.
- Security: Surveillance vs. civil liberties.
- Education: Personalized learning without deepening inequities.
- Finance: Algorithmic trading, lending bias, and systemic risk.
- Future of Work: Automation’s social and economic impact.
Who’s Speaking
The roster is global and heavyweight, with representation from academia, industry, and public policy. Featured speakers include:
- Amr Awadallah (Vectara) – co-founder of Cloudera, now focused on retrieval-augmented AI.
- Mark Coeckelbergh (University of Vienna) – leading European voice on AI philosophy.
- Munther Dahleh (MIT) – expert on complex systems and policy.
- David Leslie (The Alan Turing Institute) – researcher in responsible AI governance.
- Jeroen van den Hoven (Delft Institute of Design for Values) – pioneer in ethics-by-design approaches.
- Nancy Jecker (University of Washington) and Yali Cong (Peking University) – bringing bioethics and global perspectives.
And that’s just a fraction of the speaker lineup, which also features industry leaders from Microsoft, Smart City Cluster, Smart Cities Council, and Northwestern University.
A Global Imperative
HBKU is framing the event as not just a conference, but a call to collective responsibility. The university emphasizes that shaping AI requires collaboration across disciplines, geographies, and traditions—a contrast to the predominantly Western-centric debates that dominate headlines.
The timing couldn’t be sharper:
- Regulatory momentum is building in the U.S. (AI Executive Order), EU (AI Act), and China (algorithmic governance rules).
- Global industry players like OpenAI, Google, and Anthropic are accelerating AI rollouts with little international alignment.
- Emerging economies are demanding a seat at the table, arguing that ethics must account for diverse social and cultural contexts.
For the Media
Credentialed journalists will have access to keynotes, expert panels, and one-on-one interview opportunities with speakers. HBKU has published a full agenda and speaker list [here].
The Takeaway
The Doha conference could help redefine how AI ethics are shaped on the world stage. Instead of a top-down regulatory race, it points toward a pluralist model of AI governance—one that respects cultural differences while still building global safeguards.
Whether the world listens is another matter. But with AI accelerating faster than any regulatory framework, forums like this may prove essential in ensuring the technology reflects not just what we can build—but what we should.