Technomoral Conversations: What the Majority World Can Teach Us about AI

  • Edinburgh Futures Institute, Level 0 Event Space, 1 Lauriston Place, Edinburgh, Scotland, EH3 9EN, United Kingdom

About this event

Artificial Intelligence models ‘learn’ and reproduce biases as a result of their training data, which is largely drawn from websites based in the US and other Western countries and is heavily skewed towards English-language sources. At the same time, the work of training AI models and making them ‘safer’ for human consumption is outsourced to precarious and under-supported workers in developing countries. Tech companies in Silicon Valley, together with Western governments and institutions such as the EU, currently dominate the global conversation on AI. Yet there is much that the Majority World has to teach us about AI, and this perspective is too often marginalised in discussions of what a future with AI ought to look like.

In this Technomoral Conversations panel, we will hear from leading voices from the Majority World on what they have learned from and about AI, and the issues and visions they would like to see taken up more broadly as society grapples with the social and ethical implications of these emerging technologies.

Please note this is a hybrid event.

Important notice: This event will be photographed/recorded, and images may be used for future marketing, promotional or archive purposes. If you would prefer not to be photographed, please let the organisers know at the event.

Speaker biographies:

Professor Shannon Vallor (co-chair) holds the Baillie Gifford Chair in the Ethics of Data and Artificial Intelligence in the University of Edinburgh’s Department of Philosophy. She is Director of the Centre for Technomoral Futures in the Edinburgh Futures Institute, and co-Director of the UKRI BRAID (Bridging Responsible AI Divides) programme. Professor Vallor's research explores the ethical challenges and opportunities posed by new uses of data and AI, and how these technologies reshape human moral and intellectual character. She is a former AI Ethicist at Google, and advises numerous academic, government and industry bodies on the ethical design and use of AI. She is the author of Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting (Oxford University Press, 2016), and The AI Mirror (Oxford University Press, 2024).


Srravya Chandhiramowuli (co-chair) is a PhD candidate in the University of Edinburgh’s Institute for Design Informatics and a PhD affiliate at the Centre for Technomoral Futures. Her research examines the supply chains of dataset production for training and fine-tuning AI models. Her work offers rich insights into the on-the-ground practices of data annotation for AI, bringing particular attention to systemic challenges and frictions in data and AI pipelines. Building on scholarship in Human-Computer Interaction (HCI) and Science and Technology Studies (STS), Srravya’s research seeks to contribute towards just and equitable AI futures.


Tarcizio Silva is a researcher based in São Paulo whose work promotes decolonial and Afro-diasporic lenses for understanding and influencing the governance of the internet, AI and emerging technologies. They are a Tech Policy Senior Fellow at the Mozilla Foundation and a PhD candidate at UFABC.


Kingsley Owadara (He/Him) is the founder of the Pan-Africa Center for AI Ethics, a not-for-profit organization dedicated to ensuring that AI is developed and deployed in ways that prioritize human-centric values. As an AI Ethicist at the Center, he leads the crafting and refinement of ethical frameworks that place human values, ethics and inclusivity at the heart of how AI technologies are built and used.


Tara Fischbach is the Public Policy Manager for Community Engagement and Advocacy for the Middle East at Meta. She has worked in public policy, development and media, with a strong background in research, and has experience working with government agencies, international NGOs and community-level organizations on research, communications and development projects.