
Our Research

Our research themes:
-
Algorithmic Transparency and Explainability
Algorithmic Justice, Power, Freedom and Equity
Bias and Discrimination in Machine Learning
Ethics of Algorithmic Decision-Making
Algorithmic Accountability and Responsibility
-
Ethics of Automation
Ethics of Artificial Agent and Robot Design
AI 'Smart' Tech and Environments
Ethics of Affective and Social Technologies
Ethics of Knowledge Augmentation
-
Dataveillance and Data Privacy
Data Justice and Data Violence
Ethics of Data Ownership, Governance and Stewardship
Ethical Data Science and Data Practice
-
Emerging Tech and the Human Image
AI, Automation and Human Wisdom
Emerging Tech and Human Autonomy
AI, Religion, Art and Meaning
-
Emerging Tech and Democratic Flourishing
Emerging Tech and Human Flourishing
Emerging Tech and Community Flourishing
Emerging Tech and Cultural Flourishing
Emerging Tech and Planetary Flourishing
PhD Research
Innovative PhD studentships in the Ethics of Data and AI, supervised by multi-disciplinary teams across the University:
Harry’s project investigates how moral expertise emerges through skilled engagement in relationships of reciprocal vulnerability and feedback, and how these mechanisms are disrupted in the socio-technical systems that develop and deploy artificial intelligence.
Sasha’s research explores how newly developed AI and machine learning (ML) tools may impact the epistemic climate in which both healthcare users and professionals participate. Her project aims to identify and implement measures that combat injustices without rejecting the benefits of ML in healthcare provision.
Meenakshi’s work aims to explore how Indian EdTech engineers incorporate ideas about teaching and learning into the development of AI education technologies for K-12 classrooms. She aims to understand how ML infrastructures interact with and influence engineering practices in AI EdTech development.
Martin’s research project aims to develop and analyse a particular visual practice of artistic inquiry characterised by adversarial interventions with generative AI applications. This project offers a new perspective on the aesthetic, epistemic, evidential and translational value of art and design work that interrogates the ethical and cultural implications of generative AI.
Elisa’s work explores how we can empathise with the lived experience of ageing populations, designing digital devices and services that respond to their hopes and fears. The aim is to develop an expanded empathy framework for intergenerational inclusive design and codesign.
Han-Ju’s PhD research focuses on the socio-ethical critique of technology adoption within Scottish social enterprises, stemming from her passion for investigating the intersection between the ethics of technology and alternative organisations.
Iñaki’s work investigates the role of dialogue design in shaping the discourse, normative content, and outcomes of public engagement with emerging technologies.
Yiping’s work focuses on the introduction of big data-driven technology, exploring the context, differences and process of technology development, as well as the performativity realised through stakeholders’ and practitioners’ engagement, discussion, cooperation and negotiation during the implementation process.
Charlotte's work will focus on AI ethics in creative spaces, including interdisciplinary discussions of computational creativity as a tool for enhancing AI ethics, generative models, and human-algorithm collaboration.
Andrew is interested in understanding the psychological and sociological effects technology has on moral reasoning and character and hopes to provide a framework to better understand these connections.
This project explores how AI can help "augment" human reasoning, using machine learning and natural language processing models.
This project incorporates a psychological perspective on AI ethics to inform the moral/ethical design of AI systems in the near future, paying special attention to human moral judgments and intuitions towards artificial agents.
This project explores the responsible usage of AI, in particular, learning to identify and mitigate bias and algorithmic (un)fairness. It looks to prevent the potential reinforcement and amplification of harmful existing human biases with applications to credit access and the financial industry.
This project explores the ethical and political implications of digitalisation and datafication in higher education. In particular, this research investigates the changing experiences and subjectivities of students in contemporary UK universities amid the growing importance of digital technologies, data and platforms.
This research develops a comprehensive responsibility framework to enable stakeholders involved in the design, development, and deployment of decision-making algorithms and autonomous systems to effectively govern and take responsibility for the outcomes of those systems in fields such as health, robotics, and finance.
This project explores what the philosophy of science, and in particular the history and philosophy of measurement, can contribute to our understanding of the social and ethical implications of machine learning.
This research investigates models of collective data governance in agricultural ecosystems, evaluating them through a lens of power and inclusion and considering their broader implications for responses to the climate crisis.
This project aims to synthesise philosophical bioethics and public deliberative processes, to arrive at recommendations for the ethical use of AI in healthcare resource allocation.
Faculty Research
Zeerak Talat’s research centres on whether, and how, machine learning and AI technologies can be used towards fair and equitable futures, and on what such futures should look like if we must live with these technologies in our societies.
Professor Vallor’s research addresses the complex and rapidly changing impact of new technologies on human moral and intellectual capabilities and virtues of character.
John Zerilli is a philosopher with particular interests in cognitive science, artificial intelligence, and the law.
Cristina Richie is a Lecturer in Ethics of Technology. Her research is driven by a global vision of clean, just, and ethical health care and technology, pursued through the development of strategies and policies.
Postdoctoral Research
Fabio Tollon is a postdoctoral researcher working as part of the BRAID (Bridging Responsible AI Divides) Programme at the University of Edinburgh.
A postdoctoral research collaboration between the Centre for Technomoral Futures and the School of Divinity at the University of Edinburgh.