The role of machine learning generated health categories in epistemically (un)just healthcare practices
Project dates (estimated):
September 2024 - August 2027
Name of the PhD student:
Sasha Lee Smit
Supervisors:
Emily Postan – Edinburgh Law School
Gill Haddow – School of Social and Political Sciences
Project aims:
This project aims to explore how newly developed AI and machine learning (ML) tools may affect the epistemic climate in which both healthcare users and professionals participate. Through analysis and assessment of the theoretical concept of epistemic injustice, Sasha seeks to identify how epistemic injustice may be linked to the implementation of ML-generated health categories, and how current conceptualisations of epistemic justice and injustice may need to be expanded in order to understand this link. Further, the project aims to identify ways to develop and implement measures that combat these injustices without rejecting the benefits of ML in healthcare provision.
Disciplines and subfields engaged:
Philosophical bioethics
Social and epistemic justice
Human-Computer Interaction
AI Ethics
Research Themes:
Emerging Technology and Human Identity
AI, Automation and Human Wisdom
Emerging Tech and Human Autonomy
Emerging Technology, Health, and Flourishing
Emerging Tech and Human Flourishing
Ethics and Politics of Data
Data Justice and Data Violence