How Machine Learning “Measures” and What We Can Learn from It
Project dates (estimated):
Sep 2021 – Aug 2025
Name of the PhD student
Alexander Martin Mussgnug
Supervisors:
Shannon Vallor – School of Philosophy, Psychology and Language Sciences (Philosophy)
Arno Onken – School of Informatics (Data Science for Life Sciences)
Sabina Leonelli – University of Exeter, School of Sociology, Philosophy and Anthropology
Project aims:
This project explores what the philosophy of science, and in particular the history and philosophy of measurement, can contribute to our understanding of the social and ethical implications of machine learning. I critically examine the notion of prediction in machine learning and ask what would change if we instead interpreted some machine learning applications as measurements.
Disciplines and subfields engaged:
AI Ethics
Philosophy of Science
Supervised Machine Learning
Research Themes:
Ethics of Algorithms
Algorithmic Transparency and Explainability
Ethics of Human-Machine Interactions
Ethics of Knowledge Augmentation
Ethics and Politics of Data
Ethics of Data Science and Data Practice
Emerging Technology and Human Identity
AI, Automation and Human Wisdom
Related outputs:
Presented on the relationship between AI ethics and the philosophy of science on the Mobilising Technomoral Knowledge panel at the Society for Philosophy & Technology Conference 2023 in Tokyo.
Presented on the use of operational definitions in machine learning at the British Society for the Philosophy of Science Annual Meeting 2023 and at the workshop “Measuring the Human: New Developments in the Epistemology of Measurement in the Human Sciences”.
Delivered an invited talk on the relationship between AI and conceptual freedom at the workshop AI and the Christian Churches, 2023.
The predictive reframing of machine learning applications: good predictions and bad measurements, Alexander Martin Mussgnug, European Journal for Philosophy of Science 12(3), 2022 🔗