Moral Judgments Towards Artificial Intelligence Systems
This project brings a psychological perspective to AI ethics to inform the moral and ethical design of near-future AI systems, with particular attention to human moral judgments and intuitions toward artificial agents.
A Responsibility Framework for Governing Trustworthy Autonomous Systems
This research develops a comprehensive responsibility framework that enables stakeholders involved in the design, development, and deployment of decision-making algorithms and autonomous systems to govern those systems effectively and take responsibility for their outcomes in fields such as health, robotics, and finance.
How Machine Learning “Measures” and What We Can Learn from It
This project explores what the philosophy of science, and in particular the history and philosophy of measurement, can contribute to our understanding of the social and ethical implications of machine learning.