2022-2023 Year in Review

As the Centre’s third year comes to a close, we stand on the brink of an exciting new phase of maturity. Thanks to our remarkable early growth in spite of the pandemic’s challenges, the Centre is about to meet all of its initial milestones.

In addition to our full cohort of 11 interdisciplinary PhD students, we have welcomed more than 25 Centre affiliate researchers at the University, along with an incredibly diverse and talented group of over 25 MSc students who will join our long-awaited education programme in the 2023-2024 academic year. The Centre has also gained UK-wide and international prominence with the launch of BRAID, a major three-year research programme in Responsible AI led by our Director, Professor Shannon Vallor, alongside Professor Ewa Luger.

Our PhD researchers are already publishing in peer-reviewed journals, presenting at international conferences, organising influential workshops in their fields, and, as interns and policy fellows, helping UK government bodies to better understand AI and data and the implications of their AI regulation and policy choices. Their success has attracted even more talented academic staff to the Centre, such as our new Chancellor’s Fellow in the Law School, John Zerilli, as well as our new Lecturer, Cristina Richie, and our new postdoctoral researcher on the BRAID programme, Fabio Tollon.

Looking ahead, the coming year will be one of intense visibility for the Centre. For example, on 15 September the BRAID programme launched in London at the BBC Radio Theatre, featuring 200 guests, a high-profile keynote from Dr Rumman Chowdhury (former head of Twitter’s META team), three distinguished panels with industry, policy and academic leaders in responsible AI, and a networking reception with ground-breaking artists working with AI as a medium and subject. We are also planning to host our first Flagship Lecture, as well as many other thought-provoking events.

Keep reading for a more in-depth look at the past year at the CTMF!

Professor Shannon Vallor, Director of the Centre for Technomoral Futures

In November 2022, Professor Vallor won a £3.5 million bid to co-direct the ‘Bridging Responsible AI Divides (BRAID)’ programme, alongside Professor Ewa Luger, co-Director of the Institute for Design Informatics. Further funding was awarded to the project in January to create an ongoing policy fellowship programme with the Department for Culture, Media and Sport, as well as a new AHRC fellowship programme to embed responsible AI experts in organisations. There are more details on this crucial work on the UK Research and Innovation blog.

As part of her work on UKRI Trustworthy Autonomous Systems (TAS) projects, Professor Vallor hosted a TAS Workshop on 10 July 2023, which resulted in the creation of the Edinburgh Declaration on Responsibility for Responsible AI, intended to start a conversation about what matters most when we talk about ‘Responsible AI’ and responsibility for autonomous systems. This took place ahead of the First International Symposium on Trustworthy Autonomous Systems, where she took part in the TAS Symposium Panel on Responsible AS, as well as the TAS All-Hands Panel on AI Policy and Regulation. Professor Vallor also discussed her work on the TAS Responsibility Podcast earlier this year.

In addition to her work on these vital research grants, Professor Vallor was a keynote speaker at Turing Fest on 29 June 2023 and was recently appointed by the Nuffield Foundation to the Oversight Board of the Ada Lovelace Institute. The Oversight Board leads the strategic development of the Ada Lovelace Institute, whose mission is to make data and AI work for people and society, and is responsible for its long-term sustainability.

Read more about Professor Vallor


Dr Atoosa Kasirzadeh, Chancellor’s Fellow

Dr Kasirzadeh was the Principal Investigator on a grant from the Alan Turing Institute on ‘New Perspectives on AI Futures’, supporting a series of workshops held in Spring 2023. Three half-day, hybrid workshops were organised to surface new approaches to human flourishing with AI, bringing together expertise, insights and provocations from regions, sectors, and stakeholder groups often left out of high-level discussions of AI and our futures. The New Perspectives project was featured at the Edinburgh Futures Conversation event on AI Futures as well as at the Scottish AI Summit.

Earlier this year, Dr Kasirzadeh was named 2023 Department for Digital, Culture, Media and Sport (DCMS) BRAID Senior Policy Fellow on Social and Policy Implications of Generative AI.

Read more about Dr Kasirzadeh


Visiting Researchers

This past year, the CTMF hosted two visiting researchers:

  • Elena Walsh, November 2022-February 2023

Dr Walsh is a Lecturer in Philosophy at the University of Wollongong, Australia. She works on emotion and emotional dispositions, drawing especially on dynamical systems theory, life history theory, and predictive processing models of mind.

  • Judith Simon, April 2023

Professor Simon is Full Professor for Ethics in Information Technologies at the Universität Hamburg. She is interested in ethical, epistemological and political questions arising in the context of digital technologies, in particular in regard to big data and artificial intelligence.

Read more about our Visiting Researchers


Two new PhD researchers joined us in the 2022-2023 academic year: Charlotte Bird and Andrew Zelny.


Charlotte Bird

Charlotte’s research focuses on ethical AI and computational creativity in human creative spaces. She received her MSc in Computer Science from Newcastle University, with a thesis focusing on neural networks. Prior to this, she completed a BA in Literature with a focus on morality and ethics in children’s literature, and she spent the year before her PhD working as a data analyst. Her current work spans interdisciplinary discussions of computational creativity as a tool for enhancing AI ethics, generative models, and human-algorithm collaboration.

Read more about Charlotte’s research

I am an art enthusiast and believe that generative AI should only enhance, not undermine, human art. I am practically interested in how users engage with systems: what do they do and how do they do it? I believe that AI ethics as a field has much to offer the arts in terms of practical solutions to problems, and I wish to make those connections.
— Charlotte

Andrew Zelny

Andrew’s academic interests centre on the intersection of ethics, psychology, and technology, and how these fields come together to influence technological innovation and the development of moral character. His work at the Edinburgh Futures Institute examines the mediating role technology plays in the Aristotelian virtue of phronesis (practical wisdom) and argues for the mindful design and use of emerging technologies in order to promote that virtue. He is interested in understanding the psychological and sociological effects technology has on moral reasoning and character, and hopes to provide a framework to better understand these connections.

Read more about Andrew’s research

I think the question of living wisely in a technologically dominated age is becoming an increasingly dire question that we need to start addressing. How can we live well with the technologies we are developing today, and are there technologies we should pursue or abandon in the name of both individual and societal flourishing? […] How can we develop enriching, meaningful technologies that aid our well-being rather than diminishing it?
— Andrew

Continuing PhD Researchers

Our PhD Researchers had many significant accomplishments throughout the past year. A few of the highlights include:

  • Bhargavi Ganesh received the Best Paper Award at the We Robot 2022 conference for the paper, ‘If It Ain’t Broke Don’t Fix It: Steamboat Accidents and their Lessons for AI Governance,’ co-authored with Stuart Anderson and Shannon Vallor. Read Bhargavi’s paper.

  • Claire Barale won the Best Paper Award at the International Conference on Artificial Intelligence and Law 2023 Doctoral Consortium for her paper, ‘Empowering Refugee Claimants and their Lawyers: Using Machine Learning to Examine Decision-Making in Refugee Law.’ Read Claire’s paper.

  • Jamie Webb co-organised the Postgraduate Bioethics Conference with Emma Nance. This conference took place at the University of Edinburgh in June 2023, and was focused on the future of bioethics. Read more about PGBC 2023.

  • Five of our PhD researchers presented on the Mobilising Technomoral Knowledge panel at the Society for Philosophy & Technology Conference 2023 in Tokyo:

    • Aditya Singh and Bhargavi Ganesh co-presented on responsibility in data and AI supply chains, focusing on the role of data brokers

    • Alex Mussgnug focused on the relationship between AI ethics and the philosophy of science

    • Andrew Zelny looked at the relationship between technologically mediated phronesis and technomoral change

    • Yuxin Liu presented on the impossibility of AI Moral Advisors functioning as ideal observers

  • Charlotte Bird presented ‘Evaluating Prompt Engineering as a Creative Practice’ at ICCC 2023.

  • Joe Noteboom presented ‘Exploring University Students’ Lived Experiences of Datafication, Data Literacies and the Potential for Collective Data Governance in UK Higher Education’ at the Data Justice Conference at Cardiff University on 19 June 2023.

  • Savina Kim presented ‘The Double-Edged Sword of Big Data and Information Technology for the Disadvantaged: A Cautionary Tale from Open Banking,’ a paper she co-authored with Galina Andreeva and Michael Rovatsos, at the Credit Scoring and Credit Control Conference 2023. Read Savina’s paper.

To read more about our PhD Researchers' accomplishments, head to our PhD Research page, where you can learn more about each of their projects and research outputs.

This year, we opened applications for our new Master’s programme in Data and Artificial Intelligence Ethics, and we are pleased to say that there has been substantial interest. In the coming academic year, we will be welcoming over 25 new MSc students!

Co-developed by academic staff Shannon Vallor, Atoosa Kasirzadeh, John Zerilli and James Garforth, the MSc in Data and AI Ethics programme meets the urgent demand for interdisciplinary skills and knowledge in the ethical design, use and governance of artificial intelligence and other data-intensive technologies.

Find out about the MSc in Data and AI Ethics

We collaborated on many successful events throughout 2022-2023, including an Edinburgh Futures Conversation on the Future of Artificial Intelligence, a workshop on AI for the Next Generation: Realising an Inclusive Vision for Scottish AI at the Scottish AI Summit, and two Technomoral Conversations.

Our signature event series, Technomoral Conversations, gathers academics and experts from diverse fields for a panel discussion of topical issues relating to the ethical impacts of artificial intelligence and other data-driven technologies on society. This year, we featured Technomoral Conversations on Sustainability and Artificial Intelligence as well as Technologically Mediated Intimacy.

In addition to these events, we worked to expand our external engagement in other ways. This included our Programme Manager, Dr Gina Helfrich, taking part in a panel discussion on ‘ChatGPT: Next Big Thing or Passing Trend?’, as well as being invited by The Scotsman to contribute a column, ‘ChatGPT: Six reasons why we should all be wary of this kind of AI,’ which appeared on the paper’s front page on 15 February 2023.

Dr Helfrich also supported the Scottish Council for Voluntary Organisations (SCVO) with its request for input on guidelines for the use of generative AI by voluntary organisations; SCVO has over 3,500 member organisations, with a subset of about 2,000 looking for digital advice. She also participated in SCVO’s July 2023 ‘DigiShift’ virtual discussion event on generative AI.

Catch up on our past events

Changes to the Team

This year, we welcomed Dr John Zerilli and Jordan Watson to the CTMF team, while SJ Bennett and Christie Hewitt moved on to new positions.


Dr John Zerilli

Dr John Zerilli, Chancellor’s Fellow in Data, AI and the Rule of Law, joined the CTMF team earlier this year. Dr Zerilli is a legal scholar and philosopher with interests in cognitive science, artificial intelligence, and the law.

Read more about Dr Zerilli

Jordan Watson

Jordan Watson joined the team in April 2023 and provides administrative, communications and event support for the Centre for Technomoral Futures.

Read more about Jordan

SJ Bennett

SJ Bennett has moved to a new role as Research Associate for the Alan Turing Institute. During their time at the CTMF, SJ was a postdoctoral researcher in a research collaboration between the CTMF, the Data for Children Collaborative and UNICEF.

Read more about SJ Bennett

Christie Hewitt

Christie Hewitt shifted to her new role as Administrative Assistant for the Edinburgh Futures Institute in April 2023.


If you’d like to keep up to date with the Centre for Technomoral Futures, sign up to our mailing list!
