Report on the Staff-Student Workshop on Generative AI and the University of Edinburgh

On 22 February 2024, the Edinburgh Futures Institute hosted a staff-student workshop on the implications of generative AI systems like ChatGPT for the University of Edinburgh, organised alongside the University’s AI & Data Ethics Advisory Board.

More than 60 participants, comprising students and staff from across the University, came together to explore, experiment and deliberate on what generative AI might mean for the University. Participants arrived with a wide range of familiarity and experience with generative AI, from daily or weekly users to people who had tried AI tools only once or twice.

The morning session featured an introduction to generative AI and a demonstration of an interface being developed by EDINA, the University’s enterprise software division, to give students and staff access to ChatGPT and other generative AI systems. This was followed by 5-10 minute ‘lightning talks’ by staff and students from a range of Schools, including Engineering; Physics & Astronomy; Informatics; ECA; Literature, Languages and Cultures; and Social and Political Science. The talks explored applications and experiments with generative AI in teaching and research, as well as wider reflections on people’s hopes and fears, highlighting important questions concerning learning, representation, labour, knowledge production and intellectual property, among other things. A recording of the lightning talks can be viewed here: https://edin.ac/4djXlKO

In the afternoon, Dr James Stewart (Science, Technology and Innovation Studies) facilitated a hands-on session that gave participants the chance to try out the language model interface being developed by EDINA. Participants experimented with the system and discussed how to mitigate the risks generative AI poses to assessment, and how generative AI might be used to support students.

Key messages from the workshop:

There is a great deal of work related to generative AI going on in different parts of the University, with seemingly little coordination. Colleagues shared thoughtful and creative uses of generative AI in different disciplines and contexts, but for the most part people working in these areas were not aware of similar projects going on in other Schools. The workshop provided a much-needed forum for people to come together to share and deliberate, and there was interest in more such opportunities.

Generative AI could be useful for helping students (and staff) navigate the University and find key information. Students experimented with creating chatbots to help users find information on the University website and access advice and support. They felt that this could be helpful, but that it would be essential for the information provided by the chatbot to be accurate and reliable.

Staff need support and resources to redesign teaching and assessment in the context of generative AI. Workshop participants explored the assessment challenges posed by generative AI and agreed that, while generative AI presents opportunities for enhancement, teachers need support to reimagine and redesign their assessments for a world with generative AI.

Staff and students need clear guidance on the appropriate use of generative AI for academic work. Participants felt that a more joined-up approach to the governance of generative AI was needed, and that it would be helpful to have all relevant policies in one place. In the context of education, while participants felt that a one-size-fits-all approach would be inappropriate, it was suggested that there should be transparent generative AI policies at the course level.

Participants were surprised by, and concerned about, plans to roll out the ELM chatbot interface to the entire University so quickly. Participants raised significant concerns related to cost, privacy and data protection, the logging and monitoring of inputs to the system, the moderation and validity of outputs from the system, energy consumption, a lack of guardrails against potential misuse, and a lack of training for staff and students on the uses and limitations of generative AI systems. While colleagues agreed that the system could potentially benefit the University community, it was felt that a more robust and careful approach to governance could help mitigate the ethical, reputational and financial risks it poses.

About the contributor:

Joe Noteboom is a PhD Fellow at the Centre for Technomoral Futures. His research project, ‘The University of Data: Ethical and Social Futures of Data-Driven Education,’ explores the ethical and political implications of digitalisation and datafication in higher education.

Throughout his studies, he has worked as an intern on the University of Edinburgh’s AI and Data Ethics Advisory Board.
