Inspiring the next generation: Exploring robotics, AI, and ethics with high school students

At the Centre for Technomoral Futures, we are committed to engaging the public in meaningful conversations about the ethical implications of emerging technologies. Recently, one of our Senior Research Affiliates, Professor Burkhard Schafer, led a session with a group of pupils at Balerno High School in Edinburgh to explore the intersection of robotics, AI, and ethics. Here he shares his thoughts and learnings from the experience. 


Could you tell us a bit about your research and how it connects with this topic? 

Our research spans a range of projects involving robotics and the law, covering everything from medical AI and autonomous vehicles to generative AI’s impact on creative industries, and the ethics of lethal autonomous weapons. Across all these areas, we see a mix of very high hopes and equally prominent fears about the future of AI. As Professor Shannon Vallor, the Centre’s Director, suggests in her latest book, AI often serves as a "mirror" reflecting our deepest desires, anxieties, and aspirations — a concept that resonates through historical myths like Pandora’s box and the Golem of Prague. 

One of our key projects, Edu4Standards, is part of an EU-funded initiative aiming to demystify the role of standards in AI regulation. Collaborating with the University of Graz, we are developing educational materials not just for universities, but for students at every level, including high school. Standards are crucial in building rational trust in AI, yet they are often overlooked in education. By introducing these concepts early on, we hope to empower young people to critically engage with the technologies that will shape their futures. 

What inspired you to engage with high school students on this topic? 

Firstly, from a personal perspective, today’s high school students are tomorrow’s university students — potentially mine! The divide between STEM and non-STEM fields is still prevalent, with students often deciding early on that certain subjects are not for them. This limits their future choices and deprives fields like law of the scientific literacy needed to regulate emerging technologies effectively. I wanted to communicate some of the excitement that comes from working across disciplinary boundaries, and to highlight the kinds of challenges that affect students in their daily lives but can only be tackled when scientists, lawyers, and philosophers work together. 

The second reason relates directly to our research focus. Historically, the desire to protect children has driven many regulatory decisions, from internet safety to the earliest child labour laws. However, while children are often at the centre of these discussions, their voices are rarely heard. That is something we are trying to change through our research; in fact, two of my PhD students are actively working to give children agency and voice in technology regulation. 

How did you structure the session with the pupils, and what activities or discussions were most effective? 

I started by sharing my own journey, from high school to becoming a law academic with an interest in technology. I then introduced them to the topic of autonomous driving and its regulation. This is an area that will affect them in a variety of ways: is it safe to cross a road when a driverless car approaches? Do cyclists need to adjust their behaviour when sharing the road with driverless cars? Could children become much more mobile and independent when autonomous vehicles (AVs) finally become ubiquitous (e.g. an AV collecting you from a party without your parents having to stay up)? And if so, who becomes responsible for all the things a parent would normally do in addition to driving, such as ensuring all seat belts are fastened? 

This led to discussions about the concept of “regulation by design,” emphasising the need for scientists, engineers, philosophers, and lawyers to work closely together. Only when we have a shared vision of a good society and a clear notion of what behaviour is lawful in such a society, can we build machines that are ethically aligned with us. 

We then divided into groups to explore ethical dilemmas involving medical and military robots. The students were highly engaged and produced thoughtful design parameters for these technologies, finally discussing whether such requirements can be resolved by engineers alone or whether we need the law to prohibit or demand certain affordances. 

Were there any interesting questions or perspectives raised by the students during the session? 

The students were extremely well prepared, intellectually curious, and engaged; it was a real pleasure talking to them. One of the most thought-provoking questions was about whether an autonomous vehicle should ever break the law. For instance, if there is an injured passenger who urgently needs medical attention, should the car be allowed to run a red light? This opened a discussion about when — if ever — it is morally justifiable for machines to make decisions that override legal norms. It was clear that the students were grappling with the complexities of balancing ethical and legal considerations, reflecting their own journeys towards developing independent thinking and agency. 

What do you hope the students took away from the session? 

I hope they left with two key messages. First, that the future of technology is not inevitable — they have the power to shape it. Whether they pursue careers in law, engineering, or any other field, they can influence how technology is used and regulated. 

The second message is more personal. I once dreamed of becoming a wildlife documentary maker or a forest ranger, not a legal academic working with computer scientists. In fact, for a long time, I did everything in my power to avoid both law and technology. I hope the students realise that they do not have to have everything figured out right now — they should follow what excites them and remain open to unexpected opportunities. 

Feedback from the classroom 

We also spoke to the teachers and students involved in the session to gather their thoughts: 


Teacher’s perspective: “As a Religious, Moral and Philosophical Studies (RMPS) department, we find that discussing AI and robotics helps students engage with ethical theory in a way that feels relevant to their everyday lives. These topics encourage the development of critical thinking skills across a variety of subjects.” 

Student feedback:

“I found the session really interesting, especially when we got to discuss real-life scenarios.”

“I didn’t realise how advanced robots are. I used to think AI was just about phones, but it’s in so many other areas.”

“The interactive tasks were great — it made learning about these topics much more engaging.” 


By fostering these conversations, we aim to empower young people to think critically about technology and its role in shaping society. Engaging with them at this early stage not only broadens their horizons but also enriches our own research by bringing fresh perspectives into the discussion. 

If you are interested in inviting a member of the Centre for Technomoral Futures to your classroom to support the curriculum, please get in touch to discuss your needs in more detail.

About the contributor:

Professor Burkhard Schafer

Professor Burkhard Schafer (CTMF Senior Research Affiliate) is Professor of Computational Legal Theory, with a particular interest in the use of technology in the justice system, legal responses to technological developments, and the changing vision of the just society under the rule of law.
