AI and Society Seminar Series

These seminars consider AI's social impacts in a range of areas, and discuss how AI can best be overseen to maximise positive impacts and minimise negative ones.

They are open to anyone who is interested in attending. We don't presuppose any technical background: if we present AI systems, we do it 'from scratch', in ways that should be accessible to everyone.

Discussing AI's social impacts is a highly interdisciplinary task, and our speakers and participants come from many different backgrounds in academia, government, industry and NGOs.

The series organiser is Ali Knott (ali.knott@vuw.ac.nz): please email Ali if you'd like to be added to the mailing list.

Details of previous years' seminars can be found here: 2023, 2024

Seminars are at 4:00-5:30pm in Trimester 1, and at 4:30-5:30pm in Trimester 2 (unless otherwise specified).

Trimester 1:

Date Venue Speaker Title/topic  
28 Feb Rutherford House, RH1209/1210 Ali Knott

What happened at the Paris AI Summit

I attended the Paris AI Action Summit earlier this month (the follow-up to the 2023 Bletchley Park summit), along with some associated events: the first conference of the International Association for Safe and Ethical AI, and the AI, Science and Society conference. In this seminar I'll report back on those events.

 
7 March Rutherford House, RHMZ03 (note different room!) Ali Knott

A summary of what’s in the 2025 International AI Safety Report

In this seminar I'll summarise the main themes of the International AI Safety Report, which was commissioned at the Bletchley Park summit and presented at the Paris summit. The report was basically ignored in the leaders' communiqué, which many see as the summit's main failing, but we won't ignore it in our seminars!

Video
14 March Rutherford House, RHMZ03 (note different room!) Okan Tan (VUW Policy Adviser), Stella McIntosh (VUW Academic Integrity), Robert Stratford (VUW Academic Office)

Te Herenga Waka's policy and guidance on Generative AI

Okan will provide an overview of VUW's new Generative Artificial Intelligence Policy, highlighting its purpose, key provisions, and implications for staff and students. He will also delve into the development process, including the complexities encountered, and explain in general terms the importance of following the University’s standard policy process in overcoming challenges collectively as a community.

Stella and Robert will outline the policy and guideline journey of AI in learning and teaching at Te Herenga Waka. They will focus especially on how the University has attempted to critically embrace Generative AI for learning and teaching, from a limited resource base. Starting from the introduction of ChatGPT in February/March 2023, the discussion will work through the challenges of establishing a University approach to learning and teaching with Artificial Intelligence, initially through staff and student guidelines. It will also cover the ongoing work to develop guidelines for the use of Gen AI for research students, and the next phase of the University's work, which will include more specific guidance on assessment in the context of Gen AI.

 
21 March Rutherford House, RH1209/1210 Rebekah Bowling (Kāi Tahu, Kāti Māmoe, Waitaha, Pākehā; PhD candidate and Assistant Lecturer in Criminology at VUW).

New Tech, Old Tactics: Facial Recognition and the Policing of Māori

Facial recognition technology (FRT) is increasingly used within New Zealand's justice sector, despite international trends showing that its application disproportionately impacts Indigenous and racialised communities. While marketed as 'objective' and 'race-neutral', FRT often struggles with accuracy for darker skin tones and for Indigenous features such as facial moko/tattoos. These challenges raise important questions about Māori Data Sovereignty, tikanga, tapu, and the ethical implications of using Māori faces in global databases without consent, particularly amidst limited regulation and consultation with Māori communities.

This kōrero draws on my current PhD research, exploring how FRT intersects with historical practices of surveillance and control under settler colonialism. By analysing its use in policing, government services, and retail settings, I highlight the importance of transparency, accountability, and community engagement in the deployment of such technologies - especially for Māori here in Aotearoa. This discussion invites reflection on how we can ensure FRT aligns with equitable societal values, and consideration of whether some technologies might perpetuate inequities rather than resolve them.

 
28 March Rutherford House, RH1209/1210 Andrew Chen (Chief Advisor for Technology Assurance, NZ Police)

Acceptable Use of Generative AI at NZ Police

Andrew will present on the work of the Technology Assurance team, and the new generative AI policy developed for NZ Police. The policy seeks to establish bounds and controls for using genAI, while allowing room for staff to realise the benefits of these tools. The policy also features a risk matrix applied to genAI that will help with evaluating tools and use cases.

 
4 April Rutherford House, RH1209/1210 Chris McGavin (visiting scholar at Engineering and Computer Science, VUW)

Contextualising AI Harms Through a Shared Humanity

In this seminar Chris will explore the topic of AI harm through critical theory, legal theory and analogy. The aim of the seminar is to take a multi-disciplinary approach to contextualising AI harms, and to illustrate that while some of these harms are genuinely new and novel, many are issues the humanities and legal fields have grappled with for years.

 
11 April Rutherford House, RH1209/1210 Simon Wright (Chair of Trust Democracy)

Pol.is and the Quest for Public Deliberation at Scale

Democracies worldwide are at a crossroads. As governments grapple with challenges like climate change, poverty, housing, and equity, traditional democratic processes are proving incapable of meeting them. The need for innovations that enable meaningful public participation has never been greater.

Pol.is is one such innovation. This AI-powered tool facilitates large-scale public discussions that are safe, inclusive and insightful. By analysing real-time input from participants, Pol.is identifies areas of consensus and divergence, and encourages participant reflection and the sharing of ideas. The platform gained prominence through its use by Audrey Tang in the vTaiwan policy process and has since been applied globally in diverse contexts.

Since 2016, I have used Pol.is to facilitate discussions on complex issues, including obesity, taxation, affordable housing, biodiversity, transport and GP burnout. In this seminar, I will share and reflect on my experiences with Pol.is and explore how AI might be used to help facilitate constructive public dialogue and deliberation at scale.

 
18 April Mid-trimester break: no seminar!
25 April Mid-trimester break: no seminar!

Weds 30 April, 4pm (note unusual day!) Laby Building, LBLT118 (note this talk is at the Kelburn campus!) Sir Peter Gluckman ONZ KNZM FRS FMedSci FRSNZ (President, International Science Council; Chair, Science System Advisory Group; Koi Tū: Centre for Informed Futures, Auckland)

Whither artificial intelligence: science, government, industry and society?

Koi Tū and its predecessors have long been engaged in considering the impacts of digital technologies on society. In our work for the OECD's Going Digital project (2015-16), we took the lead on exploring the impact on institutions and on self, social and civil life. We tried to apply that work to the issues surrounding the government's initial steps towards big data with the IDI, but the essential need for an independent oversight mechanism, absent then, remains, and has only escalated. Through my roles at the International Science Council, Hema Sridhar and I developed a framework to assist policy makers in dealing with rapidly emerging technologies such as AI – a framework which has been part of discussions from Geneva to New York.

It was during the Science System Advisory Group's deliberations that we came to focus on several questions, which will be the primary focus of this presentation. We start from the observation that in advanced technologies we will never be fully sovereign, and that strategic partnerships will be necessary both within and beyond our shores. First, science itself and education are being changed massively by AI, and we are only at the start of that journey. Secondly, government itself will be a large and growing user of AI. The use of AI in policy development is rapidly emerging overseas, but the systems, oversight and training need a whole-of-government approach. Much can be achieved with big data in policy making and in evidence review that would make a large difference to the efficiency and effectiveness of government, well beyond service provision alone. Thirdly, in the defence and security space, including the financial sector, the potential of AI and quantum technologies to change the whole basis of these sectors is real, and we are poorly prepared relative to other countries. Fourthly, the potential of advanced technologies such as AI to enhance current industries and to foster new segments is high, but our national investment has been poor and not strategically coordinated.

In sum, much of our activity is a set of individual efforts; the need to find ways to bring capacities and skills together, without stifling competition and innovation, is at the heart of the Science System Advisory Group's two reports and of the recommendations we will make as the process concludes. Exploiting AI cannot be seen as different, at least in principle, from any other path of technological research and exploitation. The role of the social sciences in ensuring its appropriate use is key.

 
9 May Rutherford House, RH105 (note different room!) Denny Kudrna (School of Government, VUW)

AI-supported academic writing: insights from in-class experiments

Essay assignments, long-standing pillars of social science education, have been fundamentally undermined by generative AI. While there is no definitive way to redesign them, ongoing experiments worldwide provide clues about what works and what doesn't. This presentation reviews emerging best practices and reports insights from courses at VUW and elsewhere.

 
16 May Rutherford House, RH1209/1210 Terry Flew (Professor of Digital Communication and Culture, Media and Communications, University of Sydney; Australian Research Council Laureate Fellow)

AI Across the Pond 1:

AI and Communications: New Machines and New Concepts

In this presentation I will consider ways in which the rapidly growing use of Artificial Intelligence (AI) by consumers as well as industry is transforming how we think about communications as both a social practice and as an academic field. I will give particular attention to three issues: the changing status of machines as communications actors and not simply platforms for human communication; the implications for trust from a growing reliance upon automated information and decision-making systems; and how global communications as a field of study may be challenged by the changing relationship between data and geopolitics.

 
23 May Rutherford House, RH1209/1210 Lorraine Finlay (Australian Human Rights Commissioner), Anne Hollonds (Australian Children's Commissioner)

AI Across the Pond 2: Australia's forthcoming ban on social media use for under-16s

 
30 May Rutherford House, RH1209/1210 TBA