AI and Society Seminar Series

These seminars consider AI's social impacts in a range of areas, and discuss how AI can best be overseen to maximise positive impacts and minimise negative ones.

They are open to anyone who is interested in attending. We don't presuppose any technical background: if we present AI systems, we do it 'from scratch', in ways that should be accessible to everyone.

Discussing AI's social impacts is a very interdisciplinary task, and our speakers and participants come from many different backgrounds, in academia, government, industry and NGOs.

The series organiser is Ali Knott (ali.knott@vuw.ac.nz): please email Ali if you'd like to be added to the mailing list.

Details of previous years' seminars can be found here: 2023, 2024

Seminars are at 4:00-5:30pm.

Trimester 1 (Jump to Trimester 2)

Date Venue Speaker Title/topic  
28 Feb Rutherford House, RH1209/1210 Ali Knott

What happened at the Paris AI Summit

I attended the Paris AI Action Summit earlier this month (the follow-up to the 2023 Bletchley Park summit), along with some associated events: the first conference of the International Association for Safe and Ethical AI, and the AI, Science and Society conference. In this seminar I'll report back on those events.

 
7 March

Rutherford House, RHMZ03

*Note different room!*

Ali Knott

A summary of what’s in the 2025 International AI Safety Report

In this seminar I'll summarise the main themes in the International AI Safety Report, which was commissioned at the Bletchley Park summit and presented at the Paris summit. This report was largely ignored in the leaders' communiqué, which many see as the summit's main failing - but we won't ignore it in our seminars!

Video
14 March

Rutherford House, RHMZ03

Note different room!

Okan Tan (VUW Policy Adviser), Stella McIntosh (VUW Academic Integrity), Robert Stratford (VUW Academic Office)

Te Herenga Waka's policy and guidance on Generative AI

Okan will provide an overview of VUW's new Generative Artificial Intelligence Policy, highlighting its purpose, key provisions, and implications for staff and students. He will also delve into the development process, including the complexities encountered, and explain in general terms the importance of following the University’s standard policy process in overcoming challenges collectively as a community.

Stella and Robert will outline the policy and guideline journey of AI in learning and teaching at Te Herenga Waka. They will focus especially on how the University has attempted to critically embrace Generative AI for learning and teaching, from a limited resource base. Starting from the introduction of ChatGPT in February/March 2023, the discussion will talk through the challenges of establishing a University approach to Learning and Teaching with Artificial Intelligence, initially through staff and student guidelines. It will also cover the ongoing work to develop guidelines for the use of Gen AI for research students, and the next phase of the University's work, which will include more specific guidance on Assessment in the context of Gen AI.

 
21 March Rutherford House, RH1209/1210 Rebekah Bowling (Kāi Tahu, Kāti Māmoe, Waitaha, Pākehā; PhD candidate and Assistant Lecturer in Criminology at VUW).

New Tech, Old Tactics: Facial Recognition and the Policing of Māori

Facial recognition technology (FRT) is increasingly used within New Zealand's justice system, despite international evidence that its application disproportionately impacts Indigenous and racialised communities. While marketed as ‘objective’ and ‘race-neutral’, FRT often struggles with accuracy for darker skin tones and Indigenous features such as facial moko/tattoos. These challenges raise important questions about Māori Data Sovereignty, tikanga, tapu, and the ethical implications of using Māori faces in global databases without consent, particularly amidst limited regulation and consultation with Māori communities.

This kōrero draws on my current PhD research, exploring how FRT intersects with historical practices of surveillance and control under settler colonialism. By analysing its use in policing, government services, and retail settings, I highlight the importance of transparency, accountability, and community engagement in the deployment of such technologies - especially for Māori here in Aotearoa. This discussion invites reflection on how we can ensure FRT aligns with equitable societal values and consider whether some technologies might perpetuate inequities rather than resolve them.

 
28 March Rutherford House, RH1209/1210 Andrew Chen (Chief Advisor for Technology Assurance, NZ Police)

Acceptable Use of Generative AI at NZ Police

Andrew will present on the work of the Technology Assurance team, and the new generative AI policy developed for NZ Police. The policy seeks to establish bounds and controls for using genAI, while allowing room for staff to realise the benefits of these tools. The policy also features a risk matrix applied to genAI that will help with evaluating tools and use cases.

 
4 April Rutherford House, RH1209/1210 Chris McGavin (visiting scholar at Engineering and Computer Science, VUW) Contextualising AI Harms Through a Shared Humanity

In this seminar Chris will explore the topic of AI harm through critical theory, legal theory and analogy. The aim of the seminar is to use a multi-disciplinary approach to contextualise AI harms, and to illustrate that, while some are genuinely new, many are issues that the humanities and legal fields have grappled with for years.

 
11 April Rutherford House, RH1209/1210 Simon Wright (Chair of Trust Democracy)

Pol.is and the Quest for Public Deliberation at Scale

Democracies worldwide are at a crossroads. As governments grapple with challenges like climate change, poverty, housing, and equity, traditional democratic processes are proving incapable of meeting them. The need for innovations that enable meaningful public participation has never been greater.

Pol.is is one such innovation. This AI-powered tool facilitates large-scale public discussions that are safe, inclusive and insightful. By analysing real-time input from participants, Pol.is identifies areas of consensus and divergence, and encourages participant reflection and the sharing of ideas. The platform gained prominence through its use by Audrey Tang in the vTaiwan policy process and has since been applied globally in diverse contexts.

Since 2016, I have used Pol.is to facilitate discussions on complex issues, including obesity, taxation, affordable housing, biodiversity, transport and GP burnout. In this seminar, I will share and reflect on my experiences with Pol.is and explore how AI might be used to help facilitate constructive public dialogue and deliberation at scale.
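As a rough illustration of how a tool in this space can surface agreement and disagreement, the sketch below clusters a toy participant-by-statement vote matrix and flags each statement as a consensus candidate or a divisive one. This is not Pol.is code: Pol.is's own pipeline (reportedly built on dimensionality reduction and clustering over votes) is far more sophisticated, and the data, cluster count and thresholds here are invented for the example.

```python
# Illustrative sketch only: clustering a small participant-by-statement vote
# matrix to surface opinion groups, loosely in the spirit of Pol.is.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# Rows = participants, columns = statements; 1 = agree, -1 = disagree, 0 = pass.
votes = np.array([
    [ 1,  1, -1,  0,  1],
    [ 1,  1, -1, -1,  1],
    [-1, -1,  1,  1,  0],
    [-1,  0,  1,  1, -1],
    [ 1,  1,  0, -1,  1],
    [-1, -1,  1,  0, -1],
])

# Project participants into 2D, then group them into opinion clusters.
coords = PCA(n_components=2).fit_transform(votes)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(coords)

# A statement on which all clusters lean the same way is a consensus candidate;
# statements whose cluster means point in opposite directions mark divergence.
for j in range(votes.shape[1]):
    means = [votes[labels == k, j].mean() for k in np.unique(labels)]
    kind = "consensus" if all(m > 0 for m in means) or all(m < 0 for m in means) else "divisive"
    print(f"statement {j}: cluster means {np.round(means, 2)} -> {kind}")
```

In practice, the interesting part is how such groupings are fed back to participants in real time to prompt reflection and the sharing of ideas, which goes well beyond what a sketch like this shows.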

Video (starts at 10:20!)
18 April   Mid-trimester break No seminar!  
25 April   Mid-trimester break No seminar!  

Weds April 30 4pm

Note unusual day!

Laby Building, LBLT118

Note this talk is at the Kelburn campus!

Sir Peter Gluckman ONZ KNZM FRS FMedSci FRSNZ (President, International Science Council; Chair, Science System Advisory Group; Koi Tū: Centre for Informed Futures, Auckland)

Whither artificial intelligence? Science, government, industry and society

Koi Tū and its predecessors have long been engaged in considering the impacts of digital technologies on society. In our work for the OECD's Going Digital project (2015-16), we took the lead on exploring the impact on institutions, the self, and social and civil life. We tried to apply that work to the issues surrounding the government's initial steps towards big data with the IDI, but the essential need for an independent oversight mechanism, which was absent then, remains, and has only escalated. Through my roles at the International Science Council, Hema Sridhar and I developed a framework to assist policy makers in dealing with rapidly emerging technologies such as AI – a framework which has been part of discussions from Geneva to New York.

It was during the Science System Advisory Group's deliberations that we came to focus on several questions, which will be the primary focus of this presentation. We start with the observation that in advanced technologies we will never be fully sovereign, and strategic partnerships will be necessary both within and beyond our shores. First, science itself and education are being changed massively by AI, and we are only starting on that journey. Secondly, the government itself will be a large and growing user of AI. The use of AI in policy development is rapidly appearing overseas, but the systems, oversight and training need a whole-of-government approach. There is much that can be achieved with big data in policy making and in evidence review that could make a large difference to the efficiency and effectiveness of government; it goes well beyond service provision alone. Thirdly, in the defence and security space, including the financial sector, the potential of AI and quantum technologies to change the whole basis of these sectors is real, and we are poorly prepared relative to other countries. Fourthly, the potential of advanced technologies such as AI to enhance current industries and to foster new segments is high, but our national investment has been poor and not strategically coordinated. In sum, much of our activity is a set of individual efforts, and the need to find ways to bring capacities and skills together without stifling competition and innovation is at the heart of the Science System Advisory Group's two reports and the recommendations we will make as the process concludes. Exploiting AI cannot be seen as different, at least in principle, from any other path of technological research and exploitation. The role of the social sciences in ensuring its appropriate use is key.

Video
9 May

Rutherford House, RH105

Note different room!

Denny Kudrna (School of Government, VUW) AI-supported academic writing: insights from in-class experiments

Essay assignments, long-standing pillars of social science education, have been fundamentally undermined by generative AI. While there is no definitive way to redesign essay assignments, ongoing experiments worldwide provide clues about what to do and what not to do. This presentation reviews emerging best practices and reports insights from courses at VUW and elsewhere.

 
16 May Rutherford House, RH1209/1210 Terry Flew (Professor of Digital Communication and Culture, Media and Communications, University of Sydney; Australian Research Council Laureate Fellow)

AI Across the Pond 1: AI and Communications: New Machines and New Concepts

In this presentation I will consider ways in which the rapidly growing use of Artificial Intelligence (AI) by consumers as well as industry is transforming how we think about communications as both a social practice and as an academic field. I will give particular attention to three issues: the changing status of machines as communications actors and not simply platforms for human communication; the implications for trust from a growing reliance upon automated information and decision-making systems; and how global communications as a field of study may be challenged by the changing relationship between data and geopolitics.

 
23 May Rutherford House, RH1209/1210 Lorraine Finlay (Australian Human Rights Commissioner)

AI Across the Pond 2: Australia's forthcoming ban on social media use for under 16s

Video

Trimester 2

Date Venue Speaker Title/topic  
11 July Rutherford House, RH105 *note different room!* Ali Knott (VUW Engineering and Computer Science)

AI and political conflict: Where are we? What can be done?

The world is particularly full of political conflict at the moment - both within countries (between supporters of different parties and ideologies), and between countries (over economic and strategic ascendancy). Of course, conflict is a way of life for us humans - but AI is newly involved in the conflicts that are taking centre stage at present. In this talk I’ll outline how AI is involved in current conflicts. I’ll also sketch a few tentative ideas about how AI, and AI regulation, can help to mitigate current conflicts.

Video
18 July Rutherford House, RH105 *note different room!* Ali Knott, Simon McCallum (VUW Engineering and Computer Science), Tom Barraclough (Brainbox)

New Zealand's AI Strategy: A first look

The NZ government has just released its Strategy for AI. In this seminar, we will give an overview of the strategy document and the context of its creation. This will be an early opportunity to talk about the strategy, and to compare it to strategies developed in other countries.

Video
25 July Rutherford House, RH105 *note different room!* Karaitiana Taiuru (Taiuru & Associates, University of Canterbury)

Developments, cultural appropriation/taxation with Māori in AI and Data Governance: Associated risks, benefits and solutions

This talk will survey Te Ao Māori interactions with AI in 2025, spanning OIAs to all of government regarding Māori Data Sovereignty and Governance (looking at what works and what doesn't), business attitudes to Te Tiriti, culture and Māori staff in AI and data, and Iwi and Māori implementations and initiatives involving AI and data. It will address the current risks and opportunities for all of New Zealand.

 
1 August Rutherford House, RH1209/1210 Mark Bennett (Faculty of Law, VUW), Amanda Wolf (School of Government, VUW), John Randal (VUW School of Economics & Finance)

Dispelling the Illusion of Learning: Shifting the Culture for Professional Students in an AI Future

As generative AI systems increasingly match (and at times outperform) the best student work in common university assessment tasks such as analytical writing, problem-solving, and applied reasoning, educators face a fundamental question: what kinds of learning still matter, how should we teach for them, and how should we assess them?

This talk brings together perspectives from the Faculty of Law and the Wellington School of Business and Government to explore how AI is transforming professional education. Drawing on recent experiments and first-hand teaching experience, we offer tentative analyses of the growing capability of AI to perform traditional academic and professional tasks, and of what this means for assessment, curriculum design - and, more broadly, the future of knowledge work and 'the university'.

We will consider to what degree students still need to demonstrate these capabilities to become experts who can use AI as a lever for higher-order reasoning, or whether AI's growing competence demands a sharper pivot: toward helping students identify, apply, and extend the kinds of judgment, contextual sensitivity, and ethical reasoning that remain distinctly human.

We will also consider the danger of the persistent illusion of learning - the assumption that task completion signals true understanding - and explore how AI reveals its limits.

 
8 August Rutherford House, RH1209/1210 Jiun Youn (VUW Psychology) How Generative AI Changed My Classes: A Course Coordinator's Log (2023 - 2025)

Since early 2023, I have been working to introduce the constructive use of generative AI in the classes I coordinate. This talk will cover my experiments with reference-based AI tool tutorials, chatbots, and large-scale oral assessments as ways to address the challenges posed by this technology. I will also discuss the changes I have been noticing in students' perspectives toward AI in teaching, and what we might be able to do to create a more meaningful experience for both students and academics while working with, rather than ignoring, this technology.
 
15 August Rutherford House, RH1209/1210 Ethan Rogacion, Aría Lal (VUWSA Exec)

The impact of AI on assessments and student learning at University

 
22 August Mid-trimester break - no seminar!      
29 August Mid-trimester break - no seminar!      
5 Sept Rutherford House, RH1209/1210 Xavier Marquez (VUW Political Science)

Aristotle, Natural Slavery, and Generative AI

In Book I of the Politics, Aristotle developed an infamous argument about "natural slavery", that is, the idea that there is a set of human beings who are "by nature" slaves, and who would thus be better off being owned by another person. These arguments characterize natural slaves in a variety of not entirely consistent ways, but they converge on the idea of the natural slave as an animate tool, with enough rationality to follow orders and work for another but not enough to rule themselves. While it is generally acknowledged that these arguments fail for human beings, the concept of the natural slave turns out to be a surprisingly interesting lens for understanding modern Generative AI agents since, unlike human beings, many Generative AI agents do fit the category of an animate, rational tool. Indeed, Aristotle himself argues that certain kinds of intelligent machines, if they existed, would be equivalent to natural slaves (Politics 1253b30–1254b35). In this paper I propose taking this analogy seriously, arguing that many forms of Generative AI can be understood as animate tools in the Aristotelian sense. I then trace some of the political and ethical implications of this view, drawing on a variety of theoretical traditions. In particular, I argue that if Generative AI agents are best understood as animate tools, and thus can be "owned", the distribution of such ownership is of the first importance; but so are the cultural implications of developing into a “slave society”, i.e., a society where much work is done by “animate tools”.

 
12 Sept Rutherford House, RH1209/1210 Hannah Betts (VUW, School of Science in Society)    
19 Sept Rutherford House, RH1209/1210 Daniel Guppy

How should we think about 'work' in the age of AI?

 
26 Sept Rutherford House, RH105 *note different room!*  

The Royal Society Te Apārangi's guidelines on Generative AI in Research: A first report, and some ongoing work

In this session, we will introduce some guidelines recently published by the Royal Society Te Apārangi, on Uses of Generative AI for research in Aotearoa New Zealand. We will also discuss how more detailed guidelines could be provided that cater for different research disciplines and different research tasks, while keeping up with the rapidly changing state of AI technology.

 
3 Oct Rutherford House, RH1209/1210 Johniel Bocacao (VUW School of Engineering and Computer Science; ACC)    
10 Oct Rutherford House, RH1209/1210 Bogdan State (VUW School of Information Management) Spam and Recommender Systems: A Long History