AI and Society Seminar Series 2026

These seminars consider AI's social impacts in a range of areas, and discuss how AI can best be overseen to maximise positive impacts and minimise negative ones.

They are open to anyone who is interested in attending. We don't presuppose any technical background: if we present AI systems, we do it 'from scratch', in ways that should be accessible to everyone.

Discussing AI's social impacts is a very interdisciplinary task, and our speakers and participants come from many different backgrounds, in academia, government, industry and NGOs.

The series organiser is Ali Knott (ali.knott@vuw.ac.nz): please email Ali if you'd like to be added to the mailing list.

Details of previous years' seminars can be found here: 2023 (22 seminars), 2024 (24 seminars), 2025 (23 seminars)

Seminars are at 4:00-5:30pm.

Trimester 1 (Jump to Trimester 2)

Date Venue Speaker Title/topic  
27 Feb Rutherford House, RH1209/1210 Keoni Mahelona (Te Hiku Media)

Join the AI Rebellion ✊🏽

At the conclusion of Karen Hao's Empire of AI, an account of "Open" AI, we are presented with a new hope for the future of AI. That hope is ignited by Te Hiku Media, a small charitable Māori organization based in Kaitāia. This talk will demonstrate how Te Hiku became world renowned for its approach to trustworthy and ethical AI, and it will show how we're beating trillion-dollar corporations at building efficient, accurate models for Aotearoa. If we truly want AI to help Aotearoa, we need to rebel against tech imperialism.

 
6 March Rutherford House, RH1209/1210 Panel: Andrew Ruthven (Catalyst), Tom Barraclough (Brainbox), Ali Knott (VUW)

Can New Zealand develop its own 'sovereign large language models'? (Ali's slides, Tom's slides)

The topic of 'sovereign AI' is becoming prevalent in discussions about tech governance and international relations. The basic idea is that countries can reduce their dependence on AI products from Silicon Valley by developing their own alternatives, owned and operated locally, and also governed locally. In this talk, we'll consider whether Aotearoa could realistically pursue this path.
Last week, Keoni Mahelona presented Te Hiku Media's distinctive approach to sovereignty in language technologies serving Northland Iwi. This week, we will consider a range of other factors relevant to New Zealand. Ali will outline some of the challenges and opportunities that New Zealand confronts when considering building localised LLMs of this kind. Andrew will describe experiences deploying and fine-tuning open-weights large language models (LLMs) on local computing infrastructure at Catalyst Cloud. Tom will discuss concepts of sovereignty as they apply to different elements of the 'AI stack', and consider how NZ-localised LLMs could be governed.

Video
13 March Rutherford House, RH1209/1210 Laura McClure MP

Protecting People in the Age of Deepfakes

AI has evolved rapidly over the past 10 years, but our laws have struggled to keep pace. The rise of deepfake technology has enabled the creation of non-consensual intimate images which cause serious harm to victims. This talk will outline the growing risks posed by non-consensual intimate deepfakes and explain the intent behind the Deepfake Digital Harm and Exploitation Bill. It will explore how we can protect victims while still supporting AI in a way that strengthens rather than undermines a free and responsible society.

Video
20 March Rutherford House, RH1209/1210 Matti Schneider (Open Terms Archive, OpenFisca)

A discussion about digital governance frameworks

In this talk, I’ll present two of the digital governance initiatives I’m working on. OpenFisca is a rules-as-code system used around the world to model tax and benefit systems. I established the OpenFisca Association to govern the way OpenFisca as an open source project is maintained, used, funded and developed over time. I also established Open Terms Archive, an open-source project that systematically retrieves the terms and conditions from technology companies (including social media and AI companies) for archival, research and accountability purposes, making them available as open datasets for re-use. These projects have been recognised internationally, including as digital public infrastructure, digital public goods, and digital commons. After presenting these two governance initiatives, I’ll discuss how digital governance frameworks could be applied to sovereign AI projects, and other digital infrastructure.

Video
27 March Rutherford House, RH1209/1210 Jeanelle Frontin, Kevin Licorish (Createch Labs, VUW Engineering and Computer Science)

The Sovereignty of the Seed: Preserving Human Authorship in the Age of Interpretive AI

As the legal landscape of 2026 shifts toward stricter definitions of AI-generated "copying" and "authorship," a critical question remains: can AI amplify creativity without diluting the creator's sovereignty? This seminar introduces a paradigm shift from Generative AI (making content from scratch) to Interpretive AI (mapping and extending human intent). We explore three real-world products: a conversational literary avatar, a musician’s audio-visualizer, and a symbolic map for visual artists. Through them, we demonstrate how the "Primary Creative Seed" (the novel, the song, the sketch) remains the master IP. By using AI as an "Interpretive Layer" rather than a creator, we show how artists can use LLMs, RAG, and Computer Vision to interrogate their own work, deepening the creative process while maintaining a "copyright fortress" around their human-authored core.

Video

(starts part-way through!)

3 April   Good Friday No seminar!  
10 April   Mid-trimester break No seminar!  
17 April   Mid-trimester break No seminar!  
24 April Rutherford House, RH1209/1210 Matt Farrington (VUW Legal Counsel)

Gen AI and 'copying' (Or how I learned to stop worrying about copyright and love the contract)

In this seminar, my plan is to focus on a few of the technical realities of how LLMs and training actually work, before moving on to how contract law is short-circuiting the discussions about what might/might not be copying. I'll be using "copying" as a shorthand for the much wider issue of what uses are actually permitted or banned by terms and conditions. I'll be aiming to make this interactive with some audience polling to gauge sentiment - I have my own views but I'm really keen to see how the sentiments play out.

I won't be offering any neat legal solutions. My goal is really just to raise the (uncomfortable?) practical considerations we all need to face about the current landscape, and then explicitly tee up the audience for the second copyright lecture next week.

Video
1 May

Note different room!

Rutherford House, RH201

Graeme Austin (VUW Law), Daniel Watterson (Copyright Licensing NZ)

Copyright Infringement in the AI Context: How Markets for Content are Relevant to Defences

There are dozens of pending lawsuits involving copyright claims against artificial intelligence (AI) platforms. In one US case, the judge summed up what's on the line when he said:

These products are expected to generate billions, even trillions, of dollars for the companies that are developing them. If using copyrighted works to train the models is as necessary as the companies say, they will figure out a way to compensate copyright holders for it.

These cases have been fought so hard because the stakes seem existential. Authors’ livelihoods are at risk. Copyright-based industries – publishing, music, film, photography, design, television, software, computer games – face obliteration, as generative AI platforms scrape, copy and analyse massive amounts of copyright-protected content. On the other side, some in the tech sector say copyright is holding up the development of AI models and products.

Graeme Austin (VUW Law Faculty) and Daniel Watterson (Contracts and Policy Manager / Kaiwhakahaere Pakihi, Copyright Licensing New Zealand) will discuss the role of copyright exceptions and defences to the uses of copyright-protected materials by AI firms. Along the way, the session will discuss how US courts might apply the fair use exception, the relevance of the EU AI Act, and how emerging markets for content might inform legal and policy analysis in the AI context. Graeme discussed some of these issues in a recent Conversation piece (see here).

 
8 May

Note different room!

Rutherford House, RH201

Sally Jane Norman, Dugal McKinnon, Jim Murphy (Te Kōkī / New Zealand School of Music)

AI in music and sonic arts (Dugal's slides; Sally Jane's slides; Jim's slides)

Coming from Te Kōkī / the New Zealand School of Music, we are of course aware of the impact of AI music generators like Suno, Udio, MusicLM, Lyria, and others. We are aware of the cognitive and creative risks of such systems that, to quote Ada Lovelace, have 'no pretensions to originate anything', but instead engage in an endless reshuffling of source materials that are increasingly unmoored from their own origins. We are also strongly committed to the unique ways musicking can host and nurture human—and other-than-human—co-creative dynamics and experimentation, opening up spaces for performance and critical dialogue across our ever-extended minds.

Brief presentations by Dugal McKinnon, Jim Murphy, and Sally Jane Norman will reference ongoing work in our respective fields. Our proposals and conversation are offered as, hopefully, a springboard for discussion regarding AI in music and sonic arts, set in its wider ethical and societal context.

 
15 May Rutherford House, RH1209/1210 Tatenda Tavingeyi (VUW Humanities and Social Sciences, AI for Cultural Heritage Institute, An AI of our Own)

AI in the Developing World: Focus on Africa and Southeast Asia through An AI of Our Own (AAOO)

AI has become the most dominant and talked-about technology of the 21st century. Its impacts and potential impacts as a tool have been felt and debated in various sectors including health, climate science, and heritage preservation through language. Its promise is real, but so is the danger of blindly accepting it. The growing "hype" across diverse global communities, driven mainly by the big players in the sector, has led to the rise of what the investigative journalist Karen Hao calls empires of AI in her 2025 book Empire of AI: Inside the Reckless Race for Total Domination. Hao argues that each empire races to convince the world that its competitor is the worse empire, whose solutions hold no good intentions for humanity. As she puts it in the book's abstract: "We have entered a new, ominous age of empire with OpenAI setting a breakneck pace, as a small group of the most valuable companies in human history try to chase it down." She contends that dominant AI development has transformed into a modern-day imperialist project with little to no democratic or ethical oversight. In an interview with Steven Bartlett on The Diary of a CEO (Nov 27, 2025), the technology ethicist Tristan Harris asserts that the race is determined primarily by six individuals whom the global community has never collectively consented to let make decisions about this technology on behalf of eight billion people. In Tristan's view, the central ideology of this race is "I built it first", not ethics. And if one wonders where these empires are being built, the answer is mostly the West. This uncomfortable reality makes it unsurprising that in mainstream discussions of the technology, including its development and ethics, the Developing World is understood to be a passive recipient of these technologies, without their ethical grounds being interrogated.
Here, I intend to demonstrate how the developing world is challenging mainstream narratives implying that it is merely catching up with AI use and the "imperial" frameworks for its development. Focusing on the work of An AI of Our Own (AAOO), a Global South community-driven project, I will share how we envision a future where ethical, community-centered AI replaces the extractive data models highlighted above, advancing sustainable community stewardship of tools and data through advocacy, applied model development, and interdisciplinary research.

 
22 May Rutherford House, RH1209/1210

Rebecca Downes (VUW School of Management)

Jocelyn Cranefield (VUW School of Information Management)

Morally Repugnant AI? An Alternative Lens for Predicting The Future of Work

One of the most common questions about AI concerns the future of work. Will artificial intelligence usher in a post-work society, or trigger mass unemployment and social unrest? At a more immediate level, most people at least want to know: is my own job safe?

In a recent paper, Dr Jocelyn Cranefield, Dr Mian Wu, and I draw on interview data to argue that people rely on ethics-informed heuristics when deciding where to use (and not use) generative AI in their professional roles. These ethics-informed heuristics shape individual use and thereby influence where AI tools might gain traction in organisations. I’ll explain how our findings fit with a recent Harvard working paper, which uses a tidy piece of research design to distinguish performance-based resistance to AI (for example, that the system is not yet good enough) from principles-based resistance (that using AI feels morally wrong). The authors conclude that some jobs are likely to remain categorically off-limits, not because they cannot be automated, but because such automation is experienced as morally repugnant.

Taken together, the key question for predicting job displacement may not be whether a job can be automated, but what that job means—socially and morally. I suggest that the boundary between performance and principle offers a compelling basis for predicting where disruption is likely to occur in the workforce, and where it will meet durable resistance. I’ll be interested to hear whether the AI & Society community agrees.

 
29 May Rutherford House, RH1209/1210 Hannah Betts (recently at VUW School of Science in Society, soon to start a job with the Talos Network)

A snapshot of New Zealand news media reporting on AI: Examining the Gaps, Blind Spots, and Barriers in How AI is Covered in Our News Media

This presentation will cover recent research into New Zealand's news media coverage of AI, completed as part of a Master's by thesis at Victoria University. The research analysed which risks and benefits are being discussed, and the level of explanation provided when referencing AI risks. This was followed by interviews with journalists to understand why this type of reporting was observed, and to identify barriers they experienced when reporting on AI.

Hannah will discuss features of news reporting on AI, including potential blind spots that were noted, such as the low frequency of references to bias, misalignment in AI systems, and use of AI to assist cyberattacks, in contrast to the high academic and/or public attention towards these risks. Additionally, Hannah will present a taxonomy of AI Benefits, created in this research through inductive analysis, which provides a new tool for considering the trade-offs posed by the dual-use nature of AI.

Hannah also interviewed many journalists to learn about their experiences reporting on AI. Hannah will summarise these interactions, and reflect on how their approaches impact New Zealand's news audiences, as well as other journalists and those looking to support journalists to report on AI news.

 

Trimester 2

Date Venue Speaker Title/topic  
17 July Rutherford House, RH201 (note different room!)      
24 July Rutherford House, RH201 (note different room!) Andrew Chen AI in Court: How will we handle AI-Derived Evidence?  
31 July Rutherford House, RH1209/1210 John Daniel Trask (Raygun.com)    
7 August Rutherford House, RH1209/1210      
14 August Rutherford House, RH1209/1210      
Mid-trimester break!
4 September Rutherford House, RH1209/1210 Wayne Patrick (VUW School of Biological Sciences) The AI-driven future of biotechnology in New Zealand  
11 September Rutherford House, RH1209 Hercules Konstantopoulos (Malaghan Institute of Medical Research) All the time in the world: AI-driven computation hones lab work and frees human hands and brains for creative work  
18 September Rutherford House, RH1209/1210      
25 September Rutherford House, RH1209/1210      
2 October Rutherford House, RH1209/1210      
9 October Rutherford House, RH1209/1210 Dave Moskovitz (thinktank consulting), Harisu Shehu (VUW), Matthew Bartlett (VUW, thinkstep) AI and religion