AI and Society Seminar Series

These seminars consider AI's social impacts in a range of areas, and discuss how AI can best be overseen to maximise positive impacts and minimise negative ones.

They are open to anyone who is interested in attending. We don't presuppose any technical background: if we present AI systems, we do it 'from scratch', in ways that should be accessible to everyone.

Discussing AI's social impacts is a very interdisciplinary task, and our speakers and participants come from many different backgrounds, in academia, government, industry and NGOs.

The series organiser is Ali Knott; please email Ali if you'd like to be added to the mailing list.

Details of previous years' seminars can be found here: 2023

Seminars are at 4:00-5:30pm in T1, and at 4:30-5:30pm in T2 (unless otherwise specified).

Trimester 1:

1 March (Rutherford House, RH1209/1210)
This seminar will provide an overview of the EU Artificial Intelligence Act, a landmark proposal poised to become the first comprehensive regulation of AI globally. We will discuss the Act's scope, detailing who will be affected and the exceptions that apply, such as for open-source initiatives. The presentation will further explore the Act's categorization of AI systems, highlighting the key responsibilities and considerations for each category. We will also delve into the enforcement strategies, and the anticipated challenges and complexities in implementing the Act, aiming to provide an understanding of its potential impact on the global AI landscape. Additionally, we will address the stance that New Zealand could adopt in relation to this regulation.

8 March (Rutherford House, RH1209/1210)
This seminar will provide an update on new developments in AI over the last few months. We'll cover:
- Google/DeepMind's Gemini multimodal generator (Pro 1.5, Ultra and Nano versions);
- OpenAI's Sora (which produces videos from text prompts);
- Google/DeepMind's Alpha Geometry (which solves geometry problems at Olympiad level);
- Microsoft's Copilot generator (now integrated into many Microsoft products), with some use cases for GPT;
- 1X's EVE humanoid robot, and the new Rabbit personal companion / operating system (both powered by 'large action models').

15 March (Rutherford House, RH1209/1210)
The EU's AI Act imposes some new obligations on generative AI companies, to support detection of the content their generators produce. Biden's Executive Order on AI also imposes requirements on companies in this area. In this seminar I'll introduce these new obligations, and discuss how they could be met. I'll also flag some promising-looking methods for identifying AI-generated images, which seem to work reliably (at present) even without support from companies.

22 March (Rutherford House, RH1209/1210)
Like other countries, New Zealand is considering how it can maximise the benefits and minimise harms of AI technologies and the way they’re deployed. But New Zealand has some unique characteristics too and it’s likely we’ll require a broader regulatory response than a single piece of legislation. Tom will speak to the EU AI Act as an international example, and outline where initiatives like the NZ AI Policy Tracker can play a role in our domestic response.

29 March: Good Friday, no seminar!

5 April: Mid-trimester break, no seminar!

12 April: Mid-trimester break, no seminar!

19 April (Rutherford House, RH1209/1210)
  • David Talbot (Talbot Mills Research): Making myself redundant: A market researcher's experiments with gen AI
We'd assumed that qualitative research would perhaps be one of the later areas to be disrupted by AI. Surely the special respondent/facilitator dynamic would be impossible to replicate. Initial experiments in getting AI to ask, rather than answer, questions are encouraging, however. In this session I'll demonstrate the prototype market research tool we've built and make some observations on its strengths and weaknesses. I've still got a job for now, but for how much longer?

26 April (Rutherford House, RH1209/1210)
  • Jess Robertson (Chief Scientist, High Performance Computing and Data Science, NIWA): Making decisions in flux: challenges for the use of AI in policymaking and regulatory stewardship
We often approach AI in governmental decision making from an assumption of a fixed decision-making framework (eg existing legislation, an existing regulatory system). This is a useful assumption from an ML perspective because it lets us optimize our approach for nice mathematical metrics like predictive accuracy rather than fuzzy concepts like ‘fairness’. However, the reality is many of our regulatory systems are trying to achieve other aims, including transparency or timeliness of decision making, or early signalling of potentially challenging decisions to give regulated parties time to adjust and accept outcomes in which they might be worse off. These are challenges that ML and AI can help with but I will argue in this talk that it will require a better dialogue between the tools we create and the frameworks we use for decision making.

3 May (Rutherford House, RH1209/1210)
  • Sean Audain (Wellington City Council) and Jocelyn Cranefield (School of Information Management, VUW): AI for civic consultation and communication.
Sean will talk about 'Using AI to connect the city with its people: opportunities and challenges'. Jocelyn will talk about the 'Giving Voice to the City' project.

10 May (**Note different venue! Rutherford House Mezzanine floor, RHMZ54**)
In this seminar, Ali will review the emerging debate between proponents of 'open-source' and 'closed-source' generative AI models, pitting the newly-founded AI Alliance against the scarcely-older Frontier Model Forum, and argue that some of the dilemmas in this debate are minimised if parts of large AI systems are owned and administered in the public domain. Then Markus will propose that data stewardship is a good example of a piece of AI infrastructure that belongs in the public domain.

17 May (Rutherford House, RH1209/1210)
  • Chris Cormack (Catalyst Ltd): Data sovereignty: Why it matters in Aotearoa NZ

    Royal Society Te Apārangi's recent report, Mana Raraunga Data Sovereignty, gives an overview of the concepts of data sovereignty, Indigenous data sovereignty, and Māori data sovereignty. In this digital era, when data has become hugely valuable and a source of power, these concepts are helping guide answers to questions about who owns, controls, and protects our data. The report also summarises new data practices that are emerging to create a data future that benefits us all.

24 May (Rutherford House, RH1209/1210)
  • Practical LLM projects at Te Herenga Waka
In this seminar we will hear about five projects exploring practical uses of AI language models at VUW.
  • An introduction to the new AI research tools available in the Library (Marcus Harvey, VUW Library)
  • An introduction to what's being done in VUW's Working Group on AI for teaching and learning (Stella McIntosh, VUW Academic Integrity; Robert Stratford, VUW Academic Office)
  • An introduction to the student-facing AI Puaha project, building an AI chatbot offering student advice (Ali will describe what he knows here)
  • An introduction to the staff-facing policy chatbot project (Matt Farrington, VUW Legal Counsel)
  • An introduction to VUW’s Policy Hub AI project, offering advice to government departments on LLMs (Andrew Jackson, VUW Policy Hub, Simon McCallum, VUW Computer Science)

31 May - AI in the pub!
  • This is the last week of Trimester, so we'll relocate to the pub for this edition! We'll meet at the usual time (4pm) at the Thistle Inn, very near Rutherford House (3 Mulgrave St). We'll be in the marquee - all welcome!

Winter break

Trimester 2:

12 July (Rutherford House, **RHMZ03**): *Note venue is different from usual!*
  • Johniel Bocacao (Engineering and Computer Science, VUW; Policy Evidence and Insights, ACC) - AI and Algorithms in Government 101

    This presentation is a crash course on the use of algorithms and artificial intelligence in New Zealand’s public sector, and the considerations and legal obligations of agencies employing these technologies. It draws on examples from a 2018 algorithm assessment stocktake across Government, which resulted in recommendations that formed the basis of the Algorithm Charter, a voluntary commitment to safe and transparent algorithm use in the public service. This history sets the context for Johniel’s recently initiated research into what a future evolution of AI and algorithm governance in New Zealand could look like, particularly in a new era both technologically (as generative AI throws a spanner in the works) and politically (with a renewed focus on using advanced analytics to inform policy and investment).

    Later this year, Johniel will be conducting user research into firsthand experiences of both developers and governance/risk advisors in algorithm deployment and governance. Flick him a message if you or someone you know would be keen to chat!

19 July (Rutherford House, RH1209/1210):
  • Emma MacDonald (Centre for Data Ethics and Innovation, Stats NZ): The Algorithm Charter (and other ways to build trust)
Following on from Johniel’s talk last week, this talk will dive deeper into the Algorithm Charter and the other key pieces of guidance that the Government uses to support the use of ethical data across government (Ngā Tikanga Paihere, 5 Safes, the Data Protection and User Policy (DPUP) and the Privacy, Human Rights and Ethics framework (PHRaE)).

The talk will also cover recent changes and developments to this guidance, and how Stats NZ is looking to address some of the recommendations from the Taylor Fry review of the Algorithm Charter.

26 July (Rutherford House, RH1209/1210):
  • Paul Duignan (Paul Duignan Consulting): How to talk about AI more strategically
We don't yet have the right language for having rich discussions about AI from a strategic, social, economic, and psychological perspective. If we were going on a road trip, we would talk about map apps, charging stations, and our accommodation. We wouldn't find talking about drive shafts, steering columns, and wheel alignment particularly useful. We do not yet have language at the right level for non-technologists to strategically and usefully talk about AI, its multiple opportunities, and its substantial risks. Paul will talk about some of the new terms he is introducing in his new book Surfing AI: 30 Fresh Terms and Smarter Ways of Talking About Artificial Intelligence, such as: AI-ology, AGI-ism, AI compelled, AI gotcha, AI-induced hyper-irritability, knowledgeability, AI vertigo, app cannibalization, autonomization imperative, AI concealism, polycreativity, synesthetic communication, Eye of God AI, experiential optimization, firewalled communities, frictionless experience, ideaspheres, infonakedness, metafactuals, node-ism, nudgability, P(To Know But Not Be), reality hunger, trustability, and virtualization ethics.

2 August (Rutherford House, RH1209/1210):
  • Bronwyn Howell (VUW School of Management)

9 August (Rutherford House, RH1209/1210):
  • TBA

16 August (Rutherford House, RH1209/1210):
  • TBA
23 August: Mid-trimester break, no seminar!

30 August: Mid-trimester break, no seminar!

6 September (Rutherford House, RH1209/1210):
  • Robby Nadler (VUW Linguistics and Applied Language Sciences): Reimagining writing in the age of AI
When prompting ChatGPT to write an abstract for a talk about how writing needs to be re-imagined in the age of generative AI, the output was:

In the rapidly evolving landscape of artificial intelligence, generative AI technologies are revolutionizing the way we create and consume written content. This talk explores the profound implications of AI on writing, emphasizing the need to re-imagine traditional practices and adapt to new paradigms. We will delve into the capabilities of generative AI, such as producing coherent and contextually relevant text, and its potential to enhance creativity, productivity, and accessibility in writing. Additionally, we will address ethical considerations, including authorship, originality, and bias, highlighting the importance of responsible AI use. By examining case studies and real-world applications, this talk aims to provide insights into how writers, educators, and content creators can harness the power of generative AI to innovate and thrive in this transformative era.

There are some nifty ideas there, and this talk will be about some of them. But such a conception also seems to fundamentally misunderstand writing. This is key because how are we as a society to understand what writing will become if we do not understand what writing is? Writing is a mode of communication, of thinking, of expression. It is also a cognitive anomaly that is mentally taxing and time intensive. This is to say nothing of the sociocultural dimensions that define writing as they relate to concepts such as quality, genre, and narrative. That is, how humans use writing, and the ways generative AI is likely to fundamentally change writing in educational, professional, and social spheres, cannot be monolithic. As such, this talk will explore how researchers conceptualize writing and then discuss the myriad ways AI is likely to transform how humans interact with the broad skill we call writing.

13 September (Rutherford House, RH1209/1210):
  • TBA

20 September (Rutherford House, RH1209/1210):
  • TBA

27 September (Rutherford House, RH1209/1210):
  • TBA

4 October (Rutherford House, RH1209/1210):
  • TBA

11 October (Rutherford House, RH1209/1210):
  • TBA