AI and Society Seminar Series - clearing the decks for 2026!
These seminars consider AI's social impacts in a range of areas, and discuss how AI can best be overseen to maximise positive impacts and minimise negative ones. They are open to anyone who is interested in attending. We don't presuppose any technical background: if we present AI systems, we do it 'from scratch', in ways that should be accessible to everyone. Discussing AI's social impacts is a very interdisciplinary task, and our speakers and participants come from many different backgrounds, in academia, government, industry and NGOs. The series organiser is Ali Knott (ali.knott@vuw.ac.nz): please email Ali if you'd like to be added to the mailing list. Details of previous years' seminars can be found here: 2023 (22 seminars), 2024 (24 seminars), 2025 (23 seminars). Seminars run from 4:00 to 5:30pm.
Trimester 1
| Date | Venue | Speaker | Title/topic |
|---|---|---|---|
| 27 Feb | Rutherford House, RH1209/1210 | Ali Knott (VUW) | What's new in AI and AI governance in 2026? |
| 6 March | Rutherford House, RH1209/1210 | Panel: Andrew Ruthven (Catalyst), Tom Barraclough (Brainbox), Ali Knott (VUW) | Can New Zealand develop its own 'sovereign large language model'? |
| 13 March | Rutherford House, RH1209/1210 | Keoni Mahelona (Te Hiku Media) | Talk (title TBA) on Māori language modelling |
| 20 March | Rutherford House, RH1209/1210 | Tatenda Tavingeyi (VUW Humanities and Social Sciences, AI for Cultural Heritage Institute) | AI in the developing world |
| 27 March | Rutherford House, RH1209/1210 | Jeanelle Frontin, Kevin Licorish (Createch Labs, VUW Engineering and Computer Science) | The Sovereignty of the Seed: Preserving Human Authorship in the Age of Interpretive AI. As the legal landscape of 2026 shifts toward stricter definitions of AI-generated "copying" and "authorship," a critical question remains: can AI amplify creativity without diluting the creator's sovereignty? This seminar introduces a paradigm shift from Generative AI (making content from scratch) to Interpretive AI (mapping and extending human intent). We explore three real-world products: a conversational literary avatar, a musician's audio-visualizer, and a symbolic map for visual artists. Through them, we demonstrate how the "Primary Creative Seed" (the novel, the song, the sketch) remains the master IP. By using AI as an "Interpretive Layer" rather than a creator, we show how artists can use LLMs, RAG, and Computer Vision to interrogate their own work, deepening the creative process while maintaining a "copyright fortress" around their human-authored core. |
| 3 April | | | Good Friday: no seminar! |
| 10 April | | | Mid-trimester break: no seminar! |
| 17 April | | | Mid-trimester break: no seminar! |
| 24 April | Rutherford House, RH1209/1210 | Matt Farrington (VUW Legal Counsel) | Gen AI and Copyright: Are current legal definitions of 'copying' sufficient to deal with the process of AI training? |
| 1 May | Room TBA | Graeme Austin (VUW Law), Daniel Watterson (Copyright Licensing NZ) | Gen AI and Copyright: A survey of recent copyright cases brought against Gen AI companies around the world |
| 8 May | Room TBA | Sally Jane Norman, Dugal McKinnon, Jim Murphy, Mo Zareei (New Zealand School of Music) | AI in music and sonic arts |
| 15 May | Rutherford House, RH1209/1210 | | |
| 22 May | Rutherford House, RH1209/1210 | Rebecca Downes (VUW School of Management), Jocelyn Cranefield (VUW School of Information Management) | Morally Repugnant AI? An Alternative Lens for Predicting the Future of Work. One of the most common questions about AI concerns the future of work. Will artificial intelligence usher in a post-work society, or trigger mass unemployment and social unrest? At a more immediate level, most people at least want to know: is my own job safe? In a recent paper, Dr Jocelyn Cranefield, Dr Mian Wu, and I draw on interview data to argue that people rely on ethics-informed heuristics when deciding where to use (and not use) generative AI in their professional roles. These ethics-informed heuristics shape individual use and thereby influence where AI tools might gain traction in organisations. I'll explain how our findings fit with a recent Harvard working paper, which uses a tidy piece of research design to distinguish performance-based resistance to AI (for example, that the system is not yet good enough) from principles-based resistance (that using AI feels morally wrong). The authors conclude that some jobs are likely to remain categorically off-limits, not because they cannot be automated, but because such automation is experienced as morally repugnant. Taken together, the key question for predicting job displacement may not be whether a job can be automated, but what that job means, socially and morally. I suggest that the boundary between performance and principle offers a compelling basis for predicting where disruption is likely to occur in the workforce, and where it will meet durable resistance. I'll be interested to hear whether the AI & Society community agrees. |
| 29 May | Rutherford House, RH1209/1210 | | |
Trimester 2
| Date | Venue | Speaker | Title/topic |
|---|---|---|---|
| TBA! | | | |