AI and Society Seminar Series
These seminars consider AI's social impacts in a range of areas, and discuss how AI can best be overseen to maximise positive impacts and minimise negative ones.
They are open to anyone who is interested in attending. We don't presuppose any technical background: if we present AI systems, we do it 'from scratch', in ways that should be accessible to everyone.
Discussing AI's social impacts is a very interdisciplinary task, and our speakers and participants come from many different backgrounds, in academia, government, industry and NGOs.
The series organiser is Ali Knott (ali.knott@vuw.ac.nz): please email Ali if you'd like to be added to the mailing list.
Details of previous years' seminars can be found here:
2023
Seminars are at 4:00-5:30pm in Trimester 1, and at 4:30-5:30pm in Trimester 2 (unless otherwise specified).
Trimester 1:
1 March (Rutherford House, RH1209/1210)
This seminar will provide an overview of the EU Artificial Intelligence Act, a landmark proposal poised to become the first comprehensive regulation of AI globally. We will discuss the Act's scope, detailing who will be affected and the exceptions that apply, such as for open-source initiatives. The presentation will further explore the Act's categorization of AI systems, highlighting the key responsibilities and considerations for each category. We will also delve into the enforcement strategies, and the anticipated challenges and complexities in implementing the Act, aiming to provide an understanding of its potential impact on the global AI landscape. Additionally, we will address the stance that New Zealand could adopt in relation to this regulation.
8 March (Rutherford House, RH1209/1210)
This seminar will provide an update on new developments in AI over the last few months. We'll cover:
- Google/DeepMind's Gemini multimodal generator (1.5 Pro, Ultra and Nano versions);
- OpenAI's Sora (which produces videos from text prompts);
- Google/DeepMind's AlphaGeometry (which solves geometry problems at Olympiad level).
15 March (Rutherford House, RH1209/1210)
The EU's AI Act imposes some new obligations on generative AI companies, to support detection of the content their generators produce. Biden's Executive Order on AI also imposes some requirements on companies in this area. In this seminar I'll introduce these new obligations, and discuss how they could be met. I'll also flag some promising-looking methods for identifying AI-generated images, which seem to work reliably (at present) even without support from companies.
22 March (Rutherford House, RH1209/1210)
Like other countries, New Zealand is considering how it can maximise the benefits and minimise the harms of AI technologies and the way they’re deployed. But New Zealand has some unique characteristics too, and it’s likely we’ll require a broader regulatory response than a single piece of legislation. Tom will speak to the EU AI Act as an international example, and outline where initiatives like the NZ AI Policy Tracker can play a role in our domestic response.
29 March:
Good Friday, no seminar!
5 April:
Mid-trimester break, no seminar!
12 April:
Mid-trimester break, no seminar!
19 April (Rutherford House, RH1209/1210)
- David Talbot (Talbot Mills Research): Making myself redundant: A market researcher's experiments with gen AI
We’d assumed that qualitative research would perhaps be one of the later areas to be disrupted by AI. Surely the special respondent/facilitator dynamic would be impossible to replicate. Initial experiments getting AI to ask - rather than answer - questions are encouraging, however. In this session I’ll demonstrate the prototype market research tool we’ve built and make some observations on its strengths and weaknesses. I’ve still got a job for now, but for how much longer?
26 April (Rutherford House, RH1209/1210)
- Jess Robertson (Chief Scientist, High Performance Computing and Data Science, NIWA): Making decisions in flux: challenges for the use of AI in policymaking and regulatory stewardship
We often approach AI in governmental decision making from an assumption of a fixed decision-making framework (eg existing legislation, an existing regulatory system). This is a useful assumption from an ML perspective because it lets us optimize our approach for nice mathematical metrics like predictive accuracy rather than fuzzy concepts like ‘fairness’. However, the reality is that many of our regulatory systems are trying to achieve other aims, including transparency or timeliness of decision making, or early signalling of potentially challenging decisions to give regulated parties time to adjust and accept outcomes in which they might be worse off. These are challenges that ML and AI can help with, but I will argue in this talk that doing so will require a better dialogue between the tools we create and the frameworks we use for decision making.
3 May (Rutherford House, RH1209/1210)
- Sean Audain (Wellington City Council) and Jocelyn Cranefield (School of Information Management, VUW): AI for civic consultation and communication.
Sean will talk about 'Using AI to connect the city with its people: opportunities and challenges'. Jocelyn will talk about the 'Giving Voice to the City' project.
10 May (**Note different venue! Rutherford House Mezzanine floor, RHMZ54**)
In this seminar, Ali will review the emerging debate between proponents of 'open-source' and 'closed-source' generative AI models, pitting the newly-founded AI Alliance against the scarcely-older Frontier Model Forum, and argue that some of the dilemmas in this debate are minimised if parts of large AI systems are owned and administered in the public domain. Then Markus will propose that data stewardship is a good example of a piece of AI infrastructure that belongs in the public domain.
17 May (Rutherford House, RH1209/1210)
- Chris Cormack (Catalyst Ltd): Māori data sovereignty: Why it matters in Aotearoa NZ
Royal Society Te Apārangi's recent report, Mana Raraunga Data Sovereignty, gives an overview of the concepts of data sovereignty, Indigenous data sovereignty, and Māori data sovereignty. In this digital era, when data has become hugely valuable and a source of power, these concepts are helping guide answers to questions about who owns, controls, and protects our data. The report also summarises new data practices that are emerging to create a data future that benefits us all.
24 May (Rutherford House, RH1209/1210)
- Practical LLM projects at Te Herenga Waka
In this seminar we will hear about five projects exploring practical uses of AI language models at VUW.
- An introduction to the new AI research tools available in the Library (Marcus Harvey, VUW Library)
- An introduction to what’s being done in VUW’s Working Group on AI for teaching and learning (Stella McIntosh, VUW Academic Integrity; Robert Stratford, VUW Academic Office)
- An introduction to the student-facing AI Puaha project, building an AI chatbot offering student advice (Ali will describe what he knows here)
- An introduction to the staff-facing policy chatbot project (Matt Farrington, VUW Legal Counsel)
- An introduction to VUW’s Policy Hub AI project, offering advice to government departments on LLMs (Andrew Jackson, VUW Policy Hub; Simon McCallum, VUW Computer Science)
31 May - AI in the pub!
- This is the last week of Trimester, so we'll relocate to the pub for this edition! We'll meet at the usual time (4pm) at the Thistle Inn, very near Rutherford House (3 Mulgrave St). We'll be in the marquee - all welcome!
Winter break
Trimester 2:
12 July (Rutherford House, **RHMZ03**): *Note venue is different from usual!*
- Johniel Bocacao (Engineering and Computer Science, VUW; Policy Evidence and Insights, ACC) - AI and Algorithms in Government 101
This presentation is a crash course on the use of algorithms and artificial intelligence in New Zealand’s public sector, and the considerations and legal obligations of agencies employing these technologies. It draws on examples from a 2018 algorithm assessment stocktake across Government, which resulted in recommendations that formed the basis of the Algorithm Charter, a voluntary commitment to safe and transparent algorithm use in the public service. This history sets the context for Johniel’s recently initiated research into what a future evolution of AI and algorithm governance in New Zealand could look like, particularly in a new era both technologically (as generative AI throws a spanner in the works) and politically (with a renewed focus on using advanced analytics to inform policy and investment).
Later this year, Johniel will be conducting user research into firsthand experiences of both developers and governance/risk advisors in algorithm deployment and governance. Flick him a message at LinkedIn.com/in/johniel or at johniel.bocacao@vuw.ac.nz if you or someone you know would be keen to chat!
19 July (Rutherford House, RH1209/1210):
Following on from Johniel’s talk last week, this talk will dive deeper into the Algorithm Charter and the other key pieces of guidance that the Government uses to support the ethical use of data across government (Ngā Tikanga Paihere, the 5 Safes, the Data Protection and Use Policy (DPUP) and the Privacy, Human Rights and Ethics framework (PHRaE)).
The talk will cover recent changes and developments in this guidance, and how Stats NZ is looking to address some of the recommendations of the Taylor Fry review of the Algorithm Charter.
26 July (Rutherford House, RH1209/1210):
- Paul Duignan (Paul Duignan Consulting): How to talk about AI more strategically
We don't yet have the right language for having rich discussions about AI from a strategic, social, economic, and psychological perspective. If we were going on a road trip, we would talk about map apps, charging stations, and our accommodation. We wouldn't find talking about drive shafts, steering columns, and wheel alignment particularly useful. We do not yet have language at the right level for non-technologists to strategically and usefully talk about AI, its multiple opportunities, and its substantial risks. Paul will talk about some of the new terms he is introducing in his new book Surfing AI: 30 Fresh Terms and Smarter Ways of Talking About Artificial Intelligence, such as: AI-ology, AGI-ism, AI compelled, AI gotcha, AI-induced hyper-irritability, knowledgeability, AI vertigo, app cannibalization, autonomization imperative, AI concealism, polycreativity, synesthetic communication, Eye of God AI, experiential optimization, firewalled communities, frictionless experience, ideaspheres, infonakedness, metafactuals, node-ism, nudgability, P(To Know But Not Be), reality hunger, trustability, and virtualization ethics.
2 August (Rutherford House, RH1209/1210): Bronwyn Howell (School of Management, VUW)
This presentation/discussion draws on the first part of Bronwyn’s Research and Study Leave adventure in Washington DC investigating industry and government responses to the challenges of “regulating AI” – a.k.a. “the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence”. A specific focus has been on exploring the distinctions between risk and uncertainty in the development and use of GPTs.
The presentation will compare and contrast the risk management basis for the regulations embodied in the EU AI Act and the industry self-governance-led principles of the US National Institute of Standards and Technology voluntary Risk Management Framework with an emphasis on the extent to which each of these frameworks manages the challenges presented by GPTs. GPTs differ from the big data-inspired algorithms which have underpinned the ISO 31000-based enterprise risk management processes that have informed the development of both the EU and NIST frameworks, in that they are also “general purpose technologies” where their subsequent adaptation and use will occur in a very different institutional context from the “big data algorithms”. This poses new and different challenges for both their original developers and subsequent adapters. Unsurprisingly, different Washington stakeholders have different views on how they and governments (State and Federal) should respond, as have the governments themselves.
Dr Howell has a PhD in economics and public policy, an MBA, and a BA in operations research, all from Victoria University of Wellington in New Zealand. She is Senior Lecturer in the School of Management at Victoria University of Wellington, a non-resident Senior Fellow at the American Enterprise Institute in Washington DC, where she focuses on the regulation, development, and deployment of new technologies, a Senior Research Associate of the Public Utilities Research Center at the University of Florida and Research Principal of the Institute for Telecommunications and Network Economics. She is also a Board Member and Secretary of the International Telecommunications Society and Fellow of the Law and Economics Association of New Zealand. Prior to entering academia she spent 15 years in the ICT industry. Between 1999 and 2014, she was variously Research Principal, Acting Director and General Manager of the New Zealand Institute for the Study of Competition and Regulation.
8 August (out-of-schedule talk on the Kelburn campus): Cheng Kai (CK) Jin (Clinical Director, AI Laboratory, Te Whatu Ora):
AI Governance within Health New Zealand
The talk will discuss the AI governance process within Health New Zealand, provide information on the frameworks used to evaluate AI tools, provide real-world examples of proposals that have undergone review, and discuss the challenges faced throughout the governance journey. Attendees will gain a deeper understanding of the ethical, legal, and regulatory considerations involved in developing and implementing AI in healthcare.
9 August (Rutherford House, RH1209/1210):
Kevin Shedlock (VUW Engineering / Computer Science): Digital mātauranga and AI
- Māori digital knowledge (mātauranga) is embedded within the context of tradition, relationships and ceremony. In this presentation, I highlight some of my research intersecting Western and Indigenous methodologies during the construction of the intelligent IT artefact.
16 August (Rutherford House, RH1209/1210): Udayan Mukherjee, Benjamin Stubbing (Treasury): AI’s implications for economic policy: Thoughts on theory and practice
This discussion will present a dual perspective on how AI is being explored in the Treasury. First, a discussion on economic policy implications for NZ, drawing on the emerging theory and evidence on AI in the economics discipline. Second, an introduction to some nascent applications of AI to the practical work of a government agency that is responsible for producing analysis and advice on economic policy.
The first part of the talk will draw largely on this recently published Analytical Note, which introduced economic frameworks for assessing the impact of AI and provided a qualitative assessment of the implications of AI for New Zealand. In discussion we will also talk about how the debate has moved even in the few months since we finalised the paper. The second part of the talk will show how the Analytics and Insights team—Treasury’s microsimulation modelling and research team—are using AI to democratise their analytical tools to improve the quality of Treasury’s advice.
23 August:
Mid-trimester break, no seminar!
30 August:
Mid-trimester break, no seminar!
6 September (Rutherford House, RH1209/1210):
John Penn (Adobe): Making a Difference: Understanding, Immersion and Involvement
This session traces the journey from Photoshop engineering to tackling global crimes against children, highlighting a career shift inspired by a law enforcement conference. It explores the partnership between Adobe and the National Center for Missing and Exploited Children, showcasing how Adobe technologies are instrumental in rescuing the missing and solving crimes.
The session will also explore the rapid advancements in Artificial Intelligence (AI) and the impact on law enforcement. We will discuss how new AI tools, such as those in Adobe Photoshop, are transforming digital media enhancement and redaction, making it easier to analyze large data sets. However, we will also address the challenges posed by AI, including the creation of deceptive materials that can be used for extortion and radicalization.
13 September (Rutherford House, RH1209/1210):
Jessica Zosa Forde (Brown University) and James Pavur (White House Presidential Innovation Fellow): Safe, Secure, and Trustworthy: AI for Global Development
The world is working towards building a better future using AI and related technologies. In this talk, we'll explore the practicalities of what that work looks like at the intersection between technologists and diplomats. The objective of the talk is to help AI practitioners understand, through examples of real-world tech diplomacy initiatives, what kinds of questions international policymakers are looking to answer, and to demonstrate how powerful substantive technical engagement can be in facilitating global progress.
20 September (Rutherford House, RH1209/1210): Michael Parkes (Careers and Employment, VUW): Gen AI in the Careers & Employability Space
Join Mike Parkes, Assistant Manager of Careers and Employment at Te Herenga Waka, for a session on the initial impact of generative AI in the employability and careers sector. As the primary contact for Graduate Talent leads nationwide, Mike has a unique perspective on how students and graduates are utilizing generative AI in their job search. He will discuss his observations on student behaviours, challenges, and outcomes when using AI for research, skills diagnostics, job matching, and creating CVs and cover letters. Additionally, Mike will share insights into how graduate recruiters are engaging with this technology, offering a practitioner’s view on its implications for the recruitment process.
27 September (Rutherford House, RH1209/1210): Robby Nadler (VUW Linguistics and Applied Language Sciences): Reimagining writing in the age of AI
When ChatGPT was prompted to write an abstract for a talk about how writing needs to be re-imagined in the age of generative AI, the output was:
In the rapidly evolving landscape of artificial intelligence, generative AI technologies are revolutionizing the way we create and consume written content. This talk explores the profound implications of AI on writing, emphasizing the need to re-imagine traditional practices and adapt to new paradigms. We will delve into the capabilities of generative AI, such as producing coherent and contextually relevant text, and its potential to enhance creativity, productivity, and accessibility in writing. Additionally, we will address ethical considerations, including authorship, originality, and bias, highlighting the importance of responsible AI use. By examining case studies and real-world applications, this talk aims to provide insights into how writers, educators, and content creators can harness the power of generative AI to innovate and thrive in this transformative era.
There are some nifty ideas there, and this talk will be about some of them. But such a conception also seems to fundamentally misunderstand writing. This is key, because how are we as a society to understand what writing will become if we do not understand what writing is? Writing is a mode of communication, of thinking, of expression. It is also a cognitive anomaly that is mentally taxing and time intensive. This is to say nothing of the sociocultural dimensions that define writing as they relate to concepts such as quality, genre, and narrative. That is, how humans use writing, and the ways generative AI is likely to fundamentally change writing in educational, professional, and social spheres, cannot be monolithic. As such, this talk will explore how researchers conceptualize writing and then discuss the myriad ways AI is likely to transform how humans interact with the broad skill we call writing.
4 October (Rutherford House, RH1209/1210):
Mark Sagar (Auckland University; founder, Soul Machines): Beyond the Illusion of Life - Animating digital people
The essence of animation is creating the illusion of life, convincing the audience that a character is alive and has its own feelings and thoughts. Is it possible to bring an interactive digital character to “life” who can “think,” “feel,” have experiences, and act with volition? What is it, exactly, to think, to feel, to experience? I will discuss our biologically based approach, which involves developing a virtual nervous system analogous to our own, combining computational models of sensory, cognitive and emotional processes, language, behavior, and motor systems. These systems activate virtual muscles to animate a virtual face and body, sensing, learning, acting, and reacting in real time. The interoperation of these systems in face-to-face and shared interactions is exemplified in projects like BabyX, a simulated toddler which aims to enhance our comprehension of social learning and behavior — but it can also serve as groundwork for achieving human-like cooperation with future artificial intelligence.
11 October (Rutherford House, RH1209/1210):
Anna Brown (Massey) and Anna Pendergrast (Antistatic Partners): What we can learn from the Digital Council’s 2020 research about trusted and trustworthy automated decision-making in Aotearoa
In 2020, the Digital Council of Aotearoa led research about what’s needed to have the right levels of trust to harness the full societal benefits of automated decision-making. Through community workshops based around a set of scenarios, they examined people’s experiences, aspirations, hopes and challenges with automated decision-making.
In this seminar, members of the research team will discuss:
- the approach and findings of the research
- the actions to build trust in the system that have happened since, and
- how the research – which centred the voices of communities often excluded from decisions about automated decision-making – is relevant in today’s AI context.
You can find an article Anna and Kelly Pendergrast wrote in the New Republic here.
18 October (Rutherford House, RH1209/1210): Andrew Jackson (Policy Hub, VUW): Engaging with the public sector on the effective and safe deployment of AI - the unfolding story