AI and Society Seminars in 2023

Topic 1: AI systems in social media platforms (March-May 2023)

Seminar 1 (3 March): Introduction to the series
  • Ali Knott, Markus Luczak-Roesch (VUW)
Seminar 2 (10 March): Three AI systems impacting social media
  • Ali Knott, Harisu Shehu, Simon McCallum (VUW)
Seminar 3 (17 March): Voluntary initiatives from companies to oversee their AI systems
Seminar 4 (24 March): How can we best measure the performance / impact of AI systems in social media?
  • Ali Knott, Harisu Shehu, Ian Welch (VUW)
Seminar 5 (31 March): Overseeing AI systems on social media platforms: What issues need to be considered?
  • Nicole Moreham (Faculty of Law) will introduce issues that arise in relation to free speech and competing interests.

  • Marcin Betkier (Faculty of Law) will outline some regulatory issues, and discuss some of the interests that must be taken into account in the areas of data and algorithm use.

  • Kevin Shedlock (School of Engineering and Computer Science) will survey challenges confronting Māori in the digital space, with a focus on data sovereignty, algorithm bias, and digital mātauranga.

Seminar 6 (28 April): A survey of the current regulatory landscape for AI systems in social media.
  • Rachel Wolbers (from Meta’s Oversight Board) will talk about Section 230 of the US Communications Decency Act, which is of great importance for social media platforms. She’ll give some historical context, and then review the recent proposed amendments relating to recommender systems.
  • Tom Barraclough (from the Brainbox Institute) will talk about the EU’s Digital Services Act, which came into force in November 2022. He’ll focus on what the act has to say about AI algorithms (recommender systems and content classifiers), and the open questions that still remain about how the act will be implemented.
Seminar 7 (5 May): Social media oversight in New Zealand
  • David Shanks (ex Chief Censor of New Zealand) and other guests

Topic 2: Large Language Models and Foundation Models: GPT and friends (May-November 2023)

Seminar 1 (12 May): Introduction to GPT-style language models
  • Ali Knott (VUW). This talk will introduce how GPT-style models work, without requiring any technical expertise, so as not to freak anyone out.
Seminar 2 (19 May): How to use large language models (LLMs): an introduction to 'prompt engineering'
  • Simon McCallum (VUW). This will be a practical session! Bring your computer if you like!
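  • For anyone who’d like a preview of the session, here is a minimal sketch of the kind of prompt-engineering comparison we might try. It assumes the OpenAI Python client (v1 interface) with an API key in the OPENAI_API_KEY environment variable; the model name and prompts are placeholders, not official seminar materials.

```python
# A minimal sketch: comparing a bare prompt with an 'engineered' prompt,
# assuming the OpenAI Python client (v1 interface) and an API key in the
# OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Two prompts for the same task: a bare request versus a prompt that sets a
# role, constrains the output format, and rules out jargon.
bare_prompt = (
    "Summarise the main argument of the abstract below.\n\n"
    "Abstract: <paste abstract here>"
)

engineered_prompt = (
    "You are an editor for a general-interest science magazine.\n"
    "Summarise the main argument of the abstract below in exactly two "
    "sentences, avoiding technical jargon.\n\n"
    "Abstract: <paste abstract here>"
)

for prompt in (bare_prompt, engineered_prompt):
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder: any chat model would do
        messages=[{"role": "user", "content": prompt}],
        temperature=0.2,        # low temperature makes comparisons more stable
    )
    print(response.choices[0].message.content)
    print("-" * 40)
```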
Seminar 3 (26 May): How are LLMs impacting education? And how should teachers and policymakers respond?
Seminar 4 (21 July): LLM safety (1): 'harmful content' generation
  • Ali Knott (VUW). In this session, we will discuss (and critique) ‘guardrails’ set up to prevent harmful content, and ‘alignment methods’ that orient outputs towards acceptable content. We’ll also consider measures to ensure privacy of certain types of information in the training corpus.
  • GPT-4's alignment methods are nicely described in an OpenAI technical report: GPT-4 System Card.
Seminar 5 (28 July): LLM safety (2): hallucinations (and remedies)
  • Ali Knott (VUW). In this session, we’ll focus on a particular kind of harmful content - namely false content that is reported as fact (sometimes called ‘hallucinations’ in recent discussions). We’ll consider methods for citing sources for reported facts.
  • We’ll also discuss how AI-generated content can be detected; I’ll give an update on some work I’m doing in this area, connecting with EU legislation.
Seminar 6 (4 August): Stable diffusion models for image generation
  • Callum Sleigh (Taylor Fry). In 2022 a new generation of image generation models made a huge leap in the state of the art and quickly gained millions of users. This talk will be a high-level overview of the ideas going into these models – with no mathematics. We will focus on an open-source example: “Stable Diffusion”. Hopefully, learning a bit about the conceptual background will help people who want to interrogate the role these models will play in society.
  • With Callum's introduction of AI methods for image generation, we have a more general target for discussion - ‘foundation models’. The term ‘foundation model’ spans both large language models (LLMs) and large image generation models. A foundation model (FM) is an AI system trained on a large dataset of items from many domains - either texts or images or both - that can also generate items from the same range of domains. The term was introduced in Bommasani et al. (2021), ‘On the Opportunities and Risks of Foundation Models’, if you’d like a reference.
Seminar 7 (11 August): The large foundation model ecosystem
  • Simon McCallum (VUW). In this seminar we will discuss the applications and uses of LLMs and LFMs as part of other systems. We will cover how APIs are being used to plug them into other programmes, giving an overview of LangChain and AutoGPT, and of the impact of adding agency to a system that has an LLM as part of its reasoning. We will also talk a bit about open-source versus closed-source use, the compute costs of these systems, and the proliferation of companies developing niche models. (A rough sketch of the basic API pattern these tools build on follows below.)
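  • As a taster for the discussion, here is a hand-rolled sketch of that basic pattern: an LLM called through an API from inside an ordinary program, with the program acting on the model's output. This is not LangChain or AutoGPT code; the 'calculator' tool, the prompt format and the model name are illustrative assumptions, using the OpenAI Python client.

```python
# A hand-rolled sketch of an LLM plugged into a larger program: the program
# exposes a 'tool', the model can ask for it, and the program acts on that
# request. The tool, prompt format and model name are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

def calculator(expression: str) -> str:
    """A toy 'tool' the surrounding program offers to the model."""
    return str(eval(expression))  # demo only: never eval untrusted input in real code

SYSTEM = (
    "Answer directly, or reply with a single line of the form "
    "'CALC: <expression>' to have a calculator evaluate an arithmetic expression."
)

def ask(question: str) -> str:
    messages = [{"role": "system", "content": SYSTEM},
                {"role": "user", "content": question}]
    reply = client.chat.completions.create(
        model="gpt-3.5-turbo", messages=messages  # placeholder model name
    ).choices[0].message.content
    if reply.strip().startswith("CALC:"):
        # The model has asked the program to act on its behalf: this hand-off
        # is the 'agency' being added around the LLM.
        result = calculator(reply.split("CALC:", 1)[1].strip())
        messages += [{"role": "assistant", "content": reply},
                     {"role": "user", "content": f"Calculator result: {result}"}]
        reply = client.chat.completions.create(
            model="gpt-3.5-turbo", messages=messages
        ).choices[0].message.content
    return reply

print(ask("What is 17 * 23?"))
```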
Seminar 8 (18 August): The case for AI content detection
  • Rebecca Downes (School of Management, VUW) will lead a discussion-based session exploring the case for requiring reliable detection mechanisms as a pre-release condition for new foundation models. Last week Simon introduced foundation models as a key element in the flourishing AI ecosystem. Because foundation models serve as the basis for many downstream applications, they may be a powerful point of intervention. We will discuss the proposition that a central condition on the release of a new state-of-the-art foundation model should be a demonstrated detection mechanism that can distinguish content produced by that model from other content with a high degree of reliability, and we will explore possible methods for achieving this.

Seminar 9 (8 September): LLMs as Models of Human Language and Cognition?
  • Carolyn Wilshire (School of Psychology, VUW) will consider in what ways LLMs are, and are not, models of human language and cognition. She will review what we know about how humans produce language, how network models can be used to advance our understanding of human language, and what features of human language (and cognition) are “missing” in current LLMs.

Seminar 10 (15 September): Understanding GPT's Mind
  • Gina Grimshaw (School of Psychology, VUW). We will take as a given that GPT has/is a mind. But what kind of mind? Because of the way they were created, GPT and other LLMs are black boxes. In this talk, I’ll describe how researchers are using the tools of cognitive psychology to understand how GPT thinks. It turns out to be both smarter and dumber than we might expect. Its strengths (and weaknesses) tell us something both about its own mind, and about the human minds that created the language at its core.
Seminar 11 (22 September): Generative AI in the workplace: Now what?!
  • Prof James Maclaurin (Director of the Centre for AI and Public Policy at the University of Otago). In a previous era (2021), Gavaghan, Knott and Maclaurin published The Impact of Artificial Intelligence on Jobs and Work in New Zealand. The development, deployment and perception of artificial intelligence have changed radically in the intervening two years. What does that change mean for jobs and work in Aotearoa? For reasons that will be explained, this talk will not dabble in estimating how many jobs will be disrupted by AI in the near future. It will instead focus on recent changes in the technological, ethical, regulatory, and commercial environment in which AI is being developed and deployed. It will finish by asking how the solutions we proposed in 2021 will hold up in the brave new world of multi-modal generative AI.
Seminar 12 (6 October): AI, IP and Indigenous Rights
  • We're very pleased to welcome Lynell Tuffery Huria, who’s Managing Partner at Kāhui Legal and a leading expert in Māori and indigenous intellectual property. Her talk topic: as AI enables individuals to scrape content from the internet, how are we going to make sure our taonga, including te reo Māori and mātauranga, are protected and not misused or misappropriated?

Seminar 13 (13 October): Hate speech classification in Wellington and India
  • Today we’ll hear about two projects on hate speech classification. They both focus on the hate speech classifiers deployed by social media platforms. At present, the public don’t know much about how these classifiers are trained, or how well they perform. The motivation behind both projects is to bring the process of training hate speech classifiers into a more public arena - specifically, to have members of the public involved in building the training sets for these classifiers, and thus in framing the definitions of hate speech. (A rough sketch of how such a training set feeds a classifier follows after the speaker list below.)
    • Tapabrata Rohan Chakraborty (visiting from UCL and the Turing Institute) will describe a project building a training set for hate speech in political discussions in India.
    • Matthew Edmundson (an honours student at Vic) will describe a project building a training set for hate speech targeted at the Rainbow community, here in Wellington.
    • Ali Knott (who's involved with both projects) will say something at the start about the motivation behind these projects.
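  • To make the idea of ‘building the training set’ concrete, here is a minimal sketch of how a community-annotated set of labelled posts could feed a hate speech classifier, using scikit-learn. The example posts, labels and model choice are placeholders, not details of either project.

```python
# A minimal sketch, assuming scikit-learn: a publicly built set of labelled
# posts (text, label) trains a simple text classifier. The examples and model
# choice are placeholders, not details of the Wellington or India projects.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Each item is (post text, label); the labels encode the definition of hate
# speech that the community annotators have collectively agreed on.
training_set = [
    ("example of an abusive post targeting a group", 1),
    ("example of an ordinary, non-abusive post", 0),
    # ... many more annotated examples contributed by the public ...
]

texts, labels = zip(*training_set)

classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(texts, labels)

# The trained classifier's behaviour can then be traced back to the public
# annotations that framed the definition of hate speech for the project.
print(classifier.predict(["a new post to check"]))
```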
Seminar 14 (20 October): Human perceptions of AI minds(?)
  • Dr David Carmel (School of Psychology, VUW). In 2021, a Google engineer (who really should have known better) concluded that the chatbot he’d been working on was sentient. No one else thought so (and he ended up getting fired). But the idea that AI might eventually become sufficiently sophisticated to not only simulate a conscious mind but to actually have one is widespread. The thing is, we don’t know what makes anything – including our own brains – conscious; so we have no way of knowing when and how AI might meet whatever conditions this requires. People do have a strong tendency to anthropomorphise, though, so in practical terms, the question boils down to how humans decide what to accept as evidence for the presence of intelligence/sentience/consciousness. In this talk I'll delve into the psychology of mind perception, and what our current understanding implies about how we attribute aspects of mind to AI.
Seminar 15 (27 October): AI and Culture
  • Fictional History in Words and Images (Paddy Twigg, VUW School of Languages and Cultures). Paddy Twigg transitioned from general practice medicine a few years ago and is now a PhD candidate in the School of Languages and Cultures. He is investigating how the quality of literary translation can be assessed, focusing on the works of the Italian writer Italo Calvino. It was while preparing a presentation on one of Calvino's novels, as part of this work, that he discovered DALL-E.

  • AI is a Myth (Geoff Stahl, VUW School of Media and Communication). In this discussion, I want to draw on the work of selected cultural archaeologists and media historians to situate AI in relation to narratives that, in both the distant and recent past, tend to emerge around new technologies. As part of this, I will discuss the uncanny, creepiness, the technological sublime, affective intensities and algorithmic agency, referring to AI in the context of social media platforms (i.e. TikTok, Instagram and Spotify), where these dimensions play out in intriguing and fraught ways.