Our Changing World: Putting AI to use in Aotearoa

6:38 am on 7 October 2025
Composite of kākā, mussels and technology.

Photo: RNZ / Unsplash

Twenty-three years ago, a small, captive-bred population of kākā were brought to Zealandia sanctuary, tucked in the hills above central Wellington.

Their reintroduction proved a remarkable success, and today kākā venture far beyond the predator-proof walls.

"The population's exploded," Dr Andrew Lensen says.

More kākā, more problems.

"Kākā like to start nesting in attics or pulling people's gutters apart or tearing down your favourite rose - as well as things like accidental poisoning from lead or from bait stations," Andrew says.

The kākā are now so numerous, and roam so far and wide, that tracking the birds by banding and resighting is becoming too difficult.

But ecologists are keen to understand how they are getting on, where they are hanging out, and what human-kākā conflicts are arising.

Enter: AI.

Follow Our Changing World on Apple, Spotify, iHeartRadio, or wherever you listen to your podcasts.

Andrew is a senior lecturer in AI at Te Herenga Waka-Victoria University of Wellington. One of several applied AI projects he has been working on involves training an AI model to use distinctive kākā features to tell one of Wellington's feathered inhabitants apart from another.

"Things like their beak patterns, the curvature of their beak, maybe their posture, any scratches on their beaks, that sort of thing."

He and his collaborators piloted the idea using photo-booth-like feeder boxes, with a GoPro inside, to take headshots of the birds when they came to feed. Now they're scaling up, Andrew says. "We're getting more cameras out there, we're moving beyond feeder boxes to more realistic backgrounds with trees and things."
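The project's own code isn't described here, but one common way to build this kind of individual-identification model is to fine-tune a pretrained image classifier on labelled headshots. The minimal sketch below assumes photos sorted into one folder per known bird; the folder name, bird count and training settings are illustrative, not details from the Wellington project.

```python
# A minimal sketch, not the project's actual code: fine-tuning a pretrained
# image model to predict which individual kākā appears in a headshot.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

NUM_BIRDS = 50  # hypothetical number of individually known birds (one folder each)

# Standard preprocessing for an ImageNet-pretrained backbone
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Assumes one sub-folder of headshots per bird, e.g. data/kaka_headshots/bird_017/
dataset = datasets.ImageFolder("data/kaka_headshots", transform=preprocess)
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)

# Reuse a pretrained backbone and swap the final layer so it predicts
# which individual bird is in the photo.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, NUM_BIRDS)

optimiser = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for images, bird_ids in loader:  # one pass over the labelled headshots
    optimiser.zero_grad()
    loss = loss_fn(model(images), bird_ids)
    loss.backward()
    optimiser.step()
```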

Kākā are now a common sight in Wellington City. Photo: Judi Lapsley Miller

It's just one of the many surprising ways in which AI is being used in New Zealand - even as the public remains distrustful of it.

Andrew believes that this lack of trust partly comes down to a lack of understanding about AI, how it works, and its different forms and applications.

His own research focuses on explainable AI, which aims to make machine learning systems more transparent.

Dr Andrew Lensen, senior lecturer in AI and programme director of AI at Victoria University of Wellington. Photo: RNZ / Claire Concannon

Artificial neural networks are loosely modelled on the structure of our own brains - the original neural network - and on the way information flows through it to let us make decisions. Data is fed into the artificial network and processed through layers of connected nodes, before the answer comes out at the other end.
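As a toy illustration of that flow (not anything from Andrew's research), a few input numbers can be pushed through two layers of weighted connections to produce an answer. The weights below are random stand-ins for what training would normally learn.

```python
# Toy forward pass: data goes in, flows through connected nodes, an answer comes out.
import numpy as np

def relu(x):
    return np.maximum(0, x)  # a common "activation" applied at each node

rng = np.random.default_rng(0)

# Random weights standing in for what training would normally learn.
weights_hidden = rng.normal(size=(4, 8))   # 4 inputs -> 8 hidden nodes
weights_output = rng.normal(size=(8, 2))   # 8 hidden nodes -> 2 possible answers

features = np.array([0.3, 0.9, 0.1, 0.5])  # e.g. measurements taken from a photo

hidden = relu(features @ weights_hidden)   # information flows through the first layer
scores = hidden @ weights_output           # ...and arrives at the other end
print("Predicted class:", scores.argmax()) # the network's "answer"
```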

Deep learning neural networks are a complex version of this artificial intelligence, with many layers of connected nodes. They can be quite good at what we ask them to do, and spit out great answers, but how they actually do this is a 'black box', Andrew says.

"It's very hard for you or me or even an expert in deep learning to look at those numbers and figure out what it's doing and how it's going from, say, being given a photo of an animal to saying, that's a bird or that's a cat."

Explainable AI uses less complex models that can be understood, or probes the more complex models to get clues as to how they work. But these simpler models may not work as well, Andrew says. "There's this trade-off, right, between models that are really high-performing and ones that we can sort of trust."
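A small sketch of that trade-off, using off-the-shelf scikit-learn tools rather than Andrew's own methods: a shallow decision tree whose learned rules can simply be printed and read, next to a higher-capacity neural network that has to be probed indirectly for clues about which inputs it relies on.

```python
# Sketch of the interpretability trade-off on a stock dataset; illustrative only.
from sklearn.datasets import load_iris
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Interpretable model: the learned rules can be printed and read directly.
tree = DecisionTreeClassifier(max_depth=3).fit(X_train, y_train)
print(export_text(tree))

# Higher-capacity "black box": often more accurate, but its weights are not
# human-readable...
mlp = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
mlp.fit(X_train, y_train)

# ...so we probe it instead, shuffling each input in turn to see how much the
# predictions degrade - a clue to which features the model relies on.
probe = permutation_importance(mlp, X_test, y_test, n_repeats=20, random_state=0)
print("Feature importances:", probe.importances_mean.round(3))
```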

That trust is crucial when AI is used in applied research, he says. "The worst thing you can do as a computer scientist is go along and say, 'Hey, I made you this cool thing, just trust it, it's fine'."

Verian's 2024 'Internet Insights' report indicated that New Zealanders are more concerned than excited about AI. In this survey of 1000 people, 68 percent were very or extremely concerned about AI's use for malicious purposes, and 62 percent were concerned that there was insufficient regulation and law. Just under half (49 percent) were very or extremely concerned about AI's impact on society, and only a quarter of respondents said they knew a fair amount about AI.

But AI is quickly becoming inescapable. It's remaking our online landscapes and its use is increasing in the New Zealand public service and business sectors too, including at RNZ.

Could it also solve some of our trickiest questions?

AI and aquaculture

Victoria University of Wellington's Professor Bing Xue is part of a large team aiming to use data science and AI to improve the productivity of aquaculture in New Zealand.

The $13 million, seven-year research project (awarded in 2019) is looking to optimise the farming of green-lipped mussels, king salmon and oysters, using a range of tools to do so.

That includes developing a detection and alerting system to let the farmer know if a mussel buoy is lost, which would eliminate labour-intensive manual checking. They've also been gathering data about the lifecycle of the mussel - from spat sourced off Ninety Mile Beach, to fully grown adults - to investigate how to make this growth more efficient.

The project combines AI with real-world knowledge: for Bing, getting out to the farms and speaking to marine biologists, technicians, engineers and farmers has been crucial. "I learned things that I would never know when I work only [with] AI in lab," she says. "If we want to solve a real-world problem, we really have to work with the experts in that particular area."

Professor Bing Xue at a mussel farm research site. Photo: Supplied

Modelling measles

That's also the case for Dr Fiona Callaghan, although her goal is more about preparation than productivity.

Fiona is the chief advisor for epidemiology at the Public Health Agency within the Ministry of Health. She's been working with infectious disease experts at PHF Science (formerly ESR) to model an outbreak of measles in different regions of New Zealand.

Our last large measles outbreak was in 2019, when more than 2000 people were infected, and more than 700 people ended up in hospital.

Since then, small numbers of cases have popped up from time to time, without resulting in a large outbreak. There are currently a number of cases reported in Northland. Measles is highly infectious, and vaccination coverage in New Zealand is currently too low to prevent an outbreak.

Using a digital replica of New Zealand, known as ALMA, Fiona and her colleagues ran various simulations of measles outbreaks in different regions and with different starting points. Then they analysed the data to understand more about what might help to slow or stop the spread.

Something they learned from these simulations is that it is still worthwhile to continue vaccinating, even when the disease is already spreading. That's helpful to plan the best use of resources in a real outbreak, Fiona says.
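ALMA itself is far richer than anything that fits here, but the underlying idea can be sketched with a very simple compartment-style outbreak model: run the same simulated outbreak with and without vaccination continuing after the first cases appear, then compare the total number of infections. Every number below is an illustrative assumption, not a Ministry of Health figure.

```python
# A much-simplified outbreak sketch (not ALMA): a discrete-time SIR-style model
# with optional ongoing vaccination. All parameters are illustrative guesses.
def simulate(population=100_000, initially_immune=0.80, seed_cases=5,
             r0=12.0, infectious_days=8, daily_vaccinations=0, days=180):
    susceptible = population * (1 - initially_immune) - seed_cases
    infectious = float(seed_cases)
    total_infected = float(seed_cases)
    beta = r0 / infectious_days  # transmissions per infectious person per day
    for _ in range(days):
        # New infections depend on how many people are infectious and what
        # fraction of the population is still susceptible.
        new_cases = min(susceptible, beta * infectious * susceptible / population)
        recoveries = infectious / infectious_days
        vaccinated = min(daily_vaccinations, susceptible - new_cases)
        susceptible -= new_cases + vaccinated
        infectious += new_cases - recoveries
        total_infected += new_cases
    return round(total_infected)

print("No vaccination during the outbreak:", simulate())
print("1,000 vaccinations a day:          ", simulate(daily_vaccinations=1000))
```

In this toy model the second run infects far fewer people, echoing the finding that continuing to vaccinate mid-outbreak is still worthwhile.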

Fiona Callaghan, chief advisor of epidemiology for the Ministry of Health’s Public Health Agency. Photo: Supplied

The benefit-harm balance

Beyond applied projects, Andrew Lensen is a director of an AI consultancy, LensenMcGavin, and also researches AI's social and ethical implications. It's something he spends a lot of time thinking about, "often at 2am", he says.

Bias in decision-making, deep fakes, and the impact that American-created AI is having on our culture are all things that keep him awake at night. He and other AI experts recently wrote an open letter to all New Zealand political party leaders, asking for bipartisan support of increased regulation of AI in New Zealand.

For Andrew, this is an important step towards a healthy future for AI in New Zealand. "If we have these conversations about regulation and have these conversations about how it's used and what is okay in our culture, then [we can] put the mechanisms in place so that we can use it appropriately and maximise those benefits and minimise those harms."

Sign up to the Our Changing World monthly newsletter for episode backstories, science analysis and more.

  • Stricter rules introduced for using facial recognition tech
  • Monitoring the Makarora mohua
  • Machine learning for environmental data and needle free injections
  • Digital twins and beating hearts