
Nark: How (and why) we used AI to recreate a dead man's voice

1:42 pm on 29 October 2025

Ross Appelgren (pictured) was convicted - twice - of the murder of fellow inmate Darcy Te Hira in Mt Eden Prison. Photo: Nick Monro / Julie Appelgren

One of journalism's most repeated purposes is to give voice to the voiceless. But what if the person speaking up has died while waiting to tell their story? Is it OK to use artificial intelligence to give that voice to someone trying to remove the darkest of stains from their reputation?

In RNZ's epic new podcast Nark, Mike Wesley-Smith has spent more than two years investigating the murder of Darcy Te Hira and the conviction of Ross Appelgren - twice - for the crime. Appelgren served eight years but spent the rest of his life insisting he was innocent. Before he died in 2013, he'd given interviews, signed affidavits, and even written his own memoir saying, "I did not kill Darcy Te Hira". So much evidence and information existed in written form, but not in audio. We didn't have him saying those words in his own voice.

Could we (should we) let Ross Appelgren speak for himself?

In the past, we would have used an actor to read these words and all the others Appelgren had written in his lifetime. But during the making of Nark, AI's ability to clone voices advanced significantly, to the point where we wondered whether we might use recordings of Appelgren's voice from TV and radio interviews to teach an AI voice generator to speak like him.

As far as we've been able to find out, this is a first: a convicted murderer has never before posthumously pleaded their innocence via an AI voice.

When Wesley-Smith first tried the technology in mid-2024, feeding three minutes of Appelgren recordings into an AI tool, the result was rough and unconvincing. By early 2025 the technology had advanced to the point where Appelgren's widow, Julie, could hear the AI voice and say it sounded just like him. Julie thought it was a great idea to use the voice in the podcast. Appelgren did most of his radio interviews on Radio Pacific. The station's gone now, but the rights to its audio are owned by Mediaworks, who kindly gave their approval. So the answer to our first question was yes, we could do it.


Nark host Mike Wesley-Smith. Photo: RNZ / MARK PAPALII

The second question, whether we should do it, was harder. Certainly, the issue was important. A murder conviction is as bad as it gets, and the evidence uncovered by Appelgren and his lawyers during his lifetime had convinced two Governors-General to send the case back to the Court of Appeal. This is a live case, even though the victim and the accused are both dead. The evidence uncovered by Wesley-Smith raises even more important questions about the convictions.

The podcast is an investigation, not a campaign. It doesn't draw a conclusion; that's left to the courts. But is it not fair, with the consent of his loved ones, to give Appelgren his voice in this matter? We went to great lengths to track down the nark to give him a voice. To give voice to Te Hira's wife, Suzanne, who had never been asked for comment before. To let all those involved speak and be seen. Why not Appelgren? (It's worth noting that we still used actors for many other voices in Nark and will do so in other podcasts.)


Darcy Te Hira's wife Suzanne Young with a photo of the couple. Photo: RNZ / MARK PAPALII

The question went to RNZ's AI working group, which debated the request. Ultimately, they approved it. We all agreed on some strict conditions. First and foremost, we needed consent. Both Julie and Appelgren's estate signed their approval. Second, the voice had to sound convincing to those who knew Appelgren. As you've heard, Julie said it sounded just like him. Third, it could only say words he wrote or said during his lifetime. Nothing would be imagined.

How did we do it?

Remarkably, we needed only a minute of Appelgren's voice from his radio interviews. It turned out that a small amount of good-quality audio produces a more accurate voice than a larger amount of lower-quality audio.

Once we had a voice, we entered the text we wanted that voice to say. Finally, we had an actor read the text aloud, mimicking Appelgren's interviews, to ensure the right intonation and pace. The actor's read was uploaded to the AI tool and then played back in the AI voice. The accent turned out to be the hardest thing to get right; it took a while to stop the voice sounding English or American.
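For readers curious about what that workflow looks like in practice, here is a minimal sketch in Python. It assumes a hypothetical voice-cloning service with a REST API; the endpoint names, parameters and response fields are illustrative assumptions only, not the actual tool used on Nark.

```python
# Minimal sketch of the kind of pipeline described above, written against a
# hypothetical voice-cloning service ("VOICE_API"). All endpoints, parameters
# and response fields here are assumptions for illustration.
import requests

VOICE_API = "https://api.example-voice-tool.com/v1"  # hypothetical service
API_KEY = "YOUR_API_KEY"
HEADERS = {"Authorization": f"Bearer {API_KEY}"}


def clone_voice(sample_path: str, name: str) -> str:
    """Step 1: teach the tool a voice from a short, clean audio sample.
    Roughly a minute of good-quality audio was enough in practice."""
    with open(sample_path, "rb") as sample:
        resp = requests.post(
            f"{VOICE_API}/voices",
            headers=HEADERS,
            files={"sample": sample},
            data={"name": name},
        )
    resp.raise_for_status()
    return resp.json()["voice_id"]  # hypothetical response field


def speech_to_speech(voice_id: str, guide_read_path: str, out_path: str) -> None:
    """Steps 2-3: an actor reads the subject's own written words, mimicking his
    recorded interviews for intonation and pace; that guide read is then
    re-rendered in the cloned voice."""
    with open(guide_read_path, "rb") as guide:
        resp = requests.post(
            f"{VOICE_API}/voices/{voice_id}/convert",  # hypothetical endpoint
            headers=HEADERS,
            files={"audio": guide},
        )
    resp.raise_for_status()
    with open(out_path, "wb") as out:
        out.write(resp.content)  # rendered audio in the cloned voice


if __name__ == "__main__":
    voice_id = clone_voice("radio_interview_sample.wav", "archival-voice")
    speech_to_speech(voice_id, "actor_guide_read.wav", "ai_voice_line.wav")
```

The key design point is the second step: rather than generating speech directly from text, the actor's guide read carries the intonation and pacing, and the cloned voice supplies only the timbre.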

Others have gone further with AI, provoking serious questions. Famed interviewer Michael Parkinson's estate last year approved the creation of Virtually Parkinson, a podcast in which an AI replica conducted new interviews with current celebrities. Also last year, a Polish radio station broadcast an imagined interview with poet and Nobel Prize winner Wisława Szymborska, who died in 2012. This year, the family of Chris Pelkey used AI to create a video version of him, which gave a victim impact statement to an Arizona court, even though he was shot and killed in 2021. His family wrote the words spoken by the AI video. Our use of AI is more limited and discreet - only Appelgren's own words are used.

Does this use of AI give a voice to the voiceless?

In a sense. But only in a sense. The temptation is to think of this as giving Appelgren the chance to tell his own story.

But it's not that. Not quite. It's his words to the letter. Words he wanted people to hear. It's a voice that even his widow says sounds like him.

But it's not Appelgren. We bring his voice to life, but not him. We are not trying to simulate reality or pretend this voice is anything other than an impression.

We want to be very clear that this is a machine playing a role. It's a tool in the hands of a creator, not a creator itself. But hopefully this is also technology that helps listeners get closer to the story and the real lives turned upside down by an act of violence on a Sunday morning in 1985.
