The AUT Journalism, Media and Democracy (JMAD) research centre has published its first report examining AI use in New Zealand newsrooms. The report, AI, journalism and news media in Aotearoa New Zealand, is authored by JMAD co-director Dr Merja Myllylahti.
The report finds that AI tools are commonly used in newsrooms across many aspects of news production, but news audiences are likely unaware of when and how AI is involved.
AI has its benefits. It can help newsrooms widen or speed up their coverage. However, its use carries multiple risks. For example, news organisations don’t disclose all AI usage, and that can erode trust, says Myllylahti.
“News organisations don’t disclose how exactly AI is used in all content production: many editors feel that ‘the ship has sailed’ in terms of tagging or labelling AI content.
“It is clear that newsrooms in New Zealand have rapidly adopted a range of AI tools. While AI might be creating efficiencies in news production, reliance on AI tools creates a systemic risk for the media. If the AI bubble bursts, as many are predicting it will, or AI platforms mothball products or change their offerings, New Zealand’s news businesses will be exposed.”
AI is used to assist in searching for and researching news stories, summarising stories or large documents, transcribing interviews, checking spelling and grammar, and generating audio or video from text and vice versa.
AI is also used to write or to assist in writing news. For example, NZME’s BusinessDesk uses AI to write full articles from NZX stock market releases. Additionally, AI is used to tailor content on the NZ Herald homepage.
Stuff allows AI to write the first draft of an article that comes from a single source, such as fire services or police, but the stories are checked by humans prior to publishing.
New Zealand’s main news organisations, including NZME, Stuff, RNZ and TVNZ, have AI principles and ethics that emphasise human oversight and transparency around AI-created content. Myllylahti says it would be more transparent for news audiences if these principles and ethics were prominently linked from news homepages.
“Ethics and principles are important, but the gap for audiences is understanding how AI is actually being used, rather than how the organisation is approaching AI from a policy point of view.”
AI copyright protection missing from election year policies
While New Zealand newsrooms increasingly use AI tools, big tech companies use their content and data to train AI models without compensation. The report finds that, in contrast to Australia, New Zealand lacks AI regulation and copyright protection.
In Australia, the government has decided that big tech companies cannot have a copyright exemption. This means that AI companies cannot mine content creators’ data and content for free.
In 2026, New Zealand will hold a General Election, but none of the parties have yet made AI a central topic of their campaigns. Myllylahti says it is time for politicians to protect the rights of the country’s content creators.
“So far, New Zealand has failed to regulate big tech companies. We do have legislation in place to protect our content creators’ copyright. It is time to make sure the law is enforced properly.”