Woah woah woah on the rorohiko: AI and Local Government in Porirua

This post is a bit longer than usual, but it covers an important issue that I think deserves the time.

Councils across Aotearoa are starting to experiment with Artificial Intelligence (AI): from email drafting and data summaries to more complex uses like property reports and planning tools. 

Here in Porirua, we need to be asking not just whether we can use AI, but whether we should, and how. As someone who sees both the practical pressures facing our council staff and the responsibility we carry to uphold public trust, I want to share some reflections.

Across Aotearoa, public anxiety about the rise of AI is both real and justified. There is a growing sense that while the technology is rapidly being adopted, the moral and legal frameworks for its use aren’t keeping up.

For instance, the Office of the Privacy Commissioner reported earlier this year that nearly half of New Zealanders say they are more concerned than excited about AI.

As I see it, there are three main issues:

First is the potential misuse of private information. If AI is trained on personal data, whether deliberately or through a misunderstanding of terms and conditions, it risks breaching privacy law and violating the trust that individuals place in our Council.

The second concern is error: AI tools, particularly language models, can make plausible-sounding mistakes that lead to very real harm. A misinterpreted report, a reworded summary, or a hallucinated reference can all have consequences when used in a legal or policy-making context.

The third concern is structural: while AI is often sold on promises of increased efficiency or productivity, those gains rarely translate into better wages, conditions or job security for the people who work within the systems being ‘optimised’. We have seen this before with other waves of automation.

This raises a foundational question for councils: are we using AI to build better services for the public, or simply chasing short-term efficiency at the expense of trust and fairness?

A council staff member I know, I’ll call her Kim (though she doesn’t work for Porirua), made a crucial point that stuck with me. She said, “AI as most people understand it right now is basically a language model. It doesn't actually think. It just remixes words based on what you tell it to do.”

This was a helpful reminder: for all the hype around “artificial intelligence”, what we are really dealing with in most cases is generative language modelling. These models are highly capable at producing fluent, convincing-sounding responses in natural language, but they do not understand meaning in the way that people do. They do not check facts against reality; they do not know whether a sentence they have written is accurate, misleading or completely made-up.

What they are good at is reformatting. Given a prompt that includes some structured or unstructured information (for example, meeting notes, policy briefings or lists of community priorities), a language model can rearrange and rewrite that information into different formats. It might produce a press release, a Q&A document, or a simplified version for a social media post. But in doing so, it does not know what it is saying; it just predicts what words should come next, based on patterns in its training data.

That distinction matters. If we are going to use AI in council settings, we have to be clear about what the tool is doing. It is not analysing; it is not verifying. It is responding to the prompt, no more, no less. The skill lies in how the human writes the prompt, and how they interpret and fact-check the result.
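To make the "predicting what comes next" idea concrete, here is a deliberately tiny sketch in plain Python. It is a toy word-remixer, not how a modern language model actually works internally (those use neural networks trained on vast corpora), but it illustrates the same basic principle: the output is driven entirely by statistical patterns in the source text, with no understanding of what the words mean.

```python
import random
from collections import defaultdict

# Toy illustration only: a bigram "remixer", not a real language model.
# It records which word follows which in the source text, then generates
# new text by repeatedly picking a plausible next word from those patterns.

def build_bigrams(text):
    """Map each word to the list of words observed to follow it."""
    words = text.split()
    following = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        following[current].append(nxt)
    return following

def generate(following, start, length=8, seed=0):
    """'Remix' the source by repeatedly choosing a statistically plausible next word."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        options = following.get(out[-1])
        if not options:
            break
        out.append(rng.choice(options))
    return " ".join(out)

notes = ("council staff reviewed the report and council staff "
         "summarised the report for the community meeting")
bigrams = build_bigrams(notes)
print(generate(bigrams, "council"))
```

Note that every output is fluent-looking only because the fragments come from fluent input; the program has no idea whether what it produces is true, which is exactly the limitation Kim was pointing to.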

This becomes even more important when we start talking about language that has legal weight. Imagine a situation where AI is used to draft or summarise a document that relates to planning, zoning or property rights. A reworded phrase might look harmless (a simplified sentence, say, or a paraphrased version of a clause), but in a legal context, even a subtle change in meaning can shift liability, create confusion or open the door to dispute.

For councils, which operate in an environment where public decision-making must be transparent, lawful and reviewable, the use of AI introduces a layer of risk that cannot be ignored. A sentence that has been rewritten by an AI might still be attributed to council in an official capacity, but if that sentence was inaccurate, who bears responsibility?

If we are not extremely careful, we could find ourselves in situations where an AI-generated sentence becomes the subject of a legal challenge. And because AI does not explain why it reworded something, or what assumptions it made, it becomes very difficult to trace the reasoning or defend the outcome. That opacity is unacceptable in a public context.

We must never allow a tool to speak with the authority of council without a human first checking, verifying and understanding the words it is putting forward.

Contrast that risk with a very different story. My friend Rochelle, who works in the education sector, has also spent time as an AI trainer. She understands how the technology works under the hood, and she is very clear-eyed about its limitations.

She told me about a teacher who was struggling with the administrative load of writing up daily "teaching stories" for each of her pupils. These are reflective records of how tamariki are learning, what progress they are making, and what next steps might be; they are a key part of the role of kaiako.

The teacher was diligent and had deep knowledge of her students, but the task of turning handwritten notes into neatly formatted learning stories, in consistent language and layout, was time-consuming and exhausting. So she tried something new: she fed her own personalised notes into an AI tool and asked it to reformat them into the right structure.

She did not use AI to write anything new. She did not ask it to make stuff up. She just let the model do the legwork of turning raw notes into consistent outputs. She still reviewed each one, made corrections and ensured it reflected her voice and judgement. But it made a real difference to her workload, and the quality of information going home to whānau actually improved, because the outputs were easier to read, share and compare.

This is, in my view, an ideal use of AI. The teacher owned the data. The AI was used purely to support her mahi, not replace it. The outcome was checked by a human and improved by the process. That is what responsible AI looks like.

Now let us consider a more complex, hypothetical example from within local government. Suppose Porirua City Council wanted to explore using AI to help process Land Information Memoranda (LIMs). These are essential documents that pull together data on a particular property: zoning, flood risk, historic consents, infrastructure records and more. As anyone in the sector knows, LIMs are labour-intensive to produce, in part because the information they rely on comes from multiple systems, created in different eras, often across many decades.

In theory, AI could help speed up the process by collating and reformatting the available data. It could generate draft reports that a staff member could review and finalise. It would not be inventing new information, just summarising what council already holds. Sounds reasonable, right?
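As a thought experiment, the shape of such a workflow might look like the sketch below. Everything here is hypothetical: the record types, field names and sign-off step are invented for illustration and do not describe any real council system. The point is the structure of a human-in-the-loop process: every AI-assisted draft carries references back to its source records, and nothing can be released without explicit human review.

```python
from dataclasses import dataclass

# Hypothetical sketch only: invented record types and workflow, not a
# real council system. It shows the shape of a human-in-the-loop process:
# drafts are collated from existing records (nothing invented), each
# section cites its source, and release requires explicit human sign-off.

@dataclass
class SourceRecord:
    system: str      # e.g. "consents database" (illustrative name)
    year: int
    summary: str

@dataclass
class LimDraft:
    property_id: str
    sections: list
    sources: list
    reviewed_by: str = ""   # empty until a human signs off

def draft_lim(property_id, records):
    """Collate existing records into a draft.
    Every section points back to a source record a human can check."""
    sections = [f"[{r.system}, {r.year}] {r.summary}" for r in records]
    return LimDraft(property_id, sections, records)

def release(draft):
    """Refuse to release any draft a human has not reviewed."""
    if not draft.reviewed_by:
        raise ValueError("Draft LIM has not been human-reviewed")
    return f"LIM {draft.property_id} released (reviewed by {draft.reviewed_by})"

records = [SourceRecord("engineering reports", 1987,
                        "Flood risk noted on lower terrace")]
draft = draft_lim("PROP-001", records)
draft.reviewed_by = "A. Staff Member"
print(release(draft))
```

The design choice that matters is the hard gate in `release`: the system is built so that skipping the human check is an error, not an option.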

But what happens if the AI misses something? What if a key sentence in an engineering report from the 1980s is summarised incorrectly? What if it rewords a note about flood risk in a way that changes the meaning? What if a checkbox gets left unticked, or a warning is hidden in a footnote that no one sees?

The consequences could be massive. A family might purchase the property on the strength of the LIM report. If it later floods, and they discover that the council did have the information, but it was not surfaced properly in the LIM, we are now in the realm of legal liability. That whānau might lose their home. They might not be able to get insurance. They might not receive a payout for repairs. They might be stuck with a property they cannot live in, and cannot sell.

The damage done is not just financial; it is emotional, relational and deeply personal. That is the real-world cost of getting AI wrong in a high-stakes context. No amount of saved staff time is worth that kind of error. And that is why, if we are going to explore these tools, we must proceed with care, accountability and a commitment to public good.

Hutt City has already begun trialling AI tools inside council operations. In one pilot, they found that staff were saving an average of 38 minutes a day, time that would otherwise have been spent on repetitive tasks like email drafting, summarising meetings or formatting reports. That might not sound dramatic, but it adds up to nearly 20 full working days a year, per person. That time can then be redirected to deeper, more strategic or people-facing mahi. Porirua is now beginning to explore what the appropriate use of AI might look like for us, and I believe it is the right time to set clear expectations for how we will approach it.

This is not just a question of internal policy. It is a matter of public trust. As a council, our use of AI must reflect the values of our city: transparency, accountability and equity (and we should not ignore the broader tikanga within te ao Māori that also intersect here). I believe we should adopt a high standard, not just doing what is technically possible or legally permitted, but what is ethically grounded and socially responsible.

Fortunately, some guidance already exists. In January 2025, the government released a Public Service AI Framework, which outlines five high-level principles aligned with the OECD: human-centred values, transparency, security, accountability and inclusion. The framework also encourages agencies to assess their readiness and maturity before adopting AI tools, and to build systems that are explainable, fair and auditable.

Alongside this, the Office of the Privacy Commissioner has produced guidance on AI and generative tools that reinforces the need for privacy-by-design, meaningful human oversight and clear documentation. If a council is using AI, we must be able to show our communities how their data is protected, how decisions are made, and who is ultimately responsible.

For me, there are several bottom lines that must apply.

First, a human must always be in the loop. That means no decision that affects people’s rights, property or livelihood should ever be made solely by an algorithm. AI can assist, but it cannot replace judgement.

Second, privacy must be honoured. We should never feed personal data into external systems, especially not public or proprietary tools, without explicit consent and appropriate safeguards. This is a matter of law, but also of principle.

Third, any system we use must be open to scrutiny. That means avoiding proprietary “black box” models whose inner workings cannot be reviewed, explained or challenged. If we are making decisions using AI, then the code, the datasets and the assumptions must be auditable, by our staff, our partners and, where appropriate, the public. Sunlight, as they say, is the best disinfectant.

Finally, we must be clear about what AI is doing and what it is not doing. It is a tool for curating, summarising and formatting data, not for inventing, interpreting or deciding. If we get that wrong, the consequences could be severe. But if we get it right, AI might help us reduce the burden of bureaucracy, free up human capacity and make some parts of the system work more smoothly for the people who rely on it most.

AI might help us work smarter, but only if we choose to use it wisely.

PS: This was getting too long, but as I alluded to earlier, I also have some specific concerns about the role of te reo me tikanga Māori in all of this that we should be mindful of.