
Using ChatGPT in Auckland Offices: Legal Risks of Corporate Data Disclosure
Let me paint you a picture. It’s a Tuesday afternoon in a busy Auckland CBD office. A marketing manager is drafting a proposal and, wanting to speed things up, pastes a client’s financial summary into ChatGPT and asks it to “make this sound more professional.” In about thirty seconds, she has a polished paragraph. She feels productive. She has no idea she may have just created a serious legal problem for her company.
I’ve heard variations of this story more times than I can count. Since ChatGPT exploded onto the scene, New Zealand workplaces — particularly in Auckland, where the corporate sector is densely packed and fast-moving — have quietly started using AI tools as everyday shortcuts. And honestly, it makes sense. The tools are brilliant. They save time. They reduce cognitive load. But when it comes to corporate data, the convenience can come with a legal sting that most people never see coming until it’s too late.
The Privacy Act 2020: New Zealand’s Baseline You Can’t Ignore
New Zealand’s Privacy Act 2020 is the foundation of data protection law in this country, and it applies directly to how businesses handle personal information. When an Auckland employee inputs data into ChatGPT — say, a client’s name, email, financial position, or health information — that data may be transmitted to OpenAI’s servers, which are located offshore. Under Information Privacy Principle 12, organisations must take reasonable steps to ensure that personal information shared overseas receives comparable protection to what’s required under New Zealand law.
Here’s where it gets complicated. OpenAI, the company behind ChatGPT, is based in the United States. Unless your business has entered into a specific enterprise agreement with OpenAI, the standard consumer-facing or free tier of ChatGPT doesn’t come with the kind of data processing agreements that New Zealand’s Privacy Act arguably demands. If personal information about a client is sent offshore without proper safeguards, your company could be in breach — regardless of whether any harm actually occurred. The Office of the Privacy Commissioner takes a proactive stance on compliance, and ignorance of the tools your staff are using isn’t a defence.
Confidential Business Information: A Different Kind of Risk
Beyond privacy law, there’s a broader issue that many Auckland business owners haven’t fully grappled with: the confidentiality of commercial information. Think about what gets typed into ChatGPT in a typical corporate week. Strategy documents. Merger discussions. Supplier contracts. Staff performance reviews. Pending litigation details. Financial forecasts that haven’t been disclosed publicly.
When an employee pastes this kind of content into ChatGPT (outside of an enterprise plan with specific data protections), that content may be used by OpenAI to train future models. OpenAI’s terms of service have evolved over time, but historically, user inputs could be used for model improvement. Even if that’s been adjusted, the act of sending confidential data through a third-party platform without explicit contractual protections is, at a minimum, a breach of your internal confidentiality obligations — and potentially a breach of duties owed to clients, partners, and shareholders.
For publicly listed New Zealand companies or companies with ASX dual-listings, there’s an additional layer: continuous disclosure obligations. If material non-public information is shared externally — even accidentally, even with an AI — the implications can stretch into securities law territory. That’s a conversation no corporate lawyer wants to have after the fact.
Common Scenarios in Auckland Offices That Carry Real Risk
To make this more concrete, here’s a breakdown of common use cases that Auckland professionals engage in daily, alongside the legal exposure each carries:
| Use Case | Type of Data Involved | Potential Legal Risk |
|---|---|---|
| Drafting client proposals | Client financial data, personal details | Privacy Act breach, confidentiality breach |
| Summarising internal reports | Corporate strategy, personnel info | Breach of fiduciary duty, IP exposure |
| Legal document drafting | Litigation details, contract terms | Legal professional privilege issues |
| HR-related writing | Employee performance, salaries | Privacy Act breach, Employment Relations Act issues |
| Financial analysis | Non-public financial figures | Continuous disclosure obligations (listed companies) |
None of these use cases are obscure. They’re happening right now, probably in offices within a block of wherever you’re reading this. The scary part isn’t malicious intent — it’s the total absence of awareness about what’s actually being shared and with whom.
What About ChatGPT Enterprise? Is That the Solution?
A lot of businesses ask me this: if we just pay for the enterprise version, are we covered? The honest answer is: partially, and it depends heavily on how you set it up and what your obligations are.
ChatGPT Enterprise and OpenAI’s API-based products do offer stronger data protections. OpenAI states that enterprise customers’ data is not used to train models, and these plans typically come with the ability to sign a Data Processing Agreement (DPA). For New Zealand businesses, a DPA is crucial because it formalises how your data will be handled and creates contractual accountability if something goes wrong. It also goes some way toward satisfying the offshore disclosure requirements under the Privacy Act 2020.
However, a DPA is not a magic shield. Your organisation still needs to ensure that the AI is only being used for appropriate purposes, that staff are trained on what can and cannot be entered into the tool, and that your own internal policies reflect the risk profile of AI use. The enterprise tier reduces risk. It does not eliminate it.
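Staff training can be reinforced with lightweight tooling. As one illustration of the idea, the sketch below flags obvious personal information in text before it is sent to an external AI tool. Everything here is an assumption for demonstration purposes: the regex patterns are rough (email addresses, NZ-style phone numbers, and an IRD-style number format), and a real deployment would rely on a dedicated data loss prevention product rather than a hand-rolled filter.

```python
# Rough sketch of a pre-submission screen that flags obvious personal
# information before text leaves the organisation for an AI tool.
# Patterns are illustrative assumptions only; a real deployment would
# use a proper data loss prevention (DLP) solution.

import re

PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    # NZ-style phone numbers, e.g. "021 555 0123" or "+64 9 123 4567" (assumed format)
    "nz_phone": re.compile(r"\b(?:\+64|0)[ -]?\d{1,2}[ -]?\d{3}[ -]?\d{3,4}\b"),
    # IRD-style number format such as "123-456-789" (assumed, for illustration)
    "ird_like_number": re.compile(r"\b\d{2,3}-\d{3}-\d{3}\b"),
}

def flag_pii(text: str) -> list[str]:
    """Return the names of the PII patterns found in the text."""
    return [name for name, pattern in PII_PATTERNS.items() if pattern.search(text)]

# Example: a snippet of a client proposal about to be pasted into a chat tool.
hits = flag_pii("Client: Jane Doe, jane.doe@example.co.nz, ph 021 555 0123")
```

A screen like this doesn't replace policy or training, but it turns an invisible risk into a visible prompt: the employee sees *why* the paste was blocked before the data leaves the building.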
Legal Professional Privilege and AI: A Unique Problem
For Auckland law firms and in-house legal teams, there’s a very particular concern that deserves its own spotlight: legal professional privilege. Privilege protects confidential communications between lawyers and clients from being disclosed in legal proceedings. But privilege can be waived — intentionally or accidentally — if confidential communications are shared with third parties without proper controls.
If a lawyer pastes privileged advice, client instructions, or litigation strategy into ChatGPT via a consumer interface, that could be interpreted as a voluntary disclosure to a third party, potentially waiving privilege over that information. This isn’t a theoretical risk. Courts in various jurisdictions are already beginning to grapple with these questions, and New Zealand courts will almost certainly follow as AI use in legal practice becomes more common.
Law firms operating out of Auckland — particularly those with large commercial or litigation practices — should already be drafting AI usage policies that explicitly address privilege. If you haven’t done that yet, this article is your nudge to start.
Employment Law Dimensions: What Are Your Obligations to Staff?
There’s another angle here that often gets missed: the Employment Relations Act 2000 and the obligations businesses have to their own employees around data. If a manager uses ChatGPT to help write a performance review, and pastes in specifics about an employee’s conduct, that employee’s personal information is being shared outside the organisation. The employee almost certainly didn’t consent to that. And under the Privacy Act, employees have a right to access and correct personal information held about them — including, arguably, information shared externally on their behalf.
This creates a genuinely tricky situation. Businesses need to either ensure staff data doesn’t go into AI tools, or develop incredibly clear frameworks that address consent, access, and retention. Most Auckland HR departments haven’t had this conversation yet. That gap is a liability waiting to materialise.
What Regulators and Courts Are Starting to Say
Globally, regulators are catching up fast. Italy temporarily banned ChatGPT over GDPR concerns in 2023. Canada’s Privacy Commissioner launched investigations into OpenAI. South Korea fined a major tech company for AI-related data handling failures. New Zealand’s Office of the Privacy Commissioner has signalled increasing interest in the intersection of AI and privacy, and its guidance on AI tools is evolving quickly.
New Zealand courts haven’t handed down landmark rulings on AI-related data disclosure yet, but the legal infrastructure to do so already exists. The Privacy Act 2020 already covers the scenarios described in this article. What’s missing isn’t law — it’s enforcement experience and case precedent. That won’t be missing forever.
Practical Steps Auckland Businesses Should Take Right Now
So what do you actually do about all of this? Here’s a practical list — not theoretical, not aspirational, but genuinely actionable for Auckland businesses of almost any size:
- Conduct an AI audit: Find out which AI tools your staff are currently using. You might be surprised. Many employees use personal ChatGPT accounts that your IT department has no visibility over.
- Develop an AI usage policy: This document should clearly define what data can and cannot be entered into AI tools, which platforms are approved, and what the consequences of policy breaches are.
- Engage a privacy lawyer: If your business handles significant volumes of personal information or operates in a regulated sector (financial services, healthcare, law), you need specific legal advice — not just general guidance.
- Review your client contracts: Check whether your agreements with clients contain confidentiality clauses that could be triggered by AI use. Many standard commercial agreements do.
- Train your staff: A policy that nobody has read is useless. Run a short training session — even an hour — that helps employees understand why AI data risks matter and what to watch for.
- Negotiate enterprise agreements carefully: If you’re considering an enterprise AI subscription, have a lawyer review the DPA before signing. Generic terms may not meet your obligations.
- Consider data classification: Implement a simple classification system (e.g., public, internal, confidential, restricted) and create clear rules about which categories may be used with AI tools.
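To make the classification step concrete, here is a minimal sketch of how such a system could gate AI tool use. The labels, the tool names, and the policy ceilings are all hypothetical assumptions for illustration — they are not drawn from any vendor’s API or any official standard, and your own policy would set different ceilings based on legal advice.

```python
# Minimal sketch of a data-classification gate for AI tool use.
# The labels, tool names, and policy ceilings are illustrative
# assumptions, not a real vendor API or official standard.

from enum import IntEnum

class Classification(IntEnum):
    PUBLIC = 0
    INTERNAL = 1
    CONFIDENTIAL = 2
    RESTRICTED = 3

# Policy: the highest classification each (hypothetical) tool tier may receive.
AI_TOOL_POLICY = {
    "consumer_chatgpt": Classification.PUBLIC,      # free/consumer tier: public data only
    "enterprise_chatgpt": Classification.INTERNAL,  # enterprise tier with a signed DPA
}

def may_submit(tool: str, classification: Classification) -> bool:
    """Return True if data of this classification may be sent to the named tool."""
    ceiling = AI_TOOL_POLICY.get(tool)
    if ceiling is None:
        return False  # unknown or unapproved tools are blocked by default
    return classification <= ceiling

# Example: a proposal containing client financials is CONFIDENTIAL,
# so this policy blocks it from both tiers.
assert not may_submit("consumer_chatgpt", Classification.CONFIDENTIAL)
assert not may_submit("enterprise_chatgpt", Classification.CONFIDENTIAL)
```

The design point is the default-deny rule: anything your policy hasn’t explicitly approved is blocked, which mirrors how the legal risk actually works — silence in your governance framework is exposure, not permission.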
The Culture Problem Nobody Wants to Talk About
Here’s the part that doesn’t make it into most legal guides: the culture problem. In Auckland’s fast-paced corporate environment, there’s immense pressure to be productive, to move quickly, and to use every tool available. ChatGPT feels like a competitive advantage. And it is — when used correctly. But in workplaces where speed is rewarded and compliance is seen as a drag, staff will default to convenience every time unless leadership actively creates a different culture.
I’ve seen businesses where the CEO uses ChatGPT to draft board papers, while simultaneously wondering why their staff don’t take data governance seriously. The tone is set at the top. If senior leaders treat AI tools as zero-risk productivity hacks, that attitude flows downward. Responsible AI use has to be genuinely valued, not just posted on an intranet page that nobody reads.
Final Thoughts: Getting Ahead of This, Not Behind It
The good news is that none of this is insurmountable. ChatGPT and similar AI tools can be used in Auckland offices legally, ethically, and effectively — but it requires intention. It requires acknowledging that these tools interact with real legal frameworks, and that those frameworks carry real consequences when ignored.
New Zealand businesses that get ahead of AI governance now will have a significant advantage when regulators inevitably tighten their expectations. Those that wait for an enforcement action or a client complaint to motivate change will find the process much more painful and much more expensive. The legal risks of corporate data disclosure through AI are not hypothetical. They are present, they are growing, and for Auckland offices that handle sensitive client and commercial information, they deserve serious attention today.
If you’re unsure where your business stands, the best first step is simply to ask the question internally: what are our people actually typing into these tools? You might find the answer prompts a very useful conversation.