An AI use policy is now a basic governance document for Australian businesses, in the same category as a social media policy or a data handling policy. Not having one is not a neutral position — it is a decision to let employees make individual judgements about AI use without any organisational framework, which creates inconsistent practice, unclear accountability, and genuine legal exposure.
Most Australian businesses don’t have one yet. That creates a window to get ahead of the issue rather than respond to it after something goes wrong.
Why Your Business Needs an AI Use Policy Now
Three specific risks make an AI policy urgent:
- Confidentiality. Consumer AI tools like ChatGPT, Claude, and Gemini process and potentially store input data. An employee who pastes a client contract, medical record, financial model, or strategic plan into one of these tools may be breaching confidentiality obligations, client agreements, and, in some cases, regulatory requirements such as the Australian Privacy Principles under the Privacy Act 1988 (Cth).
- Accuracy and liability. AI tools hallucinate — they produce confident, plausible-sounding outputs that are sometimes entirely wrong. An organisation that publishes AI-generated content, provides AI-generated advice, or makes AI-assisted decisions without verification carries the liability for those outputs.
- Attribution and disclosure. Increasingly, clients, regulators, and the public expect to know when AI was involved in producing a document, recommendation, or communication. The absence of disclosure where it is expected is becoming a reputational and regulatory risk.
What a Good AI Use Policy Covers
A well-structured AI use policy addresses five core areas:
- Permitted tools. Which AI tools are approved for use, and for what purposes? Which tools are not permitted, and why?
- Data handling. What types of information can be input into AI tools? What is absolutely prohibited? (A sketch of how these first two areas might be made checkable follows this list.)
- Verification requirements. What checking is required before AI-generated content is published, provided to clients, or used in decision-making?
- Disclosure obligations. When and how must AI involvement be disclosed internally and externally?
- Accountability. Who is responsible for the output of AI-assisted work, and what are the consequences of policy breach?
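The permitted-tools and data-handling areas are the easiest to turn from aspiration into something enforceable. As a rough illustration only, here is a minimal Python sketch of how an organisation might encode a tool allowlist and prohibited data categories; every tool name, category label, and helper function in it is a hypothetical placeholder, not part of any real policy, product, or the stayahuman materials.

```python
from dataclasses import dataclass, field

# Hypothetical examples only -- an organisation would replace every
# tool name and category label below with its own.
APPROVED_TOOLS = {
    "internal-copilot": {"drafting", "summarising"},  # approved purposes
    "vendor-chat-enterprise": {"drafting"},
}

PROHIBITED_DATA = {
    "client-identifiable",   # names, contact details, case files
    "financial-records",
    "health-information",
}

@dataclass
class ProposedUse:
    tool: str
    purpose: str
    data_categories: set = field(default_factory=set)

def check_use(use: ProposedUse) -> list[str]:
    """Return policy problems with a proposed AI use (empty list = allowed)."""
    problems = []
    if use.tool not in APPROVED_TOOLS:
        problems.append(f"{use.tool!r} is not an approved tool")
    elif use.purpose not in APPROVED_TOOLS[use.tool]:
        problems.append(f"{use.purpose!r} is not an approved purpose for {use.tool!r}")
    blocked = use.data_categories & PROHIBITED_DATA
    if blocked:
        problems.append(f"prohibited data categories: {sorted(blocked)}")
    return problems

# Example: pasting client-identifiable data into an unapproved tool.
print(check_use(ProposedUse("public-chatbot", "drafting", {"client-identifiable"})))
# -> ["'public-chatbot' is not an approved tool",
#     "prohibited data categories: ['client-identifiable']"]
```

The written policy remains the source of truth; a small check like this is simply one way to back an approval workflow or an intranet form so the rules are applied consistently.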
The 5 Questions Every AI Policy Must Answer
If your policy cannot answer these five questions clearly, it is not yet a functional document (a minimal self-audit sketch follows the list):
- Can an employee input client data into ChatGPT? (If the answer is ambiguous, that is a problem.)
- If an AI tool produces incorrect output that gets published, who is accountable?
- When does the organisation need to tell clients or the public that AI was involved?
- What happens if an employee breaches the policy?
- How often is the policy reviewed, and who owns it?
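The checklist can also be run as a simple self-audit. The sketch below is hypothetical, assuming a team records an explicit answer to each question and treats any blank as a gap; the question wording is taken from the list above, but the helper itself is illustrative, not a standard.

```python
# Hypothetical self-audit: an unanswered question flags the draft
# policy as not yet functional.
FIVE_QUESTIONS = [
    "Can an employee input client data into ChatGPT?",
    "If an AI tool produces incorrect output that gets published, who is accountable?",
    "When does the organisation need to tell clients or the public that AI was involved?",
    "What happens if an employee breaches the policy?",
    "How often is the policy reviewed, and who owns it?",
]

def audit(answers: dict[str, str]) -> list[str]:
    """Return the questions the draft policy leaves unanswered."""
    return [q for q in FIVE_QUESTIONS if not answers.get(q, "").strip()]

# A draft that answers only the first question still has four gaps.
draft = {FIVE_QUESTIONS[0]: "No, unless the tool is approved and the data is de-identified."}
print(f"Unanswered: {len(audit(draft))} of {len(FIVE_QUESTIONS)}")  # -> Unanswered: 4 of 5
```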
The stayahuman Policy Sprint
The stayahuman corporate seminar includes a live Policy Sprint component where your team drafts your AI Use Policy during the session. Rather than having a consultant produce a document in isolation, this process creates genuine buy-in: the people who will live with the policy have shaped it themselves, understand the reasoning behind it, and can explain it to colleagues.
The Policy Sprint is structured around the five core areas above, with industry-specific prompts and a template that functions as a starting point. Most teams produce a working first draft within 30–45 minutes, ready for legal review and adaptation.
Common Mistakes in AI Policies
The most common failure mode is a policy that prohibits too broadly (“no AI use without prior approval”) without a practical approval pathway, which results in employees ignoring it entirely. The second most common failure is a policy too vague to change behaviour (“use AI responsibly”). The third is a policy that is written once and never updated, in a technology space where the tools and risks change monthly.
A good AI policy is a living document, reviewed at least twice a year, with a named owner and a clear process for flagging new tools or situations it doesn’t cover.