The question is no longer whether ChatGPT is in your classroom. It is. The only question is whether you have a framework for responding to that reality — or whether you’re improvising one student by student, incident by incident.
This guide is for Australian educators who want current, practical guidance on ChatGPT and AI tools in 2026 — not the theoretical frameworks that arrived before the tools did, and not the panic-driven responses that treat every AI interaction as misconduct.
The Reality Check: AI Is Already in Your Classroom
A 2025 survey of Australian secondary school students found that over 60% had used generative AI tools for school tasks; among Year 11 and 12 students, the figure was higher still. The tools are free, powerful, and accessible on personal devices outside school hours, beyond the reach of any school policy.
Any approach that rests on the assumption that students can be kept from accessing these tools is not a strategy. It is a wish. The effective approaches are ones that acknowledge the tools exist and build student capability to use them responsibly.
What Australian Policy Actually Says in 2026
The Australian Framework for Generative AI in Schools sets broad principles: transparency, critical evaluation, and academic integrity. It does not prescribe which specific uses are permitted, because those decisions are appropriately left to states, systems, and individual schools.
In NSW, Department of Education guidance positions AI as a tool that can support learning when used transparently and with teacher guidance. NESA’s position on academic integrity focuses on genuine student performance: assessments should demonstrate the student’s own understanding, regardless of what tools were used in preparation. The key question is whether the student actually knows what they submitted.
Victoria, Queensland, and South Australia have similar principles-based approaches. None of the major systems have blanket prohibitions, though individual schools do.
The Academic Integrity Conversation You Need to Have
Most academic integrity issues with AI arise not from deliberate cheating but from students who didn’t know what the rules were, or who assumed a given use was acceptable without checking. The first conversation every class needs is an explicit one:
- What AI use is permitted in this class and for which tasks?
- When does AI use become academic misconduct?
- What does “your own work” mean when AI tools are available?
- What happens if you submit work that isn’t genuinely yours?
Having this conversation explicitly and early — and revisiting it at the start of each assessment task — dramatically reduces the incidence of misunderstanding-based misconduct.
Designing for AI: Assessment in 2026
Assessment design is where most teachers have the most leverage. Tasks that require local knowledge, personal experience, oral defence, process documentation, or very specific constraints are significantly harder to complete credibly with AI.
Equally valuable are AI-integrated assessments: tasks where AI use is explicitly permitted and the assessment evaluates the student’s ability to critically engage with AI outputs. “Use AI to draft the counter-argument to your position, then respond to its three strongest points” produces learning that AI cannot shortcut.
Building a Classroom AI Policy
A classroom AI policy doesn’t need to be a lengthy document. It needs to answer four questions clearly: What tools are permitted? For which tasks? With what disclosure requirements? And what happens if the policy is breached?
The stayahuman Educator Certification program covers classroom AI policy construction in depth, including templates for NSW, VIC, and QLD curriculum contexts and worked examples across multiple KLAs.
The Bigger Picture
ChatGPT is the most visible instance of a much larger shift. The tools will continue to improve. The tasks they can perform will expand. And the students currently in your classroom will spend their professional lives working alongside AI systems significantly more capable than anything available today.
The most important thing schools can do is not to police AI use but to develop students who can think critically, evaluate sources, maintain genuine understanding, and use AI as a tool that extends their capability rather than replacing it. That is the stayahuman argument. And it is, increasingly, the educational consensus.