The question Australian teachers are asking in 2026 is no longer “Are my students using AI?” They are. The question is: “What do I do about it?”
Most professional development on this topic defaults to one of two unhelpful positions: either “ban it and detect it” or “embrace it and redesign everything.” The reality for most teachers is messier than either. You have existing curriculum, existing assessment tasks, existing relationships with students, and a tool that has made the academic integrity conversation more complicated than it’s ever been.
This guide is for teachers who need practical, current guidance — not theoretical frameworks.
Can You Actually Detect AI Writing?
Yes and no. You can identify signals that suggest AI involvement. You cannot prove it definitively using a tool alone, and you should not try to.
AI detection tools like Turnitin’s AI detection, GPTZero, and Copyleaks work by calculating the statistical probability that a piece of text was generated by a language model. They are genuinely useful as a signal. They are not evidence. The distinction matters enormously when you’re having a conversation with a student or escalating to leadership.
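The statistical principle these tools rely on can be sketched with a toy model. The snippet below is illustrative only, not how Turnitin, GPTZero, or Copyleaks actually work: it scores a passage by how predictable its word sequences are under a simple bigram model built from a reference corpus. Lower perplexity means more statistically predictable text, which is the rough intuition commercial detectors build on at far greater scale.

```python
import math
from collections import Counter

def bigram_perplexity(text: str, reference: str) -> float:
    """Toy illustration of statistical text scoring.

    Builds a word-bigram model from `reference`, then measures how
    predictable `text` is under it. Lower perplexity = more
    predictable. Real detectors use large language models, but the
    underlying idea is the same.
    """
    def bigrams(s: str):
        words = s.lower().split()
        return list(zip(words, words[1:]))

    pair_counts = Counter(bigrams(reference))
    word_counts = Counter(reference.lower().split())
    total = sum(word_counts.values())

    pairs = bigrams(text)
    log_prob = 0.0
    for a, b in pairs:
        # Add-one smoothing so unseen bigrams don't zero out the score
        p = (pair_counts[(a, b)] + 1) / (word_counts[a] + total)
        log_prob += math.log(p)
    return math.exp(-log_prob / max(len(pairs), 1))
```

Text that closely echoes the reference corpus scores a low perplexity; text full of sequences the model has never seen scores high. The obvious limitation, which carries over to real detectors, is that the score says nothing about *who* produced the predictable text, only that it is predictable.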
The more valuable skill is developing your own critical instincts for what AI-generated text looks and feels like — and then using that alongside tool outputs.
The Linguistic Markers of AI-Generated Text
After reading a significant volume of AI-generated academic writing, experienced teachers and researchers have identified consistent patterns:
- Over-perfect structure. AI writing tends to follow a rigid format: brief intro, three clearly separated points, tidy conclusion. Human student writing is usually messier, more idiosyncratic, and often structurally inconsistent in ways that reflect thinking-in-process.
- Confident vagueness. AI can write at length about a topic while saying very little that is specific. It tends to hedge, generalise, and produce plausible-sounding claims without grounding them in particular examples.
- Absence of the personal. AI cannot access a student’s own experience, their classroom discussions, or the specific examples introduced in your teaching. Student writing that never references anything personally encountered is a flag.
- Smooth but hollow transitions. Phrases like “Furthermore,” “It is important to note that,” and “In conclusion, we can see that” are statistical favourites of language models. Human student writing tends to have clunkier but more authentic transitions.
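For teachers comfortable with a little scripting, two of the markers above can be turned into a rough screening aid. The phrase list and the uniformity measure below are illustrative assumptions, not a validated instrument; treat the output exactly like a detector score, as a prompt for a closer read, never as evidence.

```python
import re
import statistics

# Hypothetical starter list based on the stock transitions above;
# any real checklist would need tuning against your own students' work.
STOCK_PHRASES = [
    "furthermore",
    "moreover",
    "it is important to note that",
    "in conclusion, we can see that",
]

def flag_markers(text: str) -> dict:
    """Rough screen for stock transitions and over-uniform structure.

    Returns which stock phrases appear and the spread of sentence
    lengths (a low spread suggests suspiciously uniform sentences).
    """
    lower = text.lower()
    phrase_hits = [p for p in STOCK_PHRASES if p in lower]
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    spread = statistics.pstdev(lengths) if len(lengths) > 1 else 0.0
    return {
        "stock_phrases": phrase_hits,
        "sentence_count": len(sentences),
        "length_spread": round(spread, 1),  # low = very uniform
    }
```

Note what this cannot do: it will flag an ESL student who has been explicitly taught these connective phrases just as readily as an AI draft, which is precisely why the next section matters.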
AI Detection Tools: What Teachers Need to Know
Use them as one input, not as a verdict. Three critical limitations to understand:
- False positives for ESL and neurodiverse writers. Students who write in highly structured, formal, or repetitive ways — including many English as a Second Language students and some students with autism spectrum disorder — can trigger high AI probability scores for entirely human-written work. Using a tool score as standalone evidence in a misconduct case involving these students is both educationally and legally risky.
- The paraphrase problem. A student who uses AI to generate a draft and then manually rewrites it will often produce text that detectors miss. Detection tools score the final text, not the process.
- False confidence in high scores. Even a 95% probability score from a detection tool does not constitute proof. Courts and tribunal processes have consistently rejected AI detection tool output as sole evidence of misconduct.
The AI Involvement Spectrum
A more useful framework than “AI or not AI” is thinking about student work as existing on a spectrum:
None → AI-assisted → AI-generated → AI-authored
A student who used AI to brainstorm ideas, then wrote their own essay, sits in a different place on that spectrum than a student who copy-pasted an AI output. Your response should reflect where on that spectrum the work falls, and whether the student understood what was permitted.
Assessment Design That Renders the Question Moot
The most sustainable response to AI in schools is not better detection — it’s better assessment design. Tasks that require local knowledge, personal experience, oral defence, process documentation, or very specific constraints are significantly harder to complete credibly with AI alone.
The stayahuman Educator Certification program covers both detection and assessment redesign in depth, with templates across 8 Key Learning Areas.
Having the Conversation With Your Student
Start with curiosity, not accusation. “Walk me through how you approached this task” is more productive than “Did you use ChatGPT?” If a student can’t explain their own argument or define a word they used, that’s meaningful. If they can, the conversation can become an educational moment rather than a misconduct proceeding.