Authenticity is the quality of being genuine. Of the thing you see matching the thing that actually exists. For most of human history, that connection was reliable enough that we didn't need to think about it. A photograph showed what the camera saw. A voice on the phone belonged to the person you dialled. A student's essay reflected what they knew and how they thought.
That connection is breaking. Not slowly. Rapidly. And schools are one of the places where the consequences will be felt earliest and most deeply.
What deepfakes actually are, and why schools should care
A deepfake is synthetic media, typically video or audio, generated by AI to imitate a real person. The technology has moved from specialist research labs to free mobile apps in under five years. A student with a smartphone can now produce a convincing video of a teacher saying something they never said, or clone a parent's voice from a thirty-second clip.
This is not a theoretical risk. In 2024, a finance worker in Hong Kong transferred $25 million after a video call with what appeared to be his company's CFO and several colleagues. Every person on the call was a deepfake. In schools, the harms are different but no less real. Deepfake pornography targeting students has already been reported in multiple countries. AI-generated voice clones have been used to impersonate parents in phone calls to schools.
The technology is improving faster than our ability to detect it. Detection tools exist, but they lag behind generation tools and produce enough false positives to undermine trust in the results. Schools cannot rely on technology alone to solve this.
The authenticity gap in student work
Deepfakes get the headlines, but the more widespread challenge is subtler. AI-generated text has created what I think of as the authenticity gap: the growing distance between what a student's work looks like and what a student actually knows.
I see this regularly. An essay arrives that is fluent, well-structured, and competently argued. The student who submitted it cannot explain the argument in paragraph three. Not because they're nervous. Because they don't recognise it as theirs. The surface quality of the work has decoupled from the understanding underneath it.
This isn't cheating in the traditional sense. Many students are using AI as a kind of co-writer without fully realising how much of the thinking they've handed over. The output looks authentic. It reads like a strong student's work. But the understanding behind it may be hollow.
When the surface looks perfect, we stop looking underneath. That's the danger.
The old signals of quality (clear structure, confident tone, accurate referencing) used to be reliable indicators that a student had genuinely engaged with the material. AI has made those signals unreliable. A perfectly structured essay is no longer proof of anything except access to a language model.
Algorithmic curation and the performance of identity
There's a third dimension to this that gets less attention in education but matters enormously for student wellbeing. Social media algorithms reward performance. They surface content that generates engagement, not content that's true. Students have been curating their online identities for years, presenting a version of themselves that gets likes rather than one that reflects who they actually are.
AI amplifies this. Filters that reshape faces in real time. Tools that rewrite captions to sound more polished. Image generators that can produce entirely fictional versions of someone's life. The boundary between "this is me" and "this is the version of me that performs well online" is becoming almost impossible to find.
For teenagers who are still forming their sense of identity, this is a genuine developmental challenge. Research from the American Psychological Association consistently links social comparison and curated self-presentation to anxiety, depression, and reduced self-esteem in adolescents. AI tools that make the curation easier and the fakery more convincing are accelerating a pattern that was already harmful.
Why detection alone won't work
The instinct in schools is to reach for detection. AI writing detectors. Deepfake identification tools. Digital forensics. The problem is that this approach is structurally flawed.
Detection tools are always reactive. They identify what has already been generated, and generation improves continuously while detection lags behind. Turnitin, the most widely used AI detector in education, has acknowledged false positives in real-world use, and independent research has found AI-text detectors particularly unreliable for non-native English speakers. Penalising students on the basis of a probabilistic guess is neither fair nor sustainable.
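To see why even a seemingly small error rate becomes a problem at scale, some back-of-envelope arithmetic helps. The sketch below is illustrative only; every number in it is an assumption for the sake of the example, not a figure from any real detector:

```python
# Back-of-envelope base-rate arithmetic. All numbers are illustrative
# assumptions, not any vendor's published figures.

essays = 1000               # essays submitted in a term
honest_share = 0.95         # assumed: 95% are genuinely student-written
false_positive_rate = 0.02  # assumed: detector wrongly flags 2% of honest work
true_positive_rate = 0.80   # assumed: detector catches 80% of AI-written work

honest = essays * honest_share
ai_written = essays - honest

false_flags = honest * false_positive_rate    # innocent students flagged
true_flags = ai_written * true_positive_rate  # AI use correctly flagged
flagged = false_flags + true_flags

print(f"Essays flagged: {flagged:.0f}")
print(f"Innocent among them: {false_flags:.0f} "
      f"({false_flags / flagged:.0%} of all flags)")
```

Because honest work vastly outnumbers AI-generated work in most classrooms, even a 2% false positive rate means roughly a third of the students this hypothetical detector flags have done nothing wrong.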
More fundamentally, detection creates an adversarial relationship between students and teachers. It frames AI use as something to be caught rather than something to be understood. That adversarial framing makes students less likely to be honest about how they're using these tools, which is the opposite of what schools need.
If your AI strategy is built on catching students, you've already lost the authenticity battle.
Building a culture of authenticity instead
What works is building a culture where authenticity is valued, practised, and visible. That takes more effort than installing a detector, but it produces something durable.
1. Teach students what deepfakes are and how they work
Most students have heard the word "deepfake" but few understand the mechanics. Running a short session where students see how face-swap technology works, how voice cloning operates, and how text generation produces confident nonsense builds a kind of critical immunity. Once you've seen how the trick is done, you're much harder to fool.
This doesn't need to be a standalone unit. Ten minutes in a PSHE lesson or tutor time, using freely available examples, is enough to shift the default from "I assume this is real" to "I check whether this is real."
2. Make the process visible, not just the product
The authenticity gap in student work exists because we assess products. We see the essay, not the thinking that produced it. Shifting assessment design so that process is visible closes that gap.
Practical approaches that work:
- Drafting logs. Students submit their planning notes, rough drafts, and revision history alongside the final piece. The process becomes part of the assessment.
- Verbal defence. After submitting written work, students explain their argument, their choices, and their reasoning in a short conversation. This is not an interrogation. It's a normal part of the workflow.
- The seam question. After any AI-assisted task, students write one sentence: "The AI did ___ and I did ___." Students who can draw that line clearly are developing genuine self-awareness. Students who cannot are telling you something important.
3. Address digital identity explicitly
Schools spend significant time on online safety. Very few spend time on online identity. These are different things. Safety is about avoiding harm from others. Identity is about understanding who you are in a world that constantly pressures you to perform.
A useful starting point: ask students to compare their real day with what they would post about their day. The gap between those two things is worth discussing. Not to shame them for curating, because everyone does it, but to make the curation conscious rather than automatic.
4. Model authenticity as adults
Students notice hypocrisy instantly. If teachers use AI-generated content without acknowledging it, if school communications are polished by AI without transparency, students learn that authenticity is something we demand from them but don't practise ourselves.
The most powerful thing a teacher can do is say, "I used AI to help me draft this, and here's what I changed and why." That single sentence teaches more about authentic AI use than any policy document.
What this means for school policy
Schools need AI policies that go beyond acceptable use. They need policies that address authenticity directly.
- Define what authentic work means in your context. Not "work produced without AI" but "work where the student can explain and defend their thinking."
- Include deepfake and synthetic media awareness in your digital citizenship programme. Students need to know these tools exist, how they work, and what the legal and ethical boundaries are.
- Review assessment design through the lens of authenticity. If an assessment can be completed entirely by AI without the student learning anything, the problem is the assessment, not the student.
- Create space for honest conversation about AI use. Students who feel safe admitting how they use AI will give you far more useful information than students who are hiding it.
Two things you can do this week
1. Run a deepfake demonstration. Show students a convincing deepfake video (several well-known examples are freely available), then walk through how it was made. Ask them: how would you verify whether this was real? That single question builds a habit that transfers across every piece of media they encounter.
2. Add the seam question to one assignment. Pick one piece of AI-assisted work this week and ask students to write, at the bottom: "The AI contributed ___ and I contributed ___." Don't grade it. Just read the responses. What you learn will tell you more about how your students are using AI than any detector ever could.
Authenticity is not about rejecting AI. It's about knowing where the machine ends and the human begins, and making sure that boundary stays visible. That's a skill your students will need long after they leave your classroom.
Matthew Wemyss is an AIGP-certified AI in Education consultant and practising school leader. Book a discovery call to discuss digital literacy training for your school.