A student lands on amber for hashing algorithms. The platform flags it. The dashboard updates. A recommended resource appears.
Nobody checks whether the student's confusion is about collision handling or the difference between open and closed addressing. Nobody looks at the actual work. The system just knows "hashing = amber" and serves the same remediation to every student who lands there.
That's personalisation. And it's not the same as bespoke support.
The Problem with "Personalised Learning"
Personalisation has become a marketing term. Something scaled. Automated. A dashboard promising differentiation at the click of a button.
EdTech platforms use the word constantly, and I understand why. It sounds good. It sounds like care. But in practice, personalisation usually means: the system noticed a pattern and served a pre-built response. There's no human judgement involved. No understanding of the student behind the data point.
Bespoke is different. Bespoke implies intention. A human decision. A teacher looking at one student's work, spotting a specific gap, and building something for that gap. A targeted revision scaffold. A carefully structured set of questions that addresses the exact weakness that surfaced in their last assessment.
That distinction might sound like splitting hairs. I don't think it is. The difference between "the platform recommended this" and "your teacher made this for you" is the difference between automation and teaching.
What Bespoke AI Support Actually Looks Like
Here's what this looks like in practice during assessment season. I've been using course specifications to generate assessment materials and practice questions that properly mirror the exams students are about to sit. The command words matter. The structure matters. The mark scheme logic matters. If it doesn't feel authentic, it's noise.
Alongside that, I run RAG ratings on specific skills and then build targeted resources based on what surfaces. Not a generic "revise this topic" pack. Something designed around the specific gaps that appeared in a specific student's work.
The feedback loop that drives it
The workflow follows a tight cycle:
- Assess. Run a properly structured assessment that mirrors the real exam format
- Analyse. Use RAG ratings to identify where each student actually sits, not where they think they sit
- Create. Build bespoke resources that target the specific gaps the data revealed
- Teach. Deliver those resources, then loop back to step one
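The analyse-and-create steps of that cycle can be sketched in code. This is a minimal illustration, not any platform's real API: the names (`analyse`, `next_intervention`) and the RAG ordering are my own assumptions about how you might structure the data.

```python
# Minimal sketch of the "analyse" and "create" steps of the cycle.
# All names and structures here are illustrative, not a real platform API.

RAG = {"red": 0, "amber": 1, "green": 2}

def analyse(ratings: dict[str, str]) -> list[str]:
    """Return the skills a student has not secured, reds before ambers."""
    gaps = [skill for skill, rag in ratings.items() if rag != "green"]
    return sorted(gaps, key=lambda skill: RAG[ratings[skill]])

def next_intervention(ratings: dict[str, str]) -> str:
    """Turn this cycle's RAG data into a brief for the next bespoke resource."""
    gaps = analyse(ratings)
    if not gaps:
        return "extend: stretch and challenge"
    return "target: " + ", ".join(gaps)

# One student's RAG ratings after a mock paper
ratings = {"hashing": "amber", "floating point": "red", "sorting": "green"}
print(next_intervention(ratings))  # -> target: floating point, hashing
```

The point of the sketch is the shape of the loop: the output of one assessment is the input brief for the next resource, which is exactly where the teacher's judgement plugs in.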
At this time of year, that loop feels critical. Students don't need more content thrown at them. They need sharper, more intentional support.
Turning feedback into the next intervention
I'm still using speech-to-text heavily for feedback. I'll review a student's code, talk through it out loud, run that transcript through our enterprise Copilot (with no identifying information), and refine it into clear, actionable feedback.
But I also keep that cleaned-up feedback in Copilot and use it to generate the next set of bespoke resources.
So feedback stops being the end of the process. It becomes the starting point for the next intervention. Assessment data shapes the resource. The resource addresses the gap. The next assessment checks whether the gap has closed.
Bespoke support means the teacher decides what's relevant, what fits the student, and what aligns with the curriculum. AI suggests. The teacher selects and makes the final call. Always.
Why This Matters More Than Platform Features
The difference between bespoke and personalised isn't academic. It shows up in the quality of what students receive.
A personalised platform might identify that a student is struggling with binary floating-point conversions and serve them a video tutorial. That's fine. It's better than nothing. But it treats the symptom, not the cause.
A bespoke approach starts with the teacher asking: where exactly does this student's understanding break down? Is it the normalisation step? The mantissa/exponent split? The conversion from denary? The answer changes what you build. And AI makes building it fast enough to be practical.
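To make that concrete, here is the kind of worked conversion a bespoke resource can step a student through: denary to binary, then the normalisation step, then the mantissa/exponent split. The code is a teaching sketch for positive values only, using the "0.1..." normalised-mantissa form common in A-level specifications; the function names are mine.

```python
# A worked denary -> normalised binary floating-point conversion,
# step by step, as a bespoke resource might walk a student through it.
# Positive values only; mantissa in the normalised "0.1..." form used
# in many A-level specifications. Names are illustrative.

def to_fixed_binary(value: float, frac_bits: int = 8) -> str:
    """Denary to fixed-point binary, e.g. 5.25 -> '101.01'."""
    whole, frac = divmod(value, 1)
    int_part = bin(int(whole))[2:]
    bits = ""
    for _ in range(frac_bits):        # repeated doubling of the fraction
        frac *= 2
        bit, frac = divmod(frac, 1)
        bits += str(int(bit))
    return int_part + "." + (bits.rstrip("0") or "0")

def normalise(value: float) -> tuple[str, int]:
    """Split into a normalised mantissa (starting 0.1) and an exponent."""
    int_part, frac_part = to_fixed_binary(value).split(".")
    if int_part != "0":
        # Point moves left past every integer digit: 101.01 -> 0.10101 x 2^3
        exponent = len(int_part)
        bits = int_part + frac_part
    else:
        # Point moves right past the leading zeros: 0.00101 -> 0.101 x 2^-2
        stripped = frac_part.lstrip("0")
        exponent = -(len(frac_part) - len(stripped))
        bits = stripped
    return "0." + bits.rstrip("0"), exponent

print(normalise(5.25))     # -> ('0.10101', 3)
print(normalise(0.15625))  # -> ('0.101', -2)
```

Each function isolates one of the failure points named above: `to_fixed_binary` is the conversion from denary, `normalise` is the normalisation step and the mantissa/exponent split, so you can give a student practice on exactly the step where their working breaks down.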
What bespoke resources look like in practice
- If hashing keeps flashing red across half the class, that's a whole-class session. Not a platform recommendation.
- If floating point is amber for most but red for only three, that's targeted practice plus a small-group reteach.
- If one student is red on a niche topic and no one else is, that becomes a bespoke resource, not a 50-minute detour for everyone.
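The triage in those bullets can also be sketched as code. The thresholds below (half the class, three reds) are illustrative choices lifted from the examples above, not fixed rules, and the names are my own.

```python
# Sketch of the triage described above: aggregate RAG ratings across
# a class and decide the intervention per skill. Thresholds and names
# are illustrative, not fixed rules.
from collections import Counter

def triage(class_ratings: dict[str, dict[str, str]]) -> dict[str, str]:
    """class_ratings maps student -> {skill: 'red' | 'amber' | 'green'}."""
    n = len(class_ratings)
    skills = {skill for ratings in class_ratings.values() for skill in ratings}
    plan = {}
    for skill in skills:
        counts = Counter(r.get(skill, "green") for r in class_ratings.values())
        if counts["red"] >= n / 2:
            plan[skill] = "whole-class session"
        elif counts["red"] >= 3:
            plan[skill] = "small-group reteach + targeted practice"
        elif counts["red"] >= 1:
            plan[skill] = "bespoke individual resource"
        elif counts["amber"] >= 1:
            plan[skill] = "targeted practice"
        else:
            plan[skill] = "no action"
    return plan

class_ratings = {
    "s1": {"hashing": "red", "floating point": "amber"},
    "s2": {"hashing": "red", "floating point": "green"},
    "s3": {"hashing": "red", "floating point": "green"},
    "s4": {"hashing": "green", "floating point": "green", "regex": "red"},
}
print(triage(class_ratings)["hashing"])  # -> whole-class session
```

The code makes the decision visible, which is the whole point: the teacher sets the thresholds and owns the plan, rather than accepting whatever a platform's recommendation engine produces.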
This is where AI becomes genuinely powerful. Not as the thing making the decisions, but as the thing that makes it possible for a teacher to act on decisions quickly enough for it to matter.
Coaching Teachers Into This Workflow
A significant part of my work involves helping other teachers adopt this approach. The focus is practical: how to generate exam-style questions that genuinely align with specifications, how to collect meaningful assessment data, and how to use that data to shape the next set of resources.
The biggest shift isn't technical. It's conceptual. Many teachers have been trained to think of differentiation as three worksheets at three levels. AI makes it possible to think of differentiation as thirty different responses to thirty different gaps, generated in the time it used to take to make those three worksheets.
But this only works if the teacher stays in control of the design. The moment you hand the design decisions to the platform, you've moved from bespoke back to personalised. And the quality drops.
A Governance Note Worth Remembering
If you're using tools like AI Studio to create bespoke materials, a quick governance reminder: remove AI-generated interactive elements before anything reaches students. Gemini API, for example, shouldn't be going anywhere near under-18s directly. The guardrails exist for good reason. Build with AI. Clean before you deploy.
The boundary between staff-facing AI tool and student-facing AI tool is something worth handling carefully. A bit of caution here doesn't mean resistance. It means you're thinking it through before you roll anything out.
The Principle Behind the Practice
Personalisation at scale sounds appealing. But scaled personalisation often means nobody is actually looking at the student. The platform is. The algorithm is. But nobody with professional judgement, subject knowledge, and a relationship with that young person is making the call about what they need next.
Bespoke support, powered by AI, keeps the teacher in that position. AI handles the production. The teacher handles the thinking. That's the right division of labour.
If your school is investing in AI for differentiation, ask this question: does this tool help teachers make better decisions, or does it make the decisions for them? The answer determines whether you're building capability or buying convenience.
Matthew Wemyss is an AIGP-certified AI in Education consultant and practising school leader. Book a discovery call to discuss AI-enhanced assessment strategies for your school.

