Ask any maths teacher about AI and you will get the same response. "It can't even do the calculations properly, so what's the point?" It is a fair objection. And it is mostly right. But it misses a more useful question: what can AI actually do well for maths teachers, once you stop asking it to do the maths?
I sat down with Ben Ford on Teachers Talk Radio recently to unpack this. Ben has twenty-one years of experience teaching maths, has recently stepped away from his head of department role to consult with schools on AI, and had some genuinely useful things to say about where these tools fit and where they fall apart.
The 4.5% problem
At Ben's school, they ran an anonymous staff survey. Over 50% of teachers had tried tools like ChatGPT. But only 4.5% were actually using AI for lesson planning. That is a massive drop-off, and it matches what research is showing everywhere. People try it, don't get what they want, and put it back on the shelf.
Ben thinks it comes down to how differently these tools work compared to Google. With a search engine, you type a question, get directed somewhere, find your answer, done. With an LLM, you have to have a conversation. You have to tease out what you want. Most people haven't been shown how to do that. They ask one question, get a mediocre answer, and conclude the tool isn't for them.
Maths teachers are especially susceptible to this because their first instinct is to ask for something calculable, and the tool gets it wrong.
Why language models struggle with maths
Ben shared a specific example that nails this. He was planning a Pythagoras lesson and asked ChatGPT for three example triangles with diagrams. At first glance, they looked perfect. Then he checked the numbers. They didn't work. The model had slapped three random numbers onto triangles without verifying that a-squared plus b-squared actually equalled c-squared.
This is the fundamental issue. These are language models. They predict the next most likely token in a sequence. They are brilliant at describing how to teach Pythagoras. They can talk pedagogy all day. But ask them to do the actual calculation? That is where it falls over. They do not process numbers the way we do.
If your first experience with AI is asking it to generate worked examples and the numbers are wrong, you will rightly lose confidence. But dismissing the entire technology because of that one failure is like refusing to use a projector because it cannot mark books.
What actually works: AI as a thinking partner
Ben's framing of AI as a "thinking partner" is better than most of what I hear in workshops. He is not asking it to create entire lessons. He is bouncing ideas off it. Here is what that looks like in practice:
- Anticipating misconceptions before a lesson. Tell the AI what topic you are covering and ask it to predict where students will get stuck. It is surprisingly good at this because it draws on patterns from huge volumes of educational text. You still need to filter through your own experience, but it gives you a starting point you might not have considered.
- Getting alternative explanations. Every maths teacher has topics they explain brilliantly and topics where their go-to explanation just doesn't land with certain students. Ask the AI for three different ways to explain the same concept. You probably won't use its suggestions directly, but they often trigger something better.
- Clarifying lesson goals. Run your lesson plan past the AI and ask it to push back. "What would you do differently? Is the objective clear? Am I trying to cover too much in one lesson?" This is the kind of critical-friend conversation you might have with a colleague, except it is available at 10pm on a Sunday when you are actually doing the planning.
- Sparking delivery ideas. Not copying what the AI suggests, but using its output as a springboard. A starter activity you hadn't considered. A different sequencing of examples. A connection to a real-world context that makes the abstract more concrete.
The tools that are getting it right
Ben flagged two tools worth knowing about.
Oak National Academy's AILA is particularly useful because it draws on pre-created, verified resources. Rather than having an AI generate images of graphs that might be slightly wrong, AILA pulls from a pre-made bank of checked materials. That sidesteps the hallucination problem entirely for visual content.
Mindjoy lets you build chatbots that guide students through problems without giving away answers. Ben tested this with his daughter during GCSE revision and found it genuinely patient and effective. The bot would walk her through the reasoning steps rather than just producing the solution, which is exactly what good maths tutoring looks like.
How to start this week
Ben's advice to any maths teacher, and I agree with it:
- Don't go all in. Don't try to create an entire AI-led lesson. Start by getting it to critique one thing you are already planning. Tell it: "This is my lesson idea. What would you do differently? What misconceptions should I expect?" That is low risk and it uses the AI for what it is actually good at.
- Set up the conversation with a role. Open with something like "You are an expert maths teacher with twenty years of experience." Then have a back-and-forth. If it goes in circles, close the chat and start fresh. You would do the same with a real colleague who wasn't being helpful.
- Check the numbers. Every time. If it generates maths content, verify the calculations yourself before it goes in front of students. The pedagogy might be spot on. The actual maths might not be.
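For the "check the numbers" step, Ben's Pythagoras story is a good test case. If you want a quick way to verify AI-generated triangles before they go on a slide, a few lines of Python will do it. This is just an illustrative sketch (the function name and the example side lengths are mine, not from any tool mentioned here):

```python
import math

def is_right_triangle(a: float, b: float, c: float) -> bool:
    """Check whether three side lengths satisfy a^2 + b^2 = c^2."""
    a, b, c = sorted((a, b, c))  # treat the longest side as the hypotenuse
    return math.isclose(a**2 + b**2, c**2)

# Side lengths an AI might slap onto "Pythagoras" diagrams:
for sides in [(3, 4, 5), (5, 12, 13), (4, 7, 9)]:
    verdict = "valid" if is_right_triangle(*sides) else "does NOT work"
    print(sides, verdict)
```

The last triple fails (4² + 7² = 65, but 9² = 81), which is exactly the kind of plausible-looking error Ben found in his generated examples.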
The bigger picture for subject-specific AI training
This conversation reinforced something I keep seeing across every subject: generic AI training doesn't work. A one-size-fits-all INSET session that shows English teachers how to generate essay feedback will leave the maths department cold, because the challenges are completely different.
Maths teachers need to understand why LLMs fail at calculation, so they can stop expecting the tool to do something it cannot and start using it for what it does brilliantly. That requires subject-specific training, not another afternoon of "here's how to write a prompt."
The question is not "can AI do maths?" The question is "can AI help me teach maths better?" The answer to the first is unreliable. The answer to the second is yes, if you know where to point it.
Matthew Wemyss is an AIGP-certified AI in Education consultant and practising school leader. Book a discovery call to discuss subject-specific AI training for your department.