AI Risk Management for Schools: How to Build a Practical Risk Matrix

Matthew Wemyss · 7 min read

An AI risk matrix is a structured tool that helps you evaluate the severity and likelihood of potential harms from any AI system before you deploy it. In schools, that means moving past gut feeling and giving yourself a clear, repeatable framework to weigh up what could go wrong, how serious it would be, and how likely it is to happen.

Most schools I work with are already using AI in some form. Teachers are speaking feedback into ChatGPT and getting it tidied into something structured and student-friendly. Leaders are generating policy drafts. Departments are experimenting with lesson planning tools. The question isn't whether AI is being used. It's whether anyone has systematically assessed the risks.

A risk matrix gives you the structure to make better decisions about which risks to accept, which ones to manage, and which ones are simply not worth taking.

Step 1: Define the Use Case

Start with the basics. What is the AI system actually doing? Write down:

  • The problem it's meant to solve (reducing admin, creating lesson resources, supporting decision-making)
  • The stakeholders (teachers, pupils, leaders, parents, regulators)
  • The workflow (what goes in, what comes out, how it's used)
  • The intended outcome (time saved, reduced workload, fresh ideas)

This matters because you are not rating "AI in general." You're rating the exact way you plan to use it. A teacher using ChatGPT to reformat spoken feedback carries a very different risk profile from a school using an AI system to inform setting decisions.
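
If it helps to keep these definitions consistent across tools, here's a minimal sketch of what a use-case record might look like in Python. The field names and example values are illustrative, not a prescribed template:

```python
from dataclasses import dataclass

@dataclass
class AIUseCase:
    """One specific way an AI tool is used in school (fields are illustrative)."""
    tool: str                # e.g. "ChatGPT"
    problem: str             # what it's meant to solve
    stakeholders: list[str]  # who is affected
    workflow: str            # what goes in, what comes out, how it's used
    intended_outcome: str    # what success looks like

feedback_reformatting = AIUseCase(
    tool="ChatGPT",
    problem="Turn spoken teacher feedback into structured, student-friendly comments",
    stakeholders=["teachers", "pupils", "parents"],
    workflow="Teacher dictates feedback; AI reformats it; teacher reviews before sharing",
    intended_outcome="Time saved on written feedback without losing quality",
)
```

Writing the use case down in this level of detail is what makes the scoring in Step 3 meaningful.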

Step 2: Identify the Harms

Once you've defined the use case, list what could go wrong. Take teachers using ChatGPT for lesson planning and admin as an example. The harms might include:

  • Accuracy. AI can sound confident but still get things wrong. A factual error in a lesson resource reaches every student who uses it.
  • Bias and fairness. Outputs sometimes echo stereotypes if you're not careful with prompts. A history resource that defaults to a Western-centric perspective is a bias problem, even if the content is technically accurate.
  • Over-reliance. It's easy to let AI do too much, which can chip away at teacher creativity and professional judgement over time.
  • Privacy. Someone pastes pupil names, SEN details, or safeguarding notes into the tool. With free-tier tools that use your data for training, this is a serious GDPR concern.
  • Quality drift. Lessons could start to feel generic if AI content isn't adapted. Students notice. Colleagues notice.
  • Reputation. Students, parents, and even other teachers might see "AI-written lessons" as unprofessional or lazy if the communication around it is poor.

The goal at this stage is completeness, not precision. Get every plausible harm on the table. You'll score them next.

Step 3: Score Each Harm

This is where the classic 5x5 risk matrix comes in. Impact (severity) runs down the side, likelihood runs across the top. Each combination produces an overall risk level, colour-coded for quick interpretation: green (low), amber (medium), red (critical).

Using a 1 to 5 scale for both impact and likelihood, here's where I landed for the lesson planning example:

  • Accuracy errors. Impact 3, Likelihood 4. Medium risk. Needs systematic teacher review of every AI-generated resource before it reaches students.
  • Biased content. Impact 3, Likelihood 3. Medium risk. Needs structured review processes and staff awareness of how bias surfaces in AI outputs.
  • Over-reliance. Impact 3, Likelihood 3. Medium risk. Needs balance through CPD, reflection, and an expectation that AI augments rather than replaces professional planning.
  • Privacy breaches. Impact 5, Likelihood 4. Critical risk. No pupil data in ChatGPT. Ever. This needs strict policy, clear training, and regular reminders.
  • Quality drift. Impact 3, Likelihood 3. Medium risk. Mitigate through lesson observations and an expectation that AI output is a starting point, not a finished product.
  • Reputation concerns. Impact 4, Likelihood 3. Medium risk. Communicate clearly to parents and the wider community that AI supports teachers rather than replacing them.

You may score some of these differently in your own context. That's the point. The value isn't in getting the "right" numbers. It's in having the structured conversation.
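
If you want the banding to be explicit and repeatable rather than judged by eye, you can encode it as a simple function. This is a minimal sketch: the thresholds are my own assumptions, chosen so the output matches the scores above, and your matrix may band the cells differently:

```python
def risk_band(impact: int, likelihood: int) -> str:
    """Map a 1-5 impact and 1-5 likelihood to a colour band.

    Thresholds (15 and 6) are illustrative assumptions; tune them to your matrix.
    """
    score = impact * likelihood
    if score >= 15:
        return "red (critical)"
    if score >= 6:
        return "amber (medium)"
    return "green (low)"

# The six harms from the lesson planning example, as (impact, likelihood).
harms = {
    "Accuracy errors": (3, 4),
    "Biased content": (3, 3),
    "Over-reliance": (3, 3),
    "Privacy breaches": (5, 4),
    "Quality drift": (3, 3),
    "Reputation concerns": (4, 3),
}

for harm, (impact, likelihood) in harms.items():
    print(f"{harm}: impact {impact}, likelihood {likelihood} -> {risk_band(impact, likelihood)}")
```

Running this reproduces the bands above: privacy breaches come out red, everything else amber. The useful part isn't the code itself; it's that the cut-offs are written down where everyone can see and challenge them.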

Step 4: Distinguish Inherent Risk from Residual Risk

Your first scores are the inherent risks: what things look like before you do anything about them. What matters in practice is the residual risk: what remains after your safeguards are in place.

This distinction is critical. It shows governors, inspectors, and parents that you haven't just identified the risks. You've taken deliberate action to reduce them.

Critical risks (red) are deal-breakers

Privacy breaches fall squarely here. The inherent risk is sky-high, and even with controls the residual risk is too serious to allow any ambiguity. Staff need more than a policy document. They need specific examples of what not to paste into AI tools, practical training on how to anonymise data before it leaves the school network, and regular reminders that common sense alone is not sufficient.

Medium risks (amber) are manageable with structure

Accuracy, bias, over-reliance, quality drift, and reputation all land here. The inherent risk looks concerning, but the residual risk comes down significantly when you put proper processes in place: teacher review of outputs, bias awareness in CPD, lesson observations that check for quality drift, and clear communication with your community.

Low risks (green) need monitoring, not action

These start low and usually stay low with a reasonable level of professional oversight. Keep an eye on them, but don't let them consume your planning time.

Step 5: Decide What to Do

Think of your response options as a hierarchy:

  • Avoid the unacceptable. No pupil data in AI tools. Full stop.
  • Minimise where you can. Fact-check outputs, add guardrails, build review processes into workflows.
  • Remediate when errors slip through. Fix them quickly, openly, and use them as learning opportunities for staff.
  • Offset residual risk by turning mistakes into teachable moments for digital literacy, both for staff and students.

The important thing is to document the movement from inherent to residual risk. That documentation is your evidence that the school is managing AI responsibly rather than hoping for the best.
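
One lightweight way to document that movement is a register entry per harm, recording the inherent score, the controls, and the residual score side by side. This is a sketch under assumptions: the field names are hypothetical, and the residual figures are my illustration, not scores from the matrix above:

```python
from dataclasses import dataclass

@dataclass
class RiskEntry:
    """One row of a simple AI risk register (field names are illustrative)."""
    harm: str
    inherent: tuple[int, int]  # (impact, likelihood) before controls
    controls: list[str]        # safeguards you put in place
    residual: tuple[int, int]  # (impact, likelihood) after controls

privacy = RiskEntry(
    harm="Privacy breaches",
    inherent=(5, 4),
    controls=[
        "No pupil data in ChatGPT, ever",
        "Training on anonymising data before it leaves the school network",
        "Regular reminders and spot checks",
    ],
    residual=(5, 2),  # illustrative: impact stays high, controls cut the likelihood
)
```

Pair each entry with the banding function from Step 3 and you have a register you can put straight in front of governors: here's the risk, here's what we did, here's where it stands now.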

Why This Works Better Than a Blanket Policy

When you walk through a matrix like this, two things become immediately clear:

  1. Most risks are manageable with policies, training, and professional oversight. Schools are already good at managing risk. This is an extension of what you already do.
  2. Some risks are deal-breakers, especially around privacy and safeguarding. These must be addressed before you go any further.

A blanket "no AI" policy doesn't achieve either of those things. It just pushes usage underground where you can't see it, let alone manage it.

How to Run This in Your School

A risk matrix works best when it isn't just one person filling it out in isolation. Bring colleagues together: senior leaders, teaching staff, your DPO, maybe even governors. Pick a specific AI tool you're exploring, or one that's already in use, and run it through the five steps.

The end result is a clear, structured view of AI risks for that specific use case. You know what's acceptable, what needs safeguards, and what should be avoided entirely. In my experience, people are often surprised by how quickly the risks become clear and the conversation shifts from anxiety to practical planning.

If you're not sure where to start, pick the AI tool your staff use most right now. That's the one with the most risk and the most to gain from a structured assessment.

A risk matrix doesn't slow AI adoption down. It gives you the confidence to move forward because you've done the thinking first.

This article supports professional discussion around AI risk management. It is not legal advice. Always follow your organisation's policies and safeguarding procedures, and consult your data protection officer or legal team if you're unsure about a specific tool or use case.


Matthew Wemyss is an AIGP-certified AI in Education consultant and practising school leader. Book a discovery call to discuss AI governance training for your school.
