Directors of Intelligence: Why AI Needs Human Direction, Not Competition

Matthew Wemyss · 8 min read

A student submits a polished essay. The structure is clean, the argument is coherent, the grammar is flawless. You ask them about their reasoning and they hesitate. They cannot explain the choices because they did not make any. The AI made them. The student just pressed submit.

This is not a technology problem. It is a direction problem. And it is one that Ruth Noller's creativity equation, written decades before anyone had heard of ChatGPT, already anticipated.

What is the Directors of Intelligence framework?

Directors of Intelligence is a framework I've developed for schools navigating AI. The core idea: students don't need to compete with AI. They need to direct it. The framework draws on Ruth Noller's creativity equation and reframes AI literacy around human judgement, creative agency, and deliberate attitude.

It starts from a simple observation. AI can now handle knowledge retrieval, idea generation, and even evaluation at speed. If those were the only things that mattered, we'd be in trouble. But Noller's equation tells us they aren't.

Noller's creativity equation and why it still matters

Ruth Noller's formula is deceptively simple:

C = f_a(K, I, E)

  • C is creativity
  • K is knowledge
  • I is imagination
  • E is evaluation
  • f_a means creativity is a function of those three elements, with the subscript a standing for attitude: the disposition that drives the whole function

For years, this was a useful way of explaining why rote learning felt hollow and why telling students to "be creative" without any judgement didn't really work. It lived quietly in education theory. Sensible. Agreeable. Easy to forget.

Then AI forced a much harder question: what happens if a machine can do all of that better than we can?

Knowledge is everywhere now. Imagination used to feel like our last human stronghold, but a model can generate a hundred ideas before a student has even finished reading the task. Even evaluation is being squeezed. Systems can critique, revise, and explain in ways that sound increasingly convincing.

So we end up in an awkward place. If the machine can know, generate, and polish, what exactly is the learner meant to be doing?

Why attitude sits outside the brackets

There is a reason Noller placed attitude outside the brackets. Attitude governs the whole function: set it to zero and everything collapses. It doesn't matter how much knowledge is present or how imaginative the outputs appear. If the attitude is passive, if the stance is "just get it done," the creativity is effectively zero.
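To make that collapse concrete, here is a deliberately simple toy model. This is not Noller's actual mathematics: her formula only says creativity is a function of knowledge, imagination, and evaluation, driven by attitude. The averaging body and the 0-to-1 scales below are illustrative assumptions, chosen purely to show what "attitude outside the brackets" does to the result.

```python
def creativity(knowledge: float, imagination: float, evaluation: float,
               attitude: float) -> float:
    """Toy sketch of C = f_a(K, I, E).

    All inputs are on an arbitrary 0-1 scale. The body (a simple
    average scaled by attitude) is a placeholder, not Noller's formula;
    the point is only that attitude multiplies everything else.
    """
    return attitude * (knowledge + imagination + evaluation) / 3

# Strong knowledge, imagination, and evaluation with zero attitude:
print(creativity(0.9, 0.9, 0.9, attitude=0.0))  # 0.0
# The same inputs with full attitude:
print(creativity(0.9, 0.9, 0.9, attitude=1.0))
```

However the inner function is defined, a passive stance zeroes out the whole expression, which is the author's point about polished artefacts with no growth behind them.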

What you're left with is a polished artefact and very little growth behind it.

This is the insight that makes the Directors of Intelligence framework click. AI hasn't broken the creativity equation. It has clarified it. Knowledge is cheap. Execution is accelerating. Direction is scarce.

When AI stops responding and starts acting

The urgency grows once AI stops being something you talk to and starts acting in the world.

Agentic AI systems don't just respond. You give them access and a goal, and they plan, execute, check, and loop. On the surface, this looks like initiative. Until you notice what's still missing. These systems don't decide what matters. They don't know whether something is wise, safe, or appropriate unless someone has already framed that judgement.

Google, OpenAI, Anthropic, Meta, and dozens of startups are all building agent systems where AI doesn't just respond but acts, autonomously, in chains of delegation that most adults don't fully understand, let alone students.

These systems don't actually imagine. They produce. They don't wonder about anything. They don't care.

These systems don't carry responsibility. We do.

Three concepts from AI delegation research that should concern every school leader

A recent Google DeepMind paper, Intelligent AI Delegation (Tomasev, Franklin and Osindero, 2026), was written about AI systems. But reading it, I couldn't stop seeing my students. Three concepts hit hardest.

The zone of indifference. A range of instructions executed without critical deliberation. In students, it's the gap between receiving an AI output and submitting it without scrutiny. When a student accepts every output without question, they aren't directing. They're routing.

The authority gradient. When one party is perceived as far more capable, the other stops pushing back. In aviation, steep authority gradients between captains and first officers have contributed to crashes. In classrooms, the same dynamic plays out between students and AI. They defer to fluency, not because they're lazy, but because they have no practised basis for questioning it.

De-skilling. The more you automate routine work, the less prepared the human is to intervene when things go wrong. If students never write a first draft, they cannot judge whether the AI's draft serves the purpose.

What Directors of Intelligence actually looks like

Directors of Intelligence don't compete with AI. They set direction. They define values. They judge quality. They decide when to stop. They understand enough about the tools to use them well, and enough about the world to know where they shouldn't be used at all.

A student who has never learned to direct their own thinking will not suddenly learn to direct a machine's thinking. A student who can't break a problem down for themselves won't be able to break one down for an AI. A student who doesn't have the habit of questioning what they read won't question what an AI generates.

Agency comes before agents.

This is what makes the Directors of Intelligence idea bigger than an AI strategy. It's about what school is actually for. We aren't just preparing students to use tools well. We're preparing them to think for themselves in a world that is making it increasingly easy not to.

The Directors of Intelligence audit

Five questions for your leadership team. Score each 1 to 5. If your total is below 15, you're probably designing for compliance, not direction.

1. Decomposition. Can students break a complex problem into parts and decide which parts need their thinking, which could benefit from collaboration, and which might be handed to a tool? This is a thinking skill first. AI just raises the stakes.

2. Delegation with intent. When students hand work over to a peer, a process, or an AI, can they explain what they're delegating, why, and what they're deliberately keeping for themselves? If they can't articulate that, they aren't delegating. They're abdicating.

3. Monitoring and verification. Are students in the habit of checking outputs against their own judgement, whether that output comes from a group discussion, a textbook, or a language model? Fluency is not accuracy. Students need the reflex to question what sounds right.

4. Ownership of outcome. If you asked a student "Why did you make this choice?", would they answer with conviction, or point to whatever source made the choice for them? Directed thinkers own their reasoning. That matters long before AI enters the picture, and even more once it does.

5. Deliberate human practice. Are there tasks in your curriculum where students must do the thinking themselves, not because tools are unavailable, but because doing the work is how judgement is built?

Three things you can do this term

1. Replace "What did you find?" with "What did you decide?" Focus assessment on choices rather than outputs. When students present work, begin with their reasoning. This makes direction visible.

2. Introduce the delegation brief. Before any assisted task, create a short brief: what is being delegated, why, what success looks like, and what they'll judge themselves. Teach this with the same care as essay writing. If they can't frame the brief, they aren't directing the work.

3. Protect struggle time. Some tasks must remain deliberately hard, not because help is unavailable, but because doing the work is how students build the judgement to know when help is good enough. If students never draft from scratch, they can't assess a draft. If they never sit with a problem, they won't spot a weak solution, whatever its source.

The real question for 2026

The question isn't "How do we use this tool?" It's "Who are we helping young people become in a world where intelligence is a utility?"

Access is not wisdom. Power is not judgement. And tools will never teach purpose. That part is still our job.

We can't take it for granted any more. Not when a machine will happily do the thinking for you, fluently, confidently, and without ever asking whether you learned anything in the process.

Run the audit with your leadership team this week. It takes 15 minutes. Score honestly. Try the three actions this term. See what changes.


References

  • Noller, R.B. (1977). Scratching the Surface of Creative Problem Solving: A Bird's Eye View of CPS. Buffalo, NY: DOK Publishers.
  • Tomasev, N., Franklin, J. and Osindero, S. (2026). Intelligent AI Delegation. arXiv:2602.11865v1, 12 February 2026.

Matthew Wemyss is an AIGP-certified AI in Education consultant and practising school leader. Book a discovery call to discuss implementing the Directors of Intelligence framework in your school.

