Autonomous AI Agents Explained: What Teachers Need to Know

Matthew Wemyss7 min read
Autonomous AI Agents Explained: What Teachers Need to Know

An autonomous AI agent is software that doesn't just answer questions. It acts. It can read your files, send emails, run code, book meetings, and carry out multi-step tasks on your computer without being prompted each time. If a chatbot is a search engine you can talk to, an agent is a personal assistant with the keys to your house.

That distinction matters enormously for schools, and most acceptable use policies haven't caught up with it yet.

What makes an AI agent different from a chatbot?

When you use ChatGPT or Claude as a chatbot, you type a question and get a response. The interaction is contained. You control what goes in and what comes out. Nothing happens on your computer unless you copy and paste it yourself.

An AI agent works differently in three important ways:

  • It acts independently. You give it a goal, and it figures out the steps. "Check my emails every 15 minutes and flag anything urgent" is not a prompt. It's a standing instruction the agent carries out on its own.
  • It has system access. The agent can read files, write documents, open applications, and interact with your operating system. It doesn't just generate text. It does things.
  • It runs continuously. Some agents operate 24/7 as background services, checking in on tasks, monitoring inboxes, and executing routines even while you sleep.

This is the shift that schools need to understand. We've spent two years getting to grips with chatbots. Agents are a different category of tool entirely.

A real-world example: the rise of OpenClaw

To make this concrete, consider OpenClaw. It started in late 2025 as an open-source project that gave Claude (Anthropic's AI model) direct access to a user's computer. It went viral, picking up 100,000 stars on GitHub in two months, which is extraordinary for open-source software.

OpenClaw runs as a background service on your machine. It connects to your messaging apps, reads incoming messages, figures out what you want, and then executes tasks using your computer's command line. It writes files. It runs scripts. It sends messages. Users started calling it their "personal operating system."

The project went through three name changes in three months (Clawdbot, then Moltbot, then OpenClaw), which tells you how fast this space is moving. The rebranding happened so quickly that scammers grabbed the old web addresses and started selling fake versions. The community even built a social network exclusively for AI agents, where bots post, reply, and interact while humans can only observe.

None of this would have been imaginable two years ago. All of it is live right now.

The security problem schools need to understand

OpenClaw, and tools like it, have what security researchers call the "lethal trifecta":

  1. It can read private data. Files, emails, passwords, anything on the machine it runs on.
  2. It processes content from the internet. Websites, messages from strangers, incoming emails.
  3. It can communicate externally. Send messages, make web requests, share data with third parties.

You can see where this goes. If someone sends a carefully crafted email containing hidden instructions, the agent might follow them. In one demonstration, a researcher sent a single malicious email to an OpenClaw instance. Five minutes later, the AI had quietly forwarded the user's private emails to an external address. The user noticed nothing.

The community-built plugin system has already been exploited. One popular plugin, ranked number one in the community, was found to be silently stealing data and running arbitrary commands on users' computers.

The main risk isn't AI "going rogue." It's data leakage. If an AI agent can read your emails and files while also processing content from the internet, it can be tricked into sharing information without anyone noticing. That is a safeguarding and data protection issue, not a science fiction one.

Why this matters for schools right now

Research suggests that around 22% of employees in some organisations have installed autonomous AI tools without telling their IT department. They're using them to automate routine work, running on personal devices, outside any security controls.

If that pattern holds in schools, there may already be staff members running AI agents that have access to school email, Google Drive, or shared student data. That is not a hypothetical risk. It is a data protection incident waiting to happen.

Students are also finding these tools. A student who installs an AI agent on their home computer and connects it to their school email account has created a direct pipeline between an unsecured AI system and your school's data. That needs to be treated as a safeguarding concern, not just a technology curiosity.

What to do if someone in your school is already using this

If a student mentions it: Keep the conversation calm and practical. Ask what device it's running on, what it can access, and whether it's connected to any school accounts. If it's running on a home device, contact parents so they understand what their child has set up and the risks involved. If it's connected to school systems, that needs to stop immediately and be reviewed with IT. Depending on what the student has connected, it may also be worth flagging to your Designated Safeguarding Lead.

If a colleague mentions it: Same approach. If it's connected to a work laptop, school email, Google Drive, or student data, that's a hard stop. It needs to be reviewed with IT and leadership immediately. If they're experimenting at home on a personal device with no connection to work systems, that's their decision, but they should understand the risks.

Five things to add to your acceptable use policy

Most school AUPs were written for a world of web browsers and downloaded apps. Autonomous AI agents don't fit neatly into those categories. Here's what to add:

  1. Define autonomous AI. Include a clear definition that distinguishes agents (software that acts independently on a device) from chatbots (software that responds to prompts). Staff and students need to understand the difference.
  2. Prohibit autonomous AI on school systems. Any software that can independently read, write, or transmit data on school devices or accounts should be explicitly banned until formally approved.
  3. Cover personal devices that connect to school services. If a personal laptop accesses school email or cloud storage, the AUP should extend to what else runs on that device. An AI agent on a personal machine that's logged into school Google Workspace is a school data protection problem.
  4. Require disclosure. Staff should be required to declare if they are using autonomous AI tools that interact with any school-related data, even on personal devices.
  5. Include an incident response line. What happens if an agent is discovered running on a school system? Who reviews it? How quickly? Write the process down before you need it.

The bigger picture

Autonomous AI agents are probably the future. In a few years, these systems will likely come with proper permissions, logging, audit trails, and guardrails. Right now, we're in the early phase where capability has arrived before the safety infrastructure. That's a familiar pattern in technology, and schools have navigated it before with social media, smartphones, and generative AI.

The sensible position is straightforward: interesting technology, genuinely powerful, not ready for school or work systems. Adults who want to experiment should keep it isolated and away from real data. Students experimenting at home need parental awareness. And every school should check whether their acceptable use policy covers this before it becomes a problem rather than after.

Don't panic, but don't ignore it either. This technology exists now, and people are using it. Better to have these conversations early and calmly than to wait until something goes wrong.


Matthew Wemyss is an AIGP-certified AI in Education consultant and practising school leader. Book a discovery call to discuss updating your school's AI policy.

Share
Newsletter

Subscribe to AI Insights

Practical strategies for integrating AI in education, delivered to your inbox.

By subscribing, you agree to receive the IN&ED newsletter and email communications. You can unsubscribe at any time. Privacy Policy