April 2026 · 6 min read

Your AI Agent Isn't Just Watching Anymore

The majority of AI agent tools now take direct action in the real world. Most enterprise governance frameworks haven't caught up.

The Gap Nobody Wants to Acknowledge

Enterprise governance frameworks were designed for a different kind of AI. The mental model baked into most risk policies, vendor assessments, and compliance checklists is one of a passive system: an AI that reads documents, surfaces summaries, and offers recommendations that a human then acts on. That model made sense when AI meant a dashboard widget or a search assistant. It no longer describes what is actually running inside enterprise environments.

The Model Context Protocol ecosystem has quietly changed the equation. MCP-connected agents don't just retrieve and report — they send emails, execute code, modify files, call APIs, and trigger downstream workflows. The shift happened gradually, then all at once. Governance teams that were still debating acceptable-use policies for generative text found themselves with agents that had already been granted write access to production systems.

The majority of AI agent tools now do things, not just say things.

The Numbers That Should Be Keeping Risk Teams Awake

A joint empirical study by the UK AI Security Institute and the Bank of England set out to map the MCP tool ecosystem as it actually exists — not as vendors describe it, but as it is deployed. The scale of what they found is difficult to square with the assumption that AI agents are still primarily advisory tools.

The study catalogued over 177,000 distinct MCP tools across the ecosystem. More than half of those tools are classified as action-taking: they don't just read and report; they execute. Among them are hundreds of servers with direct payment execution capabilities — tools that can move money without a human in the loop. These are not edge cases or experimental deployments. They are the mainstream of what the MCP ecosystem has become.

177,000+ MCP tools catalogued in the AISI/Bank of England study

Over 1 billion total downloads across the ecosystem

More than half of all tools take direct action — they don't just read or report

Hundreds of servers with direct payment execution capabilities

The Recursive Problem Most Teams Miss

There is a structural problem with how most enterprises are approaching AI agent governance, and it runs deeper than most risk frameworks acknowledge. The tools being deployed to audit, monitor, and constrain AI agents are themselves MCP tools. They operate inside the same ecosystem, under the same protocol, and are subject to the same categories of risk they are meant to detect. Governance built on top of the thing it is governing is not governance — it is a circular dependency.

Most enterprise risk frameworks were not designed with this recursion in mind. They assume a clean separation between the system under review and the tools doing the reviewing. In the MCP ecosystem, that separation does not exist. An agent with access to a monitoring tool has access to a tool that can itself take action, read sensitive context, or be manipulated by a compromised upstream server. The audit layer is part of the attack surface.

The governance problem is recursive: the tools you use to watch your agents are the same kind of tools your agents are using — and they carry the same risks.

What Honest Reasoning Actually Requires Here

Addressing this requires more than policy updates or vendor questionnaires. It requires structured governance at the tool level, not just the agent level. The unit of risk in an MCP-connected environment is the individual tool invocation: what tool was called, by which agent, with what inputs, and what it did. Frameworks that reason only at the agent or model level are missing the layer where most of the actual risk lives.
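What a tool-invocation-level audit record might look like can be sketched in a few lines. This is a minimal illustration, not part of the MCP specification: the field names, the `ActionType` categories, and the `ToolInvocation` structure are assumptions made for the example.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum

class ActionType(Enum):
    # Illustrative risk classes for tool calls, mirroring the
    # read-only / write / execute / financial distinction above.
    READ_ONLY = "read_only"
    WRITE = "write"
    EXECUTE = "execute"
    FINANCIAL = "financial"

@dataclass(frozen=True)
class ToolInvocation:
    """One audit record per tool call: which agent called which tool,
    with what inputs, and what the outcome was."""
    agent_id: str
    tool_name: str
    action_type: ActionType
    inputs: dict
    outcome: str
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

# A hypothetical record for an action-taking call.
record = ToolInvocation(
    agent_id="agent-42",
    tool_name="send_email",
    action_type=ActionType.WRITE,
    inputs={"to": "ops@example.com", "subject": "Deploy complete"},
    outcome="sent",
)
```

The point of the record is granularity: it ties risk to a single invocation rather than to an agent or model in the aggregate, which is the level at which most frameworks currently stop.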

Four operational steps follow directly from this analysis:

  1. Inventory every MCP tool your agents can access — not just the ones you approved

  2. Classify tools by action type: read-only, write, execute, financial

  3. Apply least-privilege constraints at the tool level, not just the agent level

  4. Establish audit trails for every action-taking tool invocation
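The four steps above compose naturally into a single enforcement point. The sketch below is one hypothetical way to wire them together: the tool registry is the inventory (step 1), the `ActionType` labels are the classification (step 2), per-agent grants enforce least privilege at the tool level (step 3), and every invocation — allowed or denied — lands in an audit log (step 4). All names and structures here are illustrative assumptions, not an MCP API.

```python
from enum import Enum

class ActionType(Enum):
    READ_ONLY = 1
    WRITE = 2
    EXECUTE = 3
    FINANCIAL = 4

# Step 1 + 2: inventory every reachable tool and classify it.
TOOL_REGISTRY = {
    "search_docs": ActionType.READ_ONLY,
    "send_email": ActionType.WRITE,
    "run_script": ActionType.EXECUTE,
    "issue_payment": ActionType.FINANCIAL,
}

# Step 3: least-privilege grants at the tool level, per agent.
AGENT_GRANTS = {
    "support-bot": {ActionType.READ_ONLY, ActionType.WRITE},
}

# Step 4: audit trail of every invocation attempt.
AUDIT_LOG = []

def invoke(agent_id, tool_name, **inputs):
    action = TOOL_REGISTRY.get(tool_name)
    if action is None:
        raise PermissionError(f"{tool_name} is not in the tool inventory")
    allowed = action in AGENT_GRANTS.get(agent_id, set())
    # Log before enforcing, so denied attempts are auditable too.
    AUDIT_LOG.append({
        "agent": agent_id,
        "tool": tool_name,
        "action": action.name,
        "inputs": inputs,
        "allowed": allowed,
    })
    if not allowed:
        raise PermissionError(
            f"{agent_id} lacks the {action.name} grant for {tool_name}"
        )
    # ... dispatch the real tool call here ...
    return "ok"
```

Under this scheme, `invoke("support-bot", "send_email", to="ops@example.com")` succeeds, while `invoke("support-bot", "issue_payment", amount=100)` is logged and then refused, because the agent holds no `FINANCIAL` grant.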

The Honest Bottom Line

The shift from passive to active AI agents is not a future risk to plan for — it has already happened. The AISI and Bank of England study is not a warning about where the ecosystem is heading; it is a description of where it already is. Over 177,000 tools, more than half of them action-taking, hundreds with direct payment execution capabilities, and over a billion downloads. The infrastructure for autonomous AI action is not being built. It is already deployed.

Governance frameworks that still treat AI as advisory are not just behind — they are reasoning about a system that no longer exists. The question for risk and IT teams is not whether to govern active AI agents, but whether their current frameworks are capable of doing so. Most are not, and the gap is widening every time a new MCP server is connected to a production environment.

The agents are already acting. The only question is whether your governance is.

Source: UK AI Security Institute & Bank of England, Empirical Study of the MCP Tool Ecosystem (2025). 177,000+ tools analysed.

Your Agents Are Already Acting. Is Your Governance?

Reasoning Services gives risk and IT teams the tools to audit, constrain, and reason about AI agent behaviour.