
We’re still some way off from Artificial Intelligence (AI) achieving the singularity, i.e. intelligence that surpasses that of humans. But it’s evolving at a far faster pace than previously anticipated. When ChatGPT burst onto the scene, it was thought AI would take decades to reach the same level of intelligence as a human; that milestone could now be just a few years away.
Of the seven stages of AI development, we’re now on the cusp of stage four, in which AI is able to reason and deduce. Often referred to as Agentic AI, and just one step away from Artificial General Intelligence, in which the technology becomes comparable with human intelligence, Agentic AI marks a significant achievement. Why? Because for the first time the technology will be able to make decisions and act autonomously, without the need for human prompting.
Why AI needs freedom
To get to the stage where Agentic AI can act truly independently, it first needs to lose its training wheels. Generative AI has been trained on distinct data sets, so to become more worldly-wise it will need to be able to call tools and retrieve data externally. Many agents are currently limited in their ability to do this: they need specific business logic for each system they operate on and integrate with. To overcome this issue, they require a standard interface.
Sound familiar? Essentially, AI needs an equivalent of the API, which transformed the ability of apps and systems to communicate with one another many years ago. The solution to this problem is the Model Context Protocol (MCP), unveiled back in November 2024.
How MCP will help
MCP is an open source framework that connects AI to external sources and tools, such as databases and APIs, making it possible for the AI to access and use outside information. It comprises an MCP host, the application or system where the AI is situated; an MCP client, which acts as a connector from the host; and an MCP server, which manages access to these external sources of information, effectively becoming middleware.
The MCP host therefore sits between the Large Language Model (LLM) and the MCP servers. When it receives a prompt, be that from a human or another AI, the host requests the available tools and information from its servers and relays them to the LLM, which then determines which tools should be called. It’s essentially a standard way to give AI access to a vast array of tools and data, and the expectation is that over time those LLMs will become Large Action Models (LAMs), able to understand and execute human commands autonomously.
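The flow above can be sketched in a few lines of Python. This is purely illustrative, not the real MCP SDK: every class and function name here (MockMCPServer, MCPHost, toy_model and so on) is a made-up stand-in, and the "model" simply picks a tool by name rather than reasoning over descriptions. What it shows is the shape of the exchange: the host gathers the tools its servers advertise, hands them to the model, and routes the model's chosen call back to the right server.

```python
# Hypothetical sketch of the host -> model -> server round trip.
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class Tool:
    name: str
    description: str
    handler: Callable[[str], str]


class MockMCPServer:
    """Stands in for an MCP server exposing external tools (e.g. a database)."""
    def __init__(self, tools: Dict[str, Tool]):
        self.tools = tools

    def list_tools(self) -> List[Tool]:
        return list(self.tools.values())

    def call_tool(self, name: str, argument: str) -> str:
        return self.tools[name].handler(argument)


class MCPHost:
    """The application hosting the AI; brokers between the model and servers."""
    def __init__(self, servers: List[MockMCPServer]):
        self.servers = servers

    def handle_prompt(self, prompt: str, model) -> str:
        # 1. Collect the tools every connected server advertises.
        available = [t for s in self.servers for t in s.list_tools()]
        # 2. Ask the model which tool to call for this prompt.
        tool_name, argument = model(prompt, available)
        # 3. Route the call to whichever server owns that tool.
        for server in self.servers:
            if tool_name in server.tools:
                return server.call_tool(tool_name, argument)
        raise LookupError(f"no server offers tool {tool_name!r}")


# Stand-in for the LLM: picks the first tool whose name appears in the
# prompt. A real model would reason over the tool descriptions.
def toy_model(prompt: str, tools: List[Tool]):
    for tool in tools:
        if tool.name in prompt:
            return tool.name, prompt
    return tools[0].name, prompt


server = MockMCPServer({
    "lookup_order": Tool("lookup_order", "Fetch an order record",
                         lambda arg: f"order record for: {arg}"),
})
host = MCPHost([server])
print(host.handle_prompt("lookup_order 1234", toy_model))
# → order record for: lookup_order 1234
```

The key point the sketch makes is that the host, not the model, holds the connections: the LLM only ever sees tool metadata and returns a choice.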
The price of freedom
Yet just like the API, MCP itself may become a target for poisoning or subverting the AI agent. In fact, because the technology opens up the conduit between the agent and the wider world and due to the level of autonomy those agents are expected to have, MCP dramatically increases the risk to AI systems.
Anthropic, which developed the Claude LLM, has devised a Responsible Scaling Policy to address this risk. Its AI Safety Level (ASL) system classes current LLMs as ASL-2, meaning they have access to limited information and so have limited capabilities. Agentic AI accessed through MCP, however, will be classed as ASL-3: it has low-level autonomous capabilities and therefore poses a higher risk of catastrophic misuse. Assigning Agentic AI ASL-3 status will require much stricter security measures to be met, such as adversarial testing by red teams. The company is currently looking to make Claude Opus 4 ASL-3 compliant.
Why this matters to APIs
So, what does this all have to do with APIs? Just as APIs became a target, so too will MCP, and we can expect API security to be extended to cover the MCP layer. AI agents will soon be the new masters, replacing the interactions we currently associate with mobile and web applications, and those agents will become the key elements attackers target in order to disrupt, defraud and manipulate operations. This makes securing APIs and MCP critical to protecting workflows.
Unless steps are taken to secure these channels, the danger is that AI agents will turn rogue. Organisations will want to begin building their own autonomous AI agents and will need to connect them using MCP plug-ins to call APIs and pull tools and data from every corner of the internet. Unless they put controls in place to monitor those requests, they risk those agents being abused and taken over. Security teams must therefore begin to adapt to this new world order so that they can assess and appraise the risks associated with this infrastructure.
Securing MCP
Securing MCP servers will require robust authentication and authorisation. There are reports of MCP servers storing the access tokens needed to authenticate with business systems in plaintext, for example, and MCP clients often lack credential management, so API tokens are also stored in plaintext. If multiple MCP servers are in use, the problem is magnified further.
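Two basic mitigations for the plaintext-token problem can be sketched in Python. This is a hypothetical example, not taken from any MCP implementation: the environment variable name and function names are invented. It shows the principle of reading a token from the environment at startup rather than from a config file, and, where a file is unavoidable, creating it with owner-only permissions.

```python
# Hypothetical token-handling sketch: prefer the environment over config
# files, and lock down any file that must hold a secret.
import os
import stat
import tempfile


def load_token(env_var: str = "MCP_SERVER_TOKEN") -> str:
    """Fetch the token from the environment; fail loudly if it is absent."""
    token = os.environ.get(env_var)
    if not token:
        raise RuntimeError(f"{env_var} is not set; refusing to start")
    return token


def write_restricted(path: str, secret: str) -> None:
    """If a file is unavoidable, create it readable only by the owner (0600)."""
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
    with os.fdopen(fd, "w") as f:
        f.write(secret)


os.environ["MCP_SERVER_TOKEN"] = "example-token"  # for demonstration only
token = load_token()
path = os.path.join(tempfile.mkdtemp(), "token")
write_restricted(path, token)
print(oct(stat.S_IMODE(os.stat(path).st_mode)))  # 0o600 on POSIX systems
```

In production the token would come from a secrets manager or OS keychain rather than a raw environment variable, but either beats a world-readable JSON config.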
Just like APIs, MCP will operate out of sight in most instances, so putting in place solutions that monitor for unusual activity will be a must. The traffic monitoring used to detect API attacks could help govern the workflows to and from MCP servers, looking for spikes in traffic or suspicious calls and patterns that could indicate subversion of the AI, such as prompt injection attacks.
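A traffic-spike check of the kind described above can be illustrated with a few lines of Python. This is a toy sketch, not a product feature: the class name, window size and threshold factor are arbitrary assumptions. It flags an interval in which a client's call rate far exceeds its recent baseline, which is the simplest signal that an agent may have been hijacked into hammering an MCP server.

```python
# Illustrative rate-spike detector for calls from one MCP client.
from collections import deque


class SpikeDetector:
    def __init__(self, window: int = 5, factor: float = 3.0):
        self.window = window                    # past intervals kept as baseline
        self.factor = factor                    # spike = factor x baseline average
        self.history = deque(maxlen=window)

    def observe(self, calls_in_interval: int) -> bool:
        """Return True if this interval's call count looks like a spike."""
        if len(self.history) == self.window:
            baseline = sum(self.history) / self.window
            spike = calls_in_interval > self.factor * max(baseline, 1.0)
        else:
            spike = False                       # not enough history yet
        self.history.append(calls_in_interval)
        return spike


detector = SpikeDetector()
traffic = [10, 12, 9, 11, 10, 95]               # calls per interval
flags = [detector.observe(n) for n in traffic]
print(flags)
# → [False, False, False, False, False, True]
```

Real monitoring would look at far more than volume (call sequences, argument patterns, prompt content), but even this crude baseline comparison catches the blunt abuse cases.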
There’s no doubt that MCP holds great promise in allowing organisations to tap into external resources which will then enable them to benefit from the autonomy of agentic AI. But it’s a journey they need to embark upon with caution. Deploying MCP will require them to look at how they secure not just their LLM but their agents, servers and workflows to protect the business.





