
MCP Can Be RCE for You and Me

Published 11/25/2025

Written by Rich Mogull, Chief Analyst, CSA.

Before I get into the meat of this post, I want to emphasize that I am a huge fan of MCP (Model Context Protocol) servers and I believe the technology offers more than enough value to justify its use in the enterprise. But, like everything else on the planet, MCP is a double-edged sword. And our job in security is to make even risky things as safe as possible.

Okay, so why the big disclaimer up front? Because I don’t want you to think this is all negative, or that I’m telling you not to use MCP.

Last week I had multiple conversations with security leaders on MCP use in the enterprise. They had huge concerns over MCP server proliferation internally, and it’s a bit of a different issue than other uses of generative AI. Why are they worried?

By this point pretty much anyone working in tech has heard of MCP. Stealing from the official documentation: “MCP (Model Context Protocol) is an open-source standard for connecting AI applications to external systems.” And yeah, if you’re reading this you probably already knew that.

[Figure: MCP architecture diagram]

LLMs are smart but dumb. They’re trained on natural language and general knowledge (depending on the model) but, on their own, they don’t have any way to take action.

That’s what MCP servers are here for. They sit between the LLM and a tool, telling the LLM what the tool can do; the LLM can then invoke whatever functionality the MCP server exposes. To over-simplify, an MCP server is a kind of API gateway designed to advertise its own capabilities and handle requests.
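To make that concrete, here’s a stripped-down sketch of the pattern in plain Python. This is not the real MCP SDK (the class and tool names are invented for illustration); it just shows the two moves an MCP server makes: advertise tool descriptions to the model, then dispatch whatever calls the model issues.

```python
import json

# Illustrative only -- not the real MCP SDK. It mimics the core pattern:
# the server advertises tool schemas, and the model invokes whatever
# functionality the server exposes.

class ToyMCPServer:
    def __init__(self):
        self._tools = {}

    def tool(self, name, description):
        """Register a function plus the description advertised to the LLM."""
        def decorator(fn):
            self._tools[name] = {"description": description, "fn": fn}
            return fn
        return decorator

    def list_tools(self):
        """What the LLM sees: names and descriptions, not the code."""
        return [{"name": n, "description": t["description"]}
                for n, t in self._tools.items()]

    def call_tool(self, name, **kwargs):
        """Dispatch a model-initiated call to the registered function."""
        return self._tools[name]["fn"](**kwargs)

server = ToyMCPServer()

@server.tool("lookup_ticket", "Fetch a ticket from the internal tracker")
def lookup_ticket(ticket_id: str) -> str:
    # Stand-in for a call into an internal application.
    return f"TICKET {ticket_id}: status=open"

print(json.dumps(server.list_tools()))
print(server.call_tool("lookup_ticket", ticket_id="OPS-42"))
```

The key point is the last line: once the tool is registered, the model, not the developer, decides when and with what arguments it gets called.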

Here’s where things get a little thorny in terms of enterprise use:

  • MCP servers are incredibly easy to create. As in, you just load up the documentation and ask an LLM to create one for you. There are also a ton of pre-built MCP servers and services widely available, including ones built into major platforms like Google and Microsoft Office.
  • Developers are creating a ton of MCP servers internally and running them on their local systems to interface with their own tools and services, including internal applications.
  • When you connect an LLM to an application via MCP, that LLM is now interacting directly with the application as if it were a user, within the context of the user who set up the MCP server.
  • In other words, your AI can now run arbitrary commands on internal applications via a developer’s system, with that user’s permissions.
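To see why this amounts to code execution, consider a hypothetical tool a developer might wire up locally (the function name and command here are invented for illustration, not taken from any real MCP server): whatever string the model supplies runs on the developer’s machine, as the developer.

```python
import subprocess

# Hypothetical tool a developer might expose via a local MCP server.
# Whatever command string the model supplies runs as THIS user, on
# THIS machine -- the model's reach equals the developer's permissions.

def run_build_step(command: str) -> str:
    """Run a shell command and return its output (dangerous by design here)."""
    result = subprocess.run(command, shell=True, capture_output=True, text=True)
    return result.stdout.strip()

# A model asked to "check who owns the build" could emit this call,
# which prints the developer's own username:
print(run_build_step("whoami"))
```

Nothing exotic is happening; the danger is simply that the argument to `run_build_step` now comes from a model rather than a person.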

Now this isn’t full unconstrained remote code execution. The LLM/MCP is limited to whatever permissions the user has. It can do no more damage than the user. And if you have technical controls in place to only allow trusted AI usage, you still have some degree of control. Depending on the LLM, you can, for example, restrict MCP usage at your organizational account level.

There are also a range of security options when building officially supported MCP servers.

However, the problem communicated to me is that not all organizations have that baseline level of control in place, and developers are creating large numbers of arbitrary MCPs for their applications and internal tools.

So what’s a CISO to do?

This is an area of active research, and the technologies are changing nearly daily, but all hope is not lost.

  • First, if you aren’t already familiar with MCP servers I strongly recommend you check out our CSA MCP Security Center maintained by Kurt Seifried. There’s a lot of great information in there including some tool examples.
  • Consolidate any coding onto enterprise-approved and managed LLMs. I mean, there are only three or so in wide use, and you can set policies to help gain visibility and reduce MCP misuse. The challenge is that this is a genuinely useful technology, and you may hit difficulties in totally restricting usage. But you at least need to lay the foundation and start the conversation.
  • An LLM can only talk to an MCP server if it’s registered with the application, either server-side or on the user’s system. For example, when I create a local MCP server I have to add it to a JSON file in my home directory. These files live at fixed, known locations and are thus auditable.
  • Third-party discovery and management tools are starting to appear on the market. It’s early days, but they’re worth watching. I fully expect these tools (which also tend to cover agents and other use cases) to become prolific, especially once the big vendors start buying the startups and kicking in their marketing engines.
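As a starting point for auditing those registration files, a short script can sweep candidate locations and list the declared servers. The paths below are examples only (they vary by client and OS), and the `mcpServers` key reflects the layout common clients use; treat both as assumptions to adapt for your environment.

```python
import json
from pathlib import Path

# Example config locations -- assumptions to adapt, not an
# authoritative inventory. Different MCP clients use different paths.
CANDIDATE_CONFIGS = [
    Path.home() / "Library/Application Support/Claude/claude_desktop_config.json",
    Path.home() / ".cursor/mcp.json",
]

def registered_mcp_servers(paths):
    """Collect the MCP servers declared in whichever config files exist."""
    found = {}
    for path in paths:
        if not path.is_file():
            continue
        try:
            config = json.loads(path.read_text())
        except (json.JSONDecodeError, OSError):
            continue
        # Many clients keep declarations under an "mcpServers" key.
        for name, spec in config.get("mcpServers", {}).items():
            found[name] = {"config": str(path), "command": spec.get("command")}
    return found

for name, info in registered_mcp_servers(CANDIDATE_CONFIGS).items():
    print(f"{name}: {info['command']}  (from {info['config']})")
```

Even this crude sweep gives you something most organizations lack today: a list of which developers have wired which servers into which LLM clients.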

I personally despise the term “Shadow AI” as much as I’ve always hated “Shadow IT”. All we are talking about are tools our teams use to get their jobs done, and it’s only “shadow” because, for whatever reason, they picked something unofficial. And often that reason is because they didn't have an official tool or that official tool was sub-par.

But I do think MCP servers earn the “shadow” title. They can be a straight line from a web-hosted LLM into your internal apps. This opens up everything from unpredictable behavior to prompt injection attacks. We have some tools to manage the problem, I expect more to emerge, and now is the time to have the conversations and start assessing your posture.
