The Model Context Protocol is well-designed. The ecosystem around it is not secure. That distinction matters because the organizations asking “is MCP safe?” are asking the wrong question. MCP the protocol has sound architecture, proper authorization specs, and a clean transport layer. MCP the ecosystem, 3,012 servers in the official registry, thousands more in unofficial directories, the vast majority with no security scanning, is an attack surface that most security teams haven’t started to model.
Here is where MCP security stands in March 2026, based on our analysis of the entire registry and the threat research published in the past twelve months.
The Numbers
Five data points define the current state. 3,012 unique servers in the official registry. Only 8.5% use OAuth for authentication, the rest rely on static API keys or no authentication at all. Seven CVEs filed against MCP implementations in twelve months, including a CVSS 9.6 remote code execution in the mcp-remote npm package (437,000 downloads at disclosure). Over 42,000 OpenClaw agent instances found exposed on the public internet in January 2026, leaking API keys, Slack credentials, and chat histories. And 15.4% of registry servers have no source code available, you cannot audit what you cannot read.
These are not edge cases. This is the baseline.
Five Threat Categories
The threat picture breaks into five categories. Each one is active, not theoretical.
Supply Chain Attacks
MCP servers are software packages with dependency trees. A legitimate server with hundreds of installs gets compromised through a dependency update or maintainer account takeover, and every organization running that server now has a backdoor. The difference from traditional supply chain attacks: a compromised MCP server doesn’t just break a build; it can exfiltrate production data, because MCP servers have direct access to enterprise systems by design.
OpenClaw’s skill marketplace proved the failure mode at scale. By February 2026, researchers identified over 800 malicious skills out of roughly 4,000 total (a 20% malicious rate) delivering the Atomic macOS Stealer to developers who installed them. No verification layer between “published” and “installed on your machine.”
Overpermissioned Servers
A calendar read server that requests write access to Drive, Gmail, and Admin Console. A Slack notification server that reads environment variables beyond its API credentials. Most MCP servers request broad permissions because it’s easier to develop against, capabilities far beyond their stated function. Research from Astrix Security found that 88% of MCP servers require some form of credentials, but few scope their access to the minimum required.
The permission gap is the most common vulnerability. Not malicious intent, developer convenience creating exploitable surface area.
Data Exfiltration
When an MCP server executes a tool call, it has access to whatever data the connected system returns. A server connected to your CRM can read every customer record. If that server makes outbound HTTP requests to undeclared endpoints (something no registry currently checks for) your data leaves your infrastructure without any logging or detection.
Tool poisoning amplifies this risk. A server embeds malicious instructions in its tool descriptions, metadata that the LLM consumes but humans never see. The user sees a tool called “search_emails.” The LLM sees instructions to also exfiltrate SSH keys. Invariant Labs demonstrated this against Claude Desktop, successfully extracting private repository source code through a poisoned GitHub MCP server.
Credential Exposure
Only 8.5% of MCP servers use OAuth. The remaining 91.5% rely on static API keys stored in .env files, mcp.json configs, and environment variables. Static keys don’t expire. When a developer leaves a company, their MCP credentials persist indefinitely. API keys typically grant full access: no scoping, no time limits, no rotation.
The OpenClaw exposure proved this at scale: 42,000+ instances with MCP enabled by default, no authentication required, credentials extractable from publicly reachable endpoints. These were not misconfigured outliers. They were the default deployment.
No Audit Logging
When an MCP server reads 10,000 customer records and sends them to an external endpoint, who logs it? In most deployments, nobody. There is no record of which agent called which tool, what parameters were sent, what data was returned, or whether the server’s behavior matched its declared purpose. Every compliance framework (SOC 2, HIPAA, GDPR) requires audit trails for system access. MCP deployments typically have none.
What’s Changed Since MCP Launched
The protocol launched in late 2024. Eighteen months later, the security picture has evolved in three dimensions.
Growing awareness. The security research community is paying attention. CyberArk, Invariant Labs, Palo Alto Unit 42, and Check Point have all published MCP-specific threat research. The OWASP Top 10 for Agentic Applications now includes Agent Goal Hijack, Tool Misuse, and Agentic Supply Chain Vulnerabilities as named categories. The CoSAI MCP Security Framework provides a threat model with actionable controls. These didn’t exist a year ago.
Emerging standards. The MCP Trust Framework (MTF) provides automated scoring across four trust dimensions. mpak.dev enforces MTF on every published server, the first registry with a security pipeline. The MCP spec itself added OAuth 2.1 with PKCE, resource indicators, and client identity metadata. The mechanisms exist. Adoption is the bottleneck.
Enterprise governance patterns. Organizations deploying MCP at scale are developing internal policies: minimum trust levels for production servers, mandatory dependency audits, container isolation with network egress controls. These patterns are emerging through practice, not handed down by a standards body. NimbleBrain’s client engagements have shaped several of them, we build the governance layer alongside the agent infrastructure because The Anti-Consultancy model means we operate what we ship.
What’s Still Missing
Three gaps remain open.
Registry-level scanning is not universal. mpak.dev scans every server. The official MCP registry does not. Unofficial directories with 17,000+ listings have no verification at all. The ecosystem’s default posture is “trust the publisher,” which is the same posture that produced npm supply chain attacks at scale.
OAuth adoption is stuck at 8.5%. The spec provides the right authorization model. Developers don’t implement it because static API keys are easier. Until OAuth implementation becomes as simple as pasting an API key into a config file, adoption will remain low. This is a tooling problem, not a specification problem.
Runtime behavioral monitoring barely exists. Static analysis catches malicious patterns in source code. It doesn’t catch servers that behave differently at runtime than they declared in their manifest, the “rug pull” attack where a server passes code review and then changes behavior after deployment. Runtime verification tools like MCP-Guard are emerging but not widely deployed.
What This Means for Enterprise Deployment
MCP is the right standard for connecting agents to enterprise systems. The protocol is vendor-neutral, well-designed, and gaining adoption across every major AI platform. The security gap is real but addressable, and organizations that treat security as a prerequisite will ship agents to production. Organizations that treat it as an afterthought will add to the pilot graveyard.
The practical path: adopt a trust framework (MTF is the most complete open standard), source servers from registries with security scanning (mpak.dev), deploy in isolated containers with network policies, log every tool call, and rotate credentials. These aren’t aspirational recommendations. They’re the minimum viable security posture for production agent tools.
NimbleBrain built the infrastructure (mpak.dev, the MCP Trust Framework, 21+ production MCP servers) because we needed it for our own client deployments. Business-as-Code treats security artifacts the same way it treats business schemas and skills: as structured, version-controlled definitions that agents and governance systems can read and enforce. Security that lives in a policy document is security that decays. Security codified into evaluation pipelines and deployment gates is security that compounds.
For the technical details on how MTF scoring works, see The MCP Trust Framework. For a practical deployment guide, see Securing Agent Tools.
Frequently Asked Questions
Is it safe to use MCP servers from the open-source ecosystem?
With verification. Treat MCP servers like any dependency: vet the source, check for known vulnerabilities, review permissions, and monitor behavior in production. Tools like mpak.dev provide trust scores based on security scanning. Don't install a random MCP server the same way you wouldn't install a random npm package.
What are the biggest MCP security risks right now?
Supply chain attacks (malicious or compromised servers), overly broad permissions (servers requesting access they don't need), data exfiltration (servers sending data to unauthorized endpoints), and lack of audit trails (no logging of what tools did with the access they had). Most risks are implementation issues, not protocol flaws.
Has there been a major MCP security incident?
Yes. In January 2026, over 42,000 OpenClaw AI agent instances were found exposed on the public internet, leaking API keys, Slack credentials, and chat histories through unauthenticated MCP endpoints. Seven CVEs have been filed against MCP implementations in the past year, including a CVSS 9.6 remote code execution. The conditions for larger incidents exist: thousands of servers with varying quality, minimal scanning, and enterprises connecting agents to production systems through unvetted servers.