Your AI agents have access to your CRM, your customer database, your payment system, and your production infrastructure. The MCP servers connecting those agents to your systems run code you didn’t write. The question every security team should be asking: can you read that code?

If you’re running proprietary agent infrastructure, the answer is no. You’re trusting a vendor’s claims about what their code does with your data. Open source eliminates that trust requirement. You can read every line. You can audit every connection. You can verify every permission.

For AI infrastructure, where agents operate with broad system access, open source isn’t a philosophical preference. It’s a security requirement.

Advantage 1: Inspectability

The first and most fundamental advantage of open source for AI infrastructure is visibility. When an MCP server connects your AI agent to Salesforce, you need to know exactly what data it reads, what it writes, and what it sends where. With open-source servers, you can read the code. You can trace every API call. You can verify that the server does exactly what it claims and nothing more.

Proprietary agent tools ask you to trust the vendor. They’ll show you documentation. They’ll give you compliance certifications. They’ll point you to their SOC 2 report. None of that tells you what the code actually does. Documentation describes intent. Code describes reality. When those diverge (and they always diverge), you want to be reading the code.

This isn’t theoretical paranoia. MCP servers operate with significant privileges. They read customer records, trigger workflows, modify data, and interact with external APIs. The MCP Trust Framework (MTF) at mpaktrust.org exists because this problem is real: most MCP servers in the wild have no security review, no permission scoping, and no audit trail. Open source is the foundation that makes security auditing possible. You can’t audit what you can’t see.
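
Here is what "auditable" means in practice. The sketch below wraps a tool call so every invocation is recorded before it runs; the tool name and wrapper are hypothetical, and this is a minimal illustration of the pattern, not a real MCP server. The point is that with open source you can verify whether safeguards like this exist at all.

```python
import time
from typing import Any, Callable

def audited(tool: Callable[..., Any], log: list[dict]) -> Callable[..., Any]:
    """Record every invocation of an MCP-style tool before executing it.

    With an open-source server you can read the code and confirm a wrapper
    like this is actually in the call path; with a closed one, you can't.
    """
    def wrapper(*args: Any, **kwargs: Any) -> Any:
        log.append({
            "tool": tool.__name__,   # which tool ran
            "args": args,            # what inputs it received
            "kwargs": kwargs,
            "ts": time.time(),       # when it ran
        })
        return tool(*args, **kwargs)
    return wrapper

# Hypothetical read-only CRM lookup; the name and return shape are illustrative.
def get_customer(customer_id: str) -> dict:
    return {"id": customer_id, "plan": "pro"}

audit_log: list[dict] = []
get_customer = audited(get_customer, audit_log)
get_customer("c-42")

entry = audit_log[0]
# entry now records the tool name, the arguments, and a timestamp
```

A security reviewer reading this code knows exactly what gets logged and what doesn't. That is the inspection the MTF makes systematic.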

NimbleBrain publishes its MCP servers as open source for exactly this reason. Every server on mpak.dev ships with source code and MTF trust scores. When a client’s security team asks “what does this server do with our data?”, the answer is “read the code.” That conversation takes 30 minutes. With a proprietary vendor, it takes 30 days of procurement review and you still don’t get source access.

Advantage 2: Forkability

The second advantage is control over your own destiny. When you depend on a proprietary platform and the vendor changes direction (deprecates a feature, raises prices, gets acquired, pivots to a different market), you have two options: accept it or start over. Neither is good.

Open source gives you a third option: fork and maintain. If the upstream project removes a feature you depend on, you keep it. If the project stagnates, you continue development. If the maintainer makes architectural decisions you disagree with, you take a different path. Your infrastructure doesn’t depend on anyone else’s business decisions.

The Windsurf incident in 2025 made this concrete. When Anthropic restricted API access, over a million developers lost functionality overnight. Windsurf had built their entire product on a single proprietary dependency with no fork option, no fallback, and no path forward. The developers who built on open-source tooling (open models, open protocols, open infrastructure) had options. They could switch providers, modify their stack, or adapt without waiting for permission from a vendor that had already decided their fate.

Forkability isn’t about actually forking projects constantly. Most of the time, the upstream project works fine. Forkability is insurance. It’s the knowledge that if things change (and in AI, things change fast), you have options. That insurance costs nothing with open source. With proprietary tools, you don’t even have the option to buy it.

Advantage 3: Community Maintenance

A proprietary vendor has a fixed team working on their product. Their roadmap serves their business priorities, which may or may not align with yours. Feature requests go into a backlog. Bug fixes ship when the vendor prioritizes them. If you’re not their biggest customer, your edge case stays broken.

Open-source projects operate differently. Thousands of contributors encounter thousands of edge cases across thousands of different environments. When someone hits a bug in a popular MCP server, they can fix it themselves and contribute the fix back. When someone needs a feature the maintainer hasn’t prioritized, they can build it. The project gets better because every user is a potential contributor.

This matters for AI infrastructure because the integration surface is vast. Every organization runs a different combination of CRM, ERP, communication tools, databases, and custom systems. No single vendor can test against every combination. But an open-source community collectively covers that surface area. The MCP server that connects to HubSpot gets tested by every organization using HubSpot. The one that connects to Jira gets tested by every team running Jira. Bugs surface faster. Fixes land faster. Quality improves because the testing base is the entire user population.

mpak.dev, the MCP bundle registry, is built on this model. MCP servers are contributed by the community, scanned for security, scored against the MCP Trust Framework, and improved by anyone who uses them. A single vendor building 200 MCP servers would produce shallow integrations maintained by a stretched team. A community building 200 MCP servers produces deep integrations maintained by the people who actually use them daily.

Advantage 4: No Vendor Lock-In

The fourth advantage is economic and strategic freedom. Proprietary AI platforms create lock-in by design. Your data lives in their format. Your configurations use their schema. Your workflows depend on their APIs. Switching costs grow with every month of usage until migration becomes prohibitively expensive. The vendor knows this. Their pricing reflects it.

Open-source infrastructure eliminates structural lock-in. MCP is an open protocol, and any compliant server works with any compliant client. Business-as-Code artifacts (schemas, skills, context files) are plain text stored in your git repository. MCP servers are containers that run on any Kubernetes cluster. Nothing about the architecture ties you to a single vendor’s platform.
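
To make "plain text in your git repository" concrete, here is a sketch. The artifact below is hypothetical (the field names are invented for illustration), but the property it demonstrates is the point: a standard-format text file can be read by any tool in any language, with no vendor SDK in the loop.

```python
import json

# A hypothetical Business-as-Code artifact: plain text, versioned in git.
# The schema here is invented for illustration, not a real NimbleBrain format.
skill_file = """
{
  "name": "refund-approval",
  "inputs": ["order_id", "amount"],
  "max_amount": 500
}
"""

# Parsing needs nothing but the standard library. Swap the platform that
# consumes this file and the file itself doesn't change.
skill = json.loads(skill_file)
inputs = skill["inputs"]
```

That is the structural difference between a format you own and a format that owns you: migration is a parser, not a rewrite.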

This is the principle behind The Anti-Consultancy model: optimize for client independence, not vendor dependency. NimbleBrain designs every engagement around the question “can the client run this without us?” If the answer is no, the architecture is wrong. Open source is what makes that answer yes. When your agent infrastructure is built on open protocols, open tools, and open formats, you can hire any team to maintain it. You can switch any component. You can evolve at your own pace.

The Upjack framework at upjack.dev is built as open source because declarative AI app definitions shouldn’t be trapped in a proprietary platform. mpak.dev is an open registry because MCP server distribution shouldn’t require a vendor relationship. MTF is an open standard because security evaluation shouldn’t be controlled by the entity being evaluated.

The Counterargument and Why It Fails

The case against open source for AI infrastructure usually centers on three claims: it’s harder to set up, support is less reliable, and enterprises need a vendor to call when things break.

The first claim was true a decade ago. It’s not true now. Helm charts, container registries, and infrastructure-as-code have made open-source deployment as straightforward as clicking a vendor’s “install” button, and more reproducible. The second claim confuses paid support with good support. Many open-source projects offer commercial support. The difference is you’re paying for support, not for hostage release. The third claim is the most telling: enterprises want someone to call because their proprietary vendor makes the system opaque enough that they can’t debug it themselves. With open source, your team can debug it. They can read the code. They can trace the issue. They don’t need to file a support ticket and wait.

Building on Open Source

The case for open source in AI infrastructure is the same case that won in operating systems (Linux), web servers (Apache, nginx), container orchestration (Kubernetes), and databases (PostgreSQL). Every generation of infrastructure starts with proprietary vendors promising ease and ends with open source winning on transparency, flexibility, and long-term cost.

AI infrastructure is following the same pattern. The organizations building on open protocols, open tools, and open standards today will have the flexibility to adapt as the market shifts, and it will shift fast. The organizations locked into proprietary platforms will discover the cost of that lock-in at the worst possible moment: when they need to move and can’t.

Escape Velocity, the point where your AI operations sustain and improve themselves, requires infrastructure you control. Open source is how you get there.

Frequently Asked Questions

Does open source mean less secure?

The opposite. Open-source code is inspectable. You can audit every line that touches your data. Proprietary AI tools are black boxes where you trust the vendor's claims. Security through obscurity failed in the 1990s for web apps. It'll fail for AI agents too. Open source + security scanning (like MTF) is the strongest posture.

Is open-source AI infrastructure production-ready?

Much of it is. MCP itself is an open protocol. The Linux Foundation hosts major AI infrastructure projects. mpak.dev hosts production-grade MCP servers with security scanning. The question isn't whether open source is ready; it's whether you're evaluating it with the right criteria (MTF trust scores, maintenance history, community health).

How does NimbleBrain use open source?

Everything we can open-source, we do. Our MCP servers are open source. The MCP Trust Framework is open source. mpak.dev is an open registry. Our Upjack framework is open source. We keep proprietary only what must be: client-specific implementations and the NimbleBrain Platform's operational layer.

Mat Goldsborough · Founder & CEO, NimbleBrain

Ready to put AI agents to work?

Or email directly: hello@nimblebrain.ai