Can open standards like MCP become the universal plug-in layer for AI agents?
MCP is emerging as the universal connector for AI tools and enterprise systems. Learn how it’s reshaping interoperability and observability for agentic AI.
Why are enterprises turning to Model Context Protocol (MCP) as a standard for AI system connectivity in 2025?
The Model Context Protocol, or MCP, is gaining momentum in 2025 as an open integration standard that simplifies how large language models interact with enterprise systems. Introduced by Anthropic in late 2024, MCP is designed to let any AI agent securely access tools, APIs, or enterprise data repositories through a consistent client-server architecture. Unlike traditional point-to-point integrations that require bespoke connectors for each new use case, MCP offers a universal pattern for AI agents to discover, invoke, and receive output from third-party systems.
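The pattern is easiest to see at the wire level. The sketch below approximates, as Python dictionaries, the JSON-RPC 2.0 messages an MCP client and server exchange to list available tools and invoke one; the tool name, arguments, and field values are illustrative, not taken from any particular vendor's implementation.

```python
# Approximate JSON-RPC 2.0 messages in an MCP tool discovery/invocation
# exchange, shown as Python dicts. Tool names and values are illustrative.

# 1. The client asks the server which tools it exposes.
list_request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

# 2. The server describes each tool with a JSON Schema for its inputs.
list_response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {
                "name": "get_inventory",  # hypothetical tool
                "description": "Look up current stock for a SKU",
                "inputSchema": {
                    "type": "object",
                    "properties": {"sku": {"type": "string"}},
                    "required": ["sku"],
                },
            }
        ]
    },
}

# 3. The agent invokes the tool with structured arguments.
call_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {"name": "get_inventory", "arguments": {"sku": "SKU-1042"}},
}

# 4. The server returns the tool's output as typed content blocks.
call_response = {
    "jsonrpc": "2.0",
    "id": 2,
    "result": {"content": [{"type": "text", "text": "17 units in stock"}]},
}
```

Because the discovery and invocation shape is the same for every tool, an agent framework only has to implement this exchange once to reach any compliant server.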
This is especially relevant for enterprises deploying AI agents at scale. Salesforce Inc. (NYSE: CRM) became the first major enterprise software provider to integrate native MCP support into its product stack with the June 2025 launch of Agentforce 3. That implementation now enables AI agents to connect to MCP-compliant tools from within Salesforce workflows using no-code or low-code configurations. Through this mechanism, Salesforce customers can bring external functionality—such as payments, document processing, compliance checks, or data queries—into agentic workflows with far greater speed and governance.

How does Model Context Protocol differ from legacy APIs and enterprise plug-ins used in agent orchestration?
In contrast to legacy API usage or SaaS-specific plug-ins, the Model Context Protocol acts as a neutral, protocol-based layer between AI clients and services. It allows agents built on various large language model frameworks to interface with any compatible MCP server. This creates a universal and extensible interface not tied to a specific vendor’s SDK, authentication system, or execution environment.
Salesforce’s implementation of MCP highlights this shift. Using MuleSoft’s tooling, any RESTful API within an enterprise can now be wrapped as an MCP server and securely exposed to Agentforce agents. These agents can then invoke that server to perform contextual operations such as fetching real-time inventory data, initiating a support case, or retrieving past customer interactions. The hosted infrastructure offered via Heroku further accelerates this transformation, allowing enterprises to deploy MCP servers at scale with built-in telemetry and failover. This architecture eliminates many of the barriers traditionally associated with multi-agent orchestration, such as custom middleware or opaque data flow.
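Salesforce's MuleSoft tooling performs this wrapping declaratively, but the underlying idea can be sketched with the open-source MCP Python SDK: a thin server that exposes a single tool which proxies an existing REST endpoint. The endpoint URL, tool name, and response fields below are assumptions for illustration, not Salesforce's or MuleSoft's actual implementation.

```python
# Minimal sketch: wrapping an internal REST endpoint as an MCP tool using
# the open-source MCP Python SDK (FastMCP). The API URL, fields, and tool
# name are hypothetical placeholders.
import httpx
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("inventory-bridge")

@mcp.tool()
def get_inventory(sku: str) -> str:
    """Return current stock for a SKU by calling an internal REST API."""
    resp = httpx.get(
        "https://erp.example.internal/api/v1/inventory",  # placeholder URL
        params={"sku": sku},
        timeout=10.0,
    )
    resp.raise_for_status()
    data = resp.json()
    return f"{data.get('quantity', 0)} units in stock for {sku}"

if __name__ == "__main__":
    # Serve over stdio so any MCP-compliant client can attach to it.
    mcp.run()
```

The point of the sketch is the shape, not the specifics: once the REST call is behind an MCP tool definition, any compliant agent can discover and invoke it without a bespoke connector.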
What role does Model Context Protocol play in Salesforce’s broader agentic AI platform strategy?
Salesforce has embedded MCP at the foundation of its Agentforce 3 platform, making it a first-class component in its ecosystem. The decision reflects a wider industry trend toward AI-native interoperability, where agents must operate securely and autonomously across distributed systems. Salesforce’s architecture now includes Atlas for resilience, OpenTelemetry for observability, and MCP for connectivity—creating a vertically integrated stack for enterprise AI automation.
By combining MCP with the AgentExchange marketplace, Salesforce also supports dynamic discovery of partner-built agent actions and templates. At launch, more than 30 MCP-enabled partners—including AWS, Stripe, Box, Google Cloud, PayPal, and WRITER—are offering plug-and-play access to their systems. For example, an AI agent can fetch the latest signed NDA from Box, summarize it using WRITER’s LLM, and route next steps to a Slack channel—all in a governed and traceable manner. Analysts believe this modularity significantly lowers adoption friction and enables faster time-to-value across AI agent deployments.
What are other enterprise vendors and AI developers doing to adopt MCP or similar open protocols?
Outside Salesforce, several major technology companies are aligning with the MCP standard or launching compatible approaches. Anthropic uses MCP to enable external tool calling within its Claude models, while OpenAI is building similar functionality into its Agents SDK. Microsoft has implemented parts of the protocol in Windows Copilot and Azure agent integrations. Google DeepMind’s Gemini and platforms like Wix, Replit, Sourcegraph, and Slack have also signaled support for tool use frameworks akin to MCP.
Although naming conventions vary, the underlying goal is consistent: to enable AI models to safely interact with external tools and data in a way that is auditable, standardized, and secure. Analysts observing this trend argue that MCP is the most concrete step yet toward unifying AI tooling standards. However, full interoperability will require widespread alignment on governance practices, model behavior boundaries, and execution sandboxing.
What are the key benefits of Model Context Protocol in terms of enterprise observability and risk management?
One of the core advantages of MCP is its alignment with modern observability frameworks. Salesforce’s Agentforce 3 implementation routes all MCP interactions through OpenTelemetry-compliant tracing, giving enterprises full visibility into what tools agents are calling, with what parameters, and what outputs they receive. This is critical in regulated environments where every AI action must be logged, reviewed, and, in some cases, certified.
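Outside Salesforce's managed stack, the same idea can be approximated with the standard OpenTelemetry Python API: wrap each tool invocation in a span that records the tool name, parameters, and outcome. The attribute names and the `call_tool` client method used here are assumptions for the sketch, not a documented Agentforce interface.

```python
# Illustrative tracing of an MCP tool call with the OpenTelemetry Python API.
# Attribute names and the session.call_tool() helper are assumptions; the
# managed telemetry pipeline in Agentforce 3 is not shown here.
import json
from opentelemetry import trace

tracer = trace.get_tracer("agent.mcp.client")

def traced_tool_call(session, tool_name: str, arguments: dict):
    with tracer.start_as_current_span("mcp.tools/call") as span:
        span.set_attribute("mcp.tool.name", tool_name)
        span.set_attribute("mcp.tool.arguments", json.dumps(arguments))
        try:
            result = session.call_tool(tool_name, arguments)  # assumed client API
            span.set_attribute("mcp.tool.is_error", False)
            return result
        except Exception as exc:
            span.record_exception(exc)
            span.set_attribute("mcp.tool.is_error", True)
            raise
```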
From a security standpoint, MCP introduces structure into agent-to-system interactions. Instead of using hardcoded API keys or static calls, agents authenticate to MCP servers via secure tokens, session registries, or identity providers. This allows organizations to layer access control, usage throttling, and response validation into every transaction. While some critics point out that standards for registry validation and prompt injection mitigation are still evolving, early adopters believe the protocol’s design is robust enough to scale with future security enhancements.
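One way to picture that layering is a gateway in front of an MCP server that validates the caller's token and enforces a per-agent rate limit before any tool runs. Everything in the sketch below, the token table, the identity mapping, and the limits, is hypothetical; in practice tokens would come from an identity provider rather than a hardcoded dictionary.

```python
# Hypothetical pre-dispatch checks for an MCP server: map a bearer token to
# an agent identity, then apply a per-agent rate limit before the tool runs.
# Token handling and limits are illustrative assumptions only.
import time
from collections import defaultdict

VALID_TOKENS = {"tok_agentforce_demo": "order-agent"}  # issued by an IdP in practice
CALLS_PER_MINUTE = 30
_recent_calls: dict[str, list[float]] = defaultdict(list)

def authorize(token: str) -> str:
    """Map a bearer token to an agent identity, or refuse the request."""
    agent_id = VALID_TOKENS.get(token)
    if agent_id is None:
        raise PermissionError("unknown or expired token")
    return agent_id

def check_rate_limit(agent_id: str) -> None:
    """Reject the call if the agent has exceeded its per-minute budget."""
    now = time.monotonic()
    window = [t for t in _recent_calls[agent_id] if now - t < 60]
    if len(window) >= CALLS_PER_MINUTE:
        raise RuntimeError(f"rate limit exceeded for agent {agent_id}")
    window.append(now)
    _recent_calls[agent_id] = window
```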
How do developers implement MCP today, and what does the current ecosystem look like?
The developer ecosystem around MCP is still in its early stages but expanding rapidly. Developers can use SDKs and documentation provided by Anthropic and open-source contributors to build and deploy MCP servers. These servers typically expose JSON-RPC or HTTP endpoints and are often deployed in containerized environments. Salesforce customers can build their own custom MCP servers using Heroku-managed inference tools or MuleSoft-based conversion layers.
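On the client side, the reference Python SDK follows the same pattern. The sketch below launches a local MCP server over stdio, lists its tools, and calls one; the server command and tool name are placeholders, and the module paths and method names are best-effort readings of the open-source SDK rather than a guaranteed interface.

```python
# Sketch of an MCP client using the open-source Python SDK: start a server
# over stdio, list its tools, and call one. The server script and tool name
# are placeholders; SDK names are best-effort assumptions.
import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main() -> None:
    server = StdioServerParameters(command="python", args=["inventory_server.py"])
    async with stdio_client(server) as (read_stream, write_stream):
        async with ClientSession(read_stream, write_stream) as session:
            await session.initialize()
            tools = await session.list_tools()
            print("available tools:", [t.name for t in tools.tools])
            result = await session.call_tool("get_inventory", {"sku": "SKU-1042"})
            print(result.content)

if __name__ == "__main__":
    asyncio.run(main())
```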
To streamline adoption, Salesforce is expected to add a centralized registry of vetted MCP actions to its AgentExchange marketplace later this year. This would allow developers to publish reusable tools with verified security and performance benchmarks. Developers working in industries like healthcare, fintech, and insurance are also exploring MCP as a method for securely exposing governed datasets to AI agents while maintaining compliance with privacy laws such as HIPAA and GDPR.
What challenges must MCP overcome before becoming the universal standard for AI agent integration?
Despite growing enthusiasm, MCP faces several challenges. First, standardization efforts around agent-to-agent interaction—such as Agent2Agent (A2A) and Agent Network Protocol (ANP)—are still in flux. If these evolve in different directions, MCP may need to adapt or interoperate across protocols. Second, registry governance remains a weak point, as there is currently no centralized authority to validate or audit tool definitions, which opens up risks around spoofed actions or prompt injection attacks.
Enterprises will also need to establish robust policy frameworks to determine which AI agents can call which tools, under what contexts, and with what fallback behaviors. Observability tools and sandbox environments will play a crucial role in managing these dynamics. Finally, MCP adoption depends on ecosystem network effects. As more vendors support the protocol and offer pre-built servers, the value for enterprise customers grows—potentially triggering a flywheel of adoption similar to what RESTful APIs or OAuth once experienced.
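One way to picture such a policy layer is a simple mapping from agent identity to permitted tools and a defined fallback when a call is denied. The agent names, tool names, and fallback actions below are entirely hypothetical and stand in for whatever governance framework an enterprise adopts.

```python
# Entirely hypothetical policy table: which agents may call which MCP tools,
# and what fallback applies when a call is denied.
TOOL_POLICY = {
    "order-agent":   {"allowed": {"get_inventory", "create_case"}, "fallback": "escalate_to_human"},
    "billing-agent": {"allowed": {"charge_payment"},               "fallback": "queue_for_review"},
}

def resolve_call(agent_id: str, tool_name: str) -> str:
    """Return the tool to invoke, or the agent's fallback action if denied."""
    policy = TOOL_POLICY.get(agent_id)
    if policy is None or tool_name not in policy["allowed"]:
        return policy["fallback"] if policy else "escalate_to_human"
    return tool_name
```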
What is the future outlook for MCP and enterprise AI interoperability?
Industry observers expect MCP adoption to continue growing through 2025 and into 2026, particularly among enterprises looking to move beyond experimental AI tools into fully orchestrated agent networks. Analysts believe that Salesforce’s backing of MCP—combined with implementation maturity in Agentforce 3—positions the protocol as a leading standard for AI-to-enterprise integration.
If registry governance, security hardening, and developer tooling continue to improve, MCP could become the default way that AI agents interface with core systems across sectors. Institutional investors tracking platform extensibility, time-to-value, and agent deployment rates will likely view MCP as a bellwether for enterprise AI maturity. Over time, this could influence how vendors package AI offerings, how buyers evaluate platform readiness, and how regulators define safe AI interaction boundaries.
In short, MCP is no longer a speculative framework. With support from Salesforce, Anthropic, and a growing list of tech providers, it is quickly becoming a foundational building block for the agentic AI era. Its real test, however, will be whether it can deliver universal interoperability without sacrificing trust, control, or accountability—criteria that enterprises are unlikely to compromise on.