Use Your Agent via MCP Server
MCP makes your agent interoperable. Integrate with tools, editors, or systems that speak the same open protocol.
Why Enable MCP Support for Your Agent?
MCP (Model Context Protocol) is an open standard that lets AI agents access context, data, and tools across different systems and environments. When your agent is deployed as an MCP server:
- Your agent can be called by any MCP-enabled app
- It can maintain multi-turn context across tools
- It can execute workflows and respond using live data
This is ideal for:
- AI development teams using workbenches
- IDE or toolchain integrations
- Enterprise systems requiring protocol-based interoperability
Prerequisites
Before connecting your agent to an MCP interface, ensure:
- Your agent is deployed (test or production)
- MCP is enabled in Agent Settings
- You have the correct MCP endpoint URL for your environment
Step-by-Step: Enable and Integrate MCP
Step 1: Toggle MCP in Agent Settings
- Open your agent in Studio
- Click Settings
- Find the MCP section
- Toggle MCP on
This will generate unique MCP URLs for test and production.
Step 2: Copy Your MCP Endpoint URL
After enabling MCP, scroll to the Environment section under Deployments to copy your URLs.
- Test environment: https://<agent-id>.agent.stage.smyth.ai/mcp/sse
- Production environment: https://<agent-id>.agent.pstage.smyth.ai/mcp/sse
Choose based on your integration stage:
- Use the test URL while developing
- Switch to the production URL when ready for live deployment
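Once you have an endpoint URL, any MCP client can connect to it over SSE. Below is a minimal sketch using the official MCP TypeScript SDK (@modelcontextprotocol/sdk); the agent ID placeholder and the client name/version are illustrative values you would replace with your own:

```ts
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { SSEClientTransport } from "@modelcontextprotocol/sdk/client/sse.js";

// Point the transport at your agent's MCP endpoint (test URL shown here).
const transport = new SSEClientTransport(
  new URL("https://<agent-id>.agent.stage.smyth.ai/mcp/sse")
);

// The name and version identify your integration to the server.
const client = new Client({ name: "my-integration", version: "1.0.0" });

await client.connect(transport);

// List the skills your agent exposes as MCP tools.
const { tools } = await client.listTools();
console.log(tools.map((tool) => tool.name));

await client.close();
```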
Security Considerations for MCP Endpoints
MCP currently lacks a built-in authentication standard.
To add authentication, use:
- A custom proxy layer
- Third-party middleware
Always secure your production MCP endpoints appropriately.
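As one possible approach, a lightweight reverse proxy can check a shared bearer token before forwarding traffic to the MCP endpoint. The sketch below uses Express with http-proxy-middleware; the port, token scheme, and target host are assumptions to adapt to your setup:

```ts
import express from "express";
import { createProxyMiddleware } from "http-proxy-middleware";

const app = express();
const SHARED_SECRET = process.env.MCP_PROXY_TOKEN ?? "";

// Reject any request that does not present the expected bearer token.
app.use((req, res, next) => {
  if (req.headers.authorization !== `Bearer ${SHARED_SECRET}`) {
    res.status(401).send("Unauthorized");
    return;
  }
  next();
});

// Forward authenticated traffic to the agent's production MCP endpoint.
app.use(
  createProxyMiddleware({
    target: "https://<agent-id>.agent.pstage.smyth.ai",
    changeOrigin: true,
  })
);

app.listen(8080);
```

MCP clients would then point at your proxy's /mcp/sse path rather than at the agent URL directly.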
Common Use Cases for MCP Integration
- Connect your agent to VS Code or an AI workbench
- Integrate with internal AI dashboards or tools
- Add protocol-level context-awareness to enterprise platforms
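As a concrete illustration of these use cases, an MCP-enabled tool or dashboard can invoke one of your agent's skills directly. In the sketch below, the skill name and arguments are hypothetical placeholders; use the names returned by client.listTools() instead:

```ts
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { SSEClientTransport } from "@modelcontextprotocol/sdk/client/sse.js";

const client = new Client({ name: "dashboard-integration", version: "1.0.0" });
await client.connect(
  new SSEClientTransport(
    new URL("https://<agent-id>.agent.stage.smyth.ai/mcp/sse")
  )
);

// Invoke one of the agent's skills by its tool name.
// "summarize_report" and its arguments are hypothetical; discover the
// real tool names with client.listTools() first.
const result = await client.callTool({
  name: "summarize_report",
  arguments: { reportId: "Q3-2024" },
});
console.log(result.content);

await client.close();
```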
What’s Next?
With MCP enabled, your SmythOS agent becomes a first-class participant in any system using the Model Context Protocol. From here, you can:
- View logs and debug via Settings → Deployments → Logs
- Deploy as LLM to expose it via /chat/completions (sketched after this list)
- Deploy as ChatGPT to integrate with GPT Builder
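For the LLM deployment path, a call could look like the sketch below. The host and the OpenAI-style payload are assumptions based on the /chat/completions convention; check the Deploy as LLM documentation for the exact endpoint and any required auth headers:

```ts
// Illustrative only: host and payload shape are assumptions; see your
// deployment settings for the real endpoint.
const response = await fetch(
  "https://<agent-id>.agent.stage.smyth.ai/chat/completions",
  {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      messages: [{ role: "user", content: "Summarize my latest deployment." }],
    }),
  }
);
console.log(await response.json());
```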
Same agent, multiple endpoints. Maximum reusability.