Tools: Boost Your Agents with MCPs - MCP Fundamentals

What we need to do...

A. MCP Protocol Overview

Core Concepts of MCP

Communication & Message Types

Production Deployment with AgentCore

B. Key MCP Concepts

1. Building and Deploying an MCP Server

a. Define the Local Server

b. Deploy to AgentCore Runtime

2. Connecting a Strands Agent to a Remote Tool

C. Building an Analyst MCP Server

1. Key Implementation Details

a. Server Creation (Core Implementation Flow)

b. Tool Registration (Essential Tool Components)

c. Intelligent Error Handling

d. Best Practices for Developers

We want every reader to become familiar with using MCP as a tool with LLMs to accomplish everyday productivity tasks, and to see how LLMs can use multiple tools in concert to accomplish more advanced tasks. We will first learn the core concepts of MCP by building our first intelligent tool: an analyst that understands natural language and provides contextual responses. This exercise introduces MCP, the universal open standard for connecting AI systems with data sources, and demonstrates its capabilities with an analyst MCP server that goes beyond simple arithmetic.

The Model Context Protocol (MCP) is an open standard designed to create a universal interface between AI models and external tools, data, and services. By using Bedrock AgentCore Runtime, developers can transition these tools from local functions to secure, enterprise-grade managed microservices. The protocol replaces custom, one-off integrations with a standardized request-response pattern, and follows a structured flow to ensure the AI model understands and executes tools correctly. While local tools are useful for development, Bedrock AgentCore Runtime provides the infrastructure for production scaling.

Key Takeaway: This architecture allows developers to build a reusable library of secure MCP tools that any agent in an organization can invoke with minimal code. Deploying custom tools at scale requires moving beyond local functions to a managed infrastructure that is secure, authenticated, and scalable. Bedrock AgentCore Runtime enables the deployment of MCP servers as managed services, transitioning tools from local Python decorators to enterprise-grade microservices. The development process is simplified by the FastMCP framework, which automates protocol message formatting and JSON Schema generation.
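The request-response pattern and structured flow described above can be sketched with representative JSON-RPC payloads. These are illustrative, hand-written messages (the `tools/list` and `tools/call` method names come from the MCP specification; the `add` tool and its arguments are our running example, not captured wire traffic):

```python
import json

# Step 1 - Tool Discovery: the model asks the server which tools exist.
discovery_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
}

# The server answers with each tool's name, description, and input schema,
# so the model knows exactly what parameters are required.
discovery_response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [{
            "name": "add",
            "description": "Add two numbers",
            "inputSchema": {
                "type": "object",
                "properties": {
                    "x": {"type": "number"},
                    "y": {"type": "number"},
                },
                "required": ["x", "y"],
            },
        }]
    },
}

# Step 2 - Tool Execution: the model sends structured arguments to one tool.
call_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {"name": "add", "arguments": {"x": 25, "y": 17}},
}

print(json.dumps(call_request, indent=2))
```

The key point is that the model never sees Python code: it reads the schema from discovery and emits structured arguments that match it.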
Using FastMCP, you can quickly define tools with standard Python type hints. Once containerized, the server is registered with the Runtime Client to become a managed cloud service. Strands Agents then leverage the AgentCoreRuntime to connect to these remote services; the agent automatically manages the authentication handshake, treating the remote MCP server as a local capability.

Summary of MCP Implementation. Key Takeaway: Bedrock AgentCore Runtime acts as the "production glue" for AI agents. It shifts the responsibility for security, scaling, and server management to AWS, allowing organizations to maintain a secure library of reusable tools accessible via a single line of code.

An Analyzer MCP Server is created using the FastMCP framework. It demonstrates how to turn standard Python functions into tools that an AI agent can use to solve natural language math problems. Let's build our first MCP server: an analyst that handles natural language math queries with context and error handling. A short excerpt of this code is shown below.

To build a successful MCP server, focus on a few pillars. The server is built by defining a FastMCP instance and decorating functions to expose them as tools. Transport: it uses stdio (Standard Input/Output) for local communication between the AI and the server. Registration: the @mcp.tool decorator tells the AI what the tool does and what inputs it requires. To ensure the AI uses our tools correctly, every function should include a clear description, strict type hints, and informative error messages.

Key Takeaway: The Model Context Protocol (MCP) transforms isolated Python functions into intelligent, conversational tools that an AI agent can understand and execute. The transition from a local script to an enterprise-grade tool happens in three stages:

Standardization (Local): Using FastMCP, we wrap standard Python functions with decorators. By providing strict type hints and descriptions, you create a "contract" that the AI model (like Claude or Nova) can read to understand exactly how to perform math, analysis, or data tasks.

Scalability (Managed): You move from running a script on your machine to deploying a containerized image via Bedrock AgentCore Runtime. This shifts the burden of server maintenance and scaling to AWS.

Security (Enterprise): By integrating Amazon Cognito, you ensure that only authorized agents can trigger your tools, protecting sensitive operations like database searches or proprietary calculations.

MCP isn't just about math; it's the universal glue for the AI era. Whether we are building a simple analyzer or a complex anomaly detection system, MCP allows us to build our logic once and use it across any AI model or team in our organization.
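The "contract" idea above can be made concrete with a small sketch of how type hints map to a JSON-Schema-style description. This is not FastMCP's actual implementation (its internals differ, and the helper `schema_for` is hypothetical); it only illustrates why strict typing gives the model an unambiguous contract:

```python
import inspect
import typing

# Illustrative mapping from Python type hints to JSON Schema type names.
PY_TO_JSON = {float: "number", int: "integer", str: "string", bool: "boolean"}

def schema_for(func):
    """Hypothetical sketch: derive a JSON-Schema-like contract from type hints."""
    hints = typing.get_type_hints(func)
    hints.pop("return", None)  # only parameters belong in the input schema
    params = inspect.signature(func).parameters
    return {
        "type": "object",
        "properties": {name: {"type": PY_TO_JSON[hints[name]]} for name in params},
        "required": list(params),
    }

def add(x: float, y: float) -> float:
    return x + y

print(schema_for(add))
```

Because the schema is generated mechanically from the signature, a missing or loose type hint directly weakens the contract the model reads.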

Define the local server:

```python
from mcp.server import FastMCP

# Create the MCP server instance
mcp = FastMCP("Analyzer")

# Register a tool; FastMCP handles the schema generation automatically
@mcp.tool(description="Add two numbers")
def add(x: float, y: float) -> float:
    return x + y

# Run using stdio for local testing
mcp.run(transport="stdio")
```

Deploy to AgentCore Runtime:

```python
from bedrock_agentcore.tools.runtime_client import RuntimeClient

runtime_client = RuntimeClient(region="us-east-1")

# Deploy with Cognito security
mcp_server = runtime_client.create_mcp_server(
    name="SearchService",
    image="your-docker-image-uri",
    auth_config={
        "type": "COGNITO",
        "user_pool_id": "us-east-1_xxxxxxxxx",
        "client_id": "xxxxxxxxxxxxxxxx"
    }
)
```

Connect a Strands Agent to the remote tool:

```python
from strands import Agent
from strands.models import BedrockModel
from strands_tools.runtime import AgentCoreRuntime

# Connect to the deployed runtime
agentcore_runtime = AgentCoreRuntime(region="us-east-1")

# Create agent using the remote managed tool
agent = Agent(
    model=BedrockModel(model_id="us.amazon.nova-pro-v1:0"),
    tools=[agentcore_runtime.mcp_tool(server_name="SearchService")]
)

# Execution: Discovery -> Intent Recognition -> Parameter Extraction -> Execution
agent("What are the key highlights from the latest AWS Re:Invent?")
```

The Analyzer MCP server (excerpt):

```python
from mcp.server import FastMCP
import math

# Create MCP server instance
mcp = FastMCP("Analyzer Server")

@mcp.tool(description="Add two numbers together")
def add(x: float, y: float) -> float:
    """Add two numbers and return the result."""
    return x + y

...

@mcp.tool(description="Divide first number by second number")
def divide(x: float, y: float) -> float:
    """Divide x by y and return the result."""
    if y == 0:
        raise ValueError("Cannot divide by zero")
    return x / y

...

if __name__ == "__main__":
    print("🔢 Starting Analyzer MCP Server...")
    mcp.run(transport="stdio")
```

Server creation:

```python
mcp = FastMCP("Analyzer Server")
```

Tool registration:

```python
@mcp.tool(description="Add two numbers together")
def add(x: float, y: float) -> float:
```

Error handling:

```python
if y == 0:
    raise ValueError("Cannot divide by zero")
```

- MCP Architecture: Understand how MCP connects AI models to tools
- Server Implementation: Build a working MCP server from scratch
- Tool Registration: Register functions that AI models can discover
- Response Formatting: Structure responses for optimal AI interaction
- Integration Testing: Test our server with the application
- Standardization: Provides a single protocol for all tools (e.g., same interface for analysis or weather tools).
- Interoperability: Works across different models (Claude, GPT, Nova) using the same toolset.
- Security: Implements sandboxed execution, parameter validation via JSON Schema, and controlled access patterns.
- Tool Discovery (tools/list): The model identifies accessible tools and their required parameters (schemas).
- Tool Execution (tools/call): The model sends structured arguments to a specific tool to perform tasks beyond its internal knowledge.
- Transport Methods: Supports stdio (standard input/output) for local development, as well as HTTP and WebSockets for remote or real-time applications.
- Managed Infrastructure: Moves tools to managed services, removing the burden of server maintenance and scaling.
- Enhanced Security: Integrates with Amazon Cognito to enforce strict authentication, ensuring only authorized agents can trigger sensitive operations.
- Simplified Integration: A Strands Agent can connect to a remote MCP server and handle the authentication handshake automatically, consuming cloud tools as if they were local functions.
- Initialization: Creates a new MCP server with a descriptive name; the name helps with debugging and tool discovery.
- Transport: Uses stdio (Standard Input/Output) for local communication between the AI and the server.
- Registration: The @mcp.tool decorator tells the AI what the tool does and what inputs it requires.
- Natural Language Mapping: The AI automatically maps user phrases like "sum of 25 and 17" or "25 plus 17" to the add(x=25, y=17) function.
- Complex Queries: The agent can "chain" tools. For a budget query, it might call subtract() multiple times to reach a final answer.
- Graceful Failures: By raising a ValueError("Cannot divide by zero"), the error message is passed directly back to the AI, allowing it to explain the mistake to the user rather than simply crashing.
- Be Specific: Use clear descriptions like "Calculate 15% of 200" instead of "Does percentage stuff."
- Strict Typing: Always use Python type hints to prevent the AI from sending the wrong data formats.
- Logging: Use import logging to track how the AI invokes our tools in the background.
- Memory: We can add "Context Awareness" by storing results in a list (e.g., calculation_history) so the agent can reference previous answers.
- Standardization (Local): Using FastMCP, we wrap standard Python functions with decorators. By providing strict type hints and descriptions, you create a "contract" that the AI model (like Claude or Nova) can read to understand exactly how to perform math, analysis, or data tasks.
- Scalability (Managed): You move from running a script on your machine to deploying a containerized image via Bedrock AgentCore Runtime. This shifts the burden of server maintenance and scaling to AWS.
- Security (Enterprise): By integrating Amazon Cognito, you ensure that only authorized agents can trigger your tools, protecting sensitive operations like database searches or proprietary calculations.
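The Logging and Memory practices listed above can be combined in a small sketch. This is plain Python so it runs standalone; in the actual server each function would also carry an @mcp.tool decorator, and the calculation_history name is the illustrative list suggested above, not part of any library:

```python
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("analyzer")

# Shared context so the agent can reference previous answers.
calculation_history: list[dict] = []

def add(x: float, y: float) -> float:
    """Add two numbers, logging the invocation and recording the result."""
    result = x + y
    logger.info("add invoked with x=%s, y=%s -> %s", x, y, result)
    calculation_history.append({"tool": "add", "args": (x, y), "result": result})
    return result

def last_result() -> float:
    """Return the most recent result so follow-up queries can build on it."""
    if not calculation_history:
        raise ValueError("No calculations yet")
    return calculation_history[-1]["result"]

add(25, 17)
print(last_result())
```

With this in place, a follow-up query like "now divide that by 2" can be answered by the agent calling last_result() before the next tool, instead of asking the user to repeat the number.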