As developers, we often build clients to communicate with servers. But what happens when that server can speak multiple languages? Not human languages, but transport protocols. One moment you’re talking over stdio, the next over Server-Sent Events (SSE), and tomorrow it might be raw HTTP or WebSockets. This is a common challenge in modern infrastructure, and it’s one we tackled head-on in our ansible-collection-mcp-audit project.
In this post, we’ll go deep on the design patterns we used to build a clean, transport-agnostic client in Python. We’ll look at how an async context manager, a simple factory, and a well-defined class structure can tame the complexity of multi-protocol communication. This isn’t just about the Model Context Protocol (MCP); these patterns are applicable to any project that needs to support multiple ways of talking to a service.
The Challenge: One Client, Three Protocols
The goal was to create a single, unified client that could communicate with an MCP server regardless of the underlying transport. The initial requirements were:
- stdio: For local, process-based communication. Ideal for testing local scripts and servers.
- SSE (Server-Sent Events): For persistent, one-way communication over HTTP. Great for remote servers that push updates.
- HTTP: For standard request/response communication. A common requirement for web-based services.
The naive approach would be to write a bunch of if/elif/else statements every time we need to make a call. You’ve seen that code. We’ve all written that code. It quickly becomes a tangled mess that’s impossible to maintain.
We needed an abstraction. A clean interface that would hide the messy details of each protocol and present a simple, consistent set of methods to the rest of the application.
The Solution: The MCPClient Class
The heart of our solution is the MCPClient class. It serves as the single entry point for all interactions with an MCP server. Here’s the core design philosophy:
- Initialize with Configuration: The client is initialized with all the necessary configuration for all supported transports.
- A Single connect Method: A powerful connect method, implemented as an async context manager, handles the protocol-specific connection logic.
- Consistent API: Once connected, all other methods (list_tools, call_tool, etc.) don’t need to know or care about the underlying transport.
Let’s look at the __init__ method to see how this is set up.
# File: plugins/module_utils/mcp_client.py
# Lines: 73-121
class MCPClient:
    SUPPORTED_TRANSPORTS: ClassVar[list[str]] = ["stdio", "sse", "http"]

    def __init__(
        self,
        transport: str = "stdio",
        server_command: str | None = None,
        server_args: list[str] | None = None,
        server_url: str | None = None,
        server_headers: dict[str, str] | None = None,
        timeout: int = 30,
    ) -> None:
        if transport not in self.SUPPORTED_TRANSPORTS:
            raise MCPClientError(f"Unsupported transport '{transport}'")

        self.transport = transport
        self.timeout = timeout
        self.session: ClientSession | None = None

        # Validate transport-specific parameters
        if transport == "stdio":
            if not server_command:
                raise MCPClientError("server_command is required for stdio transport")
            self.server_command = server_command
            self.server_args = server_args or []
        elif transport in ("sse", "http"):
            if not server_url:
                raise MCPClientError(f"server_url is required for {transport} transport")
            self.server_url = server_url
            self.server_headers = server_headers or {}
Notice how the constructor takes all possible parameters but only validates and stores the ones relevant to the selected transport. This keeps the initialization logic clean and ensures that the client is in a valid state from the moment it’s created.
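To make that configuration-driven initialization concrete, here is a rough usage sketch, one client per transport. The server command, URL, and token below are invented placeholders, not values from the project:

# Hypothetical usage sketch -- the command, URL, and token are placeholders.

# Local server spawned as a subprocess and reached over stdio.
local_client = MCPClient(
    transport="stdio",
    server_command="python",
    server_args=["-m", "my_mcp_server"],  # placeholder module name
)

# Remote server reached over SSE, with an auth header.
remote_client = MCPClient(
    transport="sse",
    server_url="https://mcp.example.com/sse",  # placeholder URL
    server_headers={"Authorization": "Bearer <token>"},  # placeholder token
    timeout=60,
)

# Missing required parameters fail fast at construction time, e.g.
# MCPClient(transport="sse") raises MCPClientError before any network call.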
The Magic of the Async Context Manager
The real power of this abstraction comes from the connect method. We implemented it with contextlib’s asynccontextmanager decorator, which is a perfect fit for managing the lifecycle of a network connection.
Here’s a simplified view of the implementation:
# File: plugins/module_utils/mcp_client.py
# Lines: 122-171
from contextlib import asynccontextmanager

from mcp import ClientSession, StdioServerParameters
from mcp.client.sse import sse_client
from mcp.client.stdio import stdio_client

@asynccontextmanager
async def connect(self):
    """Establish connection to the MCP server as an async context manager."""
    try:
        if self.transport == "stdio":
            server_params = StdioServerParameters(command=self.server_command, args=self.server_args)
            async with stdio_client(server_params) as (read, write):
                async with ClientSession(read, write) as session:
                    self.session = session
                    await session.initialize()
                    yield self
        elif self.transport == "sse":
            async with sse_client(self.server_url, self.server_headers) as (read, write):
                async with ClientSession(read, write) as session:
                    self.session = session
                    await session.initialize()
                    yield self
        elif self.transport == "http":
            raise MCPTransportError("HTTP transport not yet implemented")
    except Exception as e:
        raise MCPConnectionError(f"Failed to connect via {self.transport}: {e!s}") from e
    finally:
        self.session = None  # Ensure session is cleared on exit
This pattern is incredibly powerful. Let’s break down what it’s doing:
- Protocol-Specific Logic: The if/elif block contains the only protocol-specific connection logic in the entire client.
- Leveraging the SDK: It uses the appropriate client function from the MCP Python SDK (stdio_client or sse_client).
- Session Management: It creates a ClientSession and performs the initial handshake (session.initialize()).
- Yielding Control: The yield self is the crucial part. It passes the connected, ready-to-use client instance to the calling code.
- Guaranteed Cleanup: The finally block ensures that the session is torn down and resources are released, no matter what happens inside the with block. This prevents resource leaks, which are notoriously hard to debug.
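From the caller’s side, the whole lifecycle collapses into a single async with block. Here is a minimal sketch, assuming MCPClient is importable from the module above and using a placeholder server command and tool name:

import asyncio

# from ...module_utils.mcp_client import MCPClient  # import path depends on how the collection is laid out

async def audit_server() -> None:
    client = MCPClient(transport="stdio", server_command="python", server_args=["-m", "my_mcp_server"])
    # The async with block owns the connection; the session only exists inside it.
    async with client.connect() as mcp:
        tools = await mcp.list_tools()
        print([tool.name for tool in tools])
        result = await mcp.call_tool("ping", {})  # placeholder tool name and arguments
        print(result)
    # On exit, the finally block in connect() has already reset client.session to None.

asyncio.run(audit_server())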
A Clean and Consistent API
With the connection handled by the context manager, the rest of the client’s methods become beautifully simple. They don’t need to know anything about the transport; they just use the self.session object that was set up during the connection.
# File: plugins/module_utils/mcp_client.py
# Lines: 173-213
async def list_tools(self) -> list[Tool]:
    """List all tools available on the MCP server."""
    if not self.session:
        raise MCPClientError("Not connected to MCP server")
    try:
        response = await self.session.list_tools()
        return response.tools
    except Exception as e:
        raise MCPClientError(f"Failed to list tools: {e!s}") from e

async def call_tool(self, tool_name: str, arguments: dict[str, Any] | None = None) -> Any:
    """Call a tool on the MCP server."""
    if not self.session:
        raise MCPClientError("Not connected to MCP server")
    try:
        response = await self.session.call_tool(tool_name, arguments or {})
        return response
    except Exception as e:
        raise MCPClientError(f"Failed to call tool '{tool_name}': {e!s}") from e
This is the payoff for our architectural efforts. The code is clean, readable, and easy to test. Adding a new method is trivial, and it will automatically work with all supported transports.
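For example, here is what a resources wrapper could look like, following the exact same shape. This is a sketch rather than code from the project, and it assumes the SDK’s ClientSession exposes a list_resources() call whose response carries a resources field:

async def list_resources(self) -> Any:
    """List all resources available on the MCP server (sketch, not part of the excerpt above)."""
    if not self.session:
        raise MCPClientError("Not connected to MCP server")
    try:
        # Same pattern as list_tools/call_tool: delegate to the session, wrap failures.
        response = await self.session.list_resources()
        return response.resources
    except Exception as e:
        raise MCPClientError(f"Failed to list resources: {e!s}") from e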
Visualizing the Abstraction
Here’s a diagram that illustrates the abstraction layers:
graph TD
subgraph "Application Layer"
A[Ansible Module]
end
subgraph "Abstraction Layer (MCPClient)"
B("connect() Context Manager")
C("list_tools(), call_tool(), etc.")
end
subgraph "Transport Layer"
D[stdio_client]
E[sse_client]
F["http_client (future)"]
end
subgraph "Protocol Layer"
G[MCP Python SDK]
end
A --> B
A --> C
B --> D
B --> E
B --> F
C --> G
D --> G
E --> G
F --> G
The application layer (our Ansible modules) only ever talks to the MCPClient. The connect method acts as a gateway to the transport layer, and all subsequent calls go through the consistent API, which delegates to the protocol layer (the MCP Python SDK). It’s a clean separation of concerns that makes the entire system robust and extensible.
Conclusion: Patterns for Maintainable Code
Building a multi-transport client doesn’t have to be a nightmare of nested if statements. By applying a few key design patterns, we were able to create a solution that is:
- Maintainable: Protocol-specific code is isolated in one place.
- Extensible: Adding a new transport (like WebSockets) would involve modifying only the connect method.
- Robust: The context manager ensures that connections are always cleaned up properly.
- Easy to Use: The rest of the application interacts with a simple, consistent API.
I’ve found this pattern to be incredibly effective in a variety of projects. Whether you’re working with different database drivers, message queues, or cloud APIs, the core principles of configuration-driven initialization and a context-managed connection lifecycle can save you from a world of technical debt.
What are your favorite patterns for handling multi-protocol or multi-provider clients? Share your thoughts in the comments below!
Links
Tosin Akinosho. (2025). ansible-collection-mcp-audit. GitHub Repository. https://github.com/tosin2013/ansible-collection-mcp-audit
Model Context Protocol. (2025). Protocol Documentation. https://modelcontextprotocol.io/