AWS Open Source Blog
AWS is excited to continue our support for Model Context Protocol (MCP) as it moves under the Linux Foundation. This move enables us, our partners, and our customers to be more confident in the long-term success of the protocol, which has become a standard component of agentic architectures. By open sourcing MCP in 2024, Anthropic enabled the project’s widespread development and adoption. Under neutral governance at the Linux Foundation, developers can continue to build and innovate with MCP with the knowledge that no single company will dictate its future direction.
In the year since the protocol was first released, AI agents have changed the way builders interact with software, and MCP has changed how agents interact with the world. Agent interactions with tools, data, and services are now critical to how we build and deploy software, which is why AWS is all-in on MCP. We have built numerous open source MCP servers, enabled our customers to use MCP in our AI code assistants, and made it possible for them to deliver their own MCP servers on our serverless Amazon Bedrock AgentCore service. We look forward to continuing these efforts to grow and mature the ecosystem around MCP.
AWS will also continue to be a major contributor to MCP. In the 2025-11-25 version of MCP we introduced Tasks (SEP-1686), which adds comprehensive asynchronous operation support to MCP through a flexible “call-now, fetch-later” execution pattern. Our contributions to the MCP specification and implementations have also enabled new interaction paradigms. For example, the 2025-06-18 MCP specification release included a new human-in-the-loop interaction paradigm called “Elicitations”, which is now available in most major MCP implementations. That release also included “structured tool output”, enabling MCP clients to understand the schema of data returned from tool calls.
Now that MCP is owned and governed by the Linux Foundation as an official project, we are eager to welcome more partners and contributors to collaborate and accelerate its development. We plan to deepen our support and contributions to the specification and implementations, and expect many other contributors to do the same. Together, our contributions will continue to redefine how software is built and deployed.
MCP Tasks
Today, when an agent calls an MCP tool, the application simply sends a request to the connected server and waits for the result. Applications interested in the status of that request can leverage MCP’s progress-tracking functionality to receive relevant notifications. However, this requires the tool to explicitly emit progress updates, and it requires the client to maintain a persistent connection to receive them – creating scenarios where a tool call may have been dropped without the application knowing if a response or notification will ever arrive. Similarly, if a tool result is lost due to application errors or network conditions, there is no way for a client to explicitly retrieve that result after the tool call has completed, forcing the client to call the tool again. This is undesirable for expensive or long-running operations taking minutes or more.
Tasks solve this problem by enabling fault-tolerant execution of MCP operations through asynchronous polling patterns. Rather than blocking on expensive operations, task-aware clients immediately receive a task handle after making a supported request to poll for the status and results of the operation – a pattern which should be familiar to customers already using services like Amazon Simple Queue Service (SQS) and AWS Step Functions. This allows operations to survive network interruptions, restarts, and deployment cycles while maintaining access to execution results.
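To make the pattern concrete, here is a minimal sketch of a task-aware client loop in Python. The method names (create_task, get_task_status, get_task_result) and the status fields are illustrative assumptions rather than an actual SDK API; see SEP-1686 and the 2025-11-25 specification for the authoritative interfaces.

import asyncio

async def run_long_tool(session, tool_name: str, arguments: dict):
    # Request task-augmented execution; instead of blocking, the server
    # immediately returns a task handle (hypothetical method name).
    task = await session.create_task(tool_name, arguments)

    # Poll for status. Because the handle is durable, this loop can resume
    # after a network interruption, client restart, or deployment cycle.
    while True:
        status = await session.get_task_status(task.task_id)  # hypothetical
        if status.state in ("completed", "failed", "cancelled"):
            break
        await asyncio.sleep(2)

    # Results stay retrievable after completion, so a lost response no
    # longer forces the client to re-run an expensive operation.
    return await session.get_task_result(task.task_id)  # hypothetical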
The design lays the groundwork for future webhook support to enable serverless agent architectures, as well as subtasks to enable multi-step agentic workflows. This addresses community and customer requests for inter-agent communication capabilities and simplifies integration with customers’ existing asynchronous workflows on AWS.
MCP Elicitations
MCP has become a standard for connecting generative AI agents with external systems. Before Elicitations, however, there was no way for an MCP server to request additional details from the user when needed. Everything a tool call needed had to be provided up front or already accessible to the tool. A new interaction model was needed.
A typical scenario is when a user’s profile does not contain all the information needed to handle their request. For instance, with Elicitations, if the user wants to book a flight but hasn’t specified a preferred airline, the MCP server can elicit that information before continuing. On the chat agent side this could look like:
user> book a trip from Denver to SF next week, departing Monday and returning Friday
ai> I see several options but don’t know your preferred airline. What is your preference?
user> United
ai> Ok, here is a good option for you on United: … Would you like me to book it?
user> That looks good. Book it.
In this case the searchFlights tool just needs to know which user is making the request, along with their dates and cities. The tool can then connect to the travel system and retrieve the user’s travel preferences. Only when a preferred airline is not set does it elicit that information, continuing the search once the user has provided it. Without Elicitations, many tool calls would be needed to gather all the necessary information, and the tool call parameters would be bloated with details that should instead be retrieved from external systems.
To build this kind of human-in-the-loop tool call with MCP, we can use the FastMCP API from the Python MCP SDK to implement the elicitation:
# Note: user_from_ctx, flight_service, and FlightOptions are omitted for brevity
from datetime import datetime

from pydantic import BaseModel, Field

from mcp.server.fastmcp import Context, FastMCP
from mcp.server.session import ServerSession

mcp = FastMCP(name="travel")

# Holds the response from the elicitation
class UserResponse(BaseModel):
    airline: str = Field(description="Airline name")

@mcp.tool()
async def search_flights(depart: str, arrive: str, date: datetime, ctx: Context[ServerSession, None]) -> FlightOptions:
    """Search for one-way flights."""
    user = await user_from_ctx(ctx)
    if user.preferred_airline is None:
        # Pause the tool call and ask the user for their preferred airline
        result = await ctx.elicit(message="What is your preferred airline?", schema=UserResponse)
        if result.action == "accept":
            user.preferred_airline = result.data.airline
    return await flight_service.search(depart, arrive, date, user)
Elicitations have already seen broad adoption across MCP implementations, and we are excited to see this feature enable richer interactions in agentic applications. To learn more about Elicitations, see the MCP specification details, the FastMCP docs, and the Java SDK docs.
Future MCP Improvements
New MCP features like Elicitations and Tasks enable enhanced interaction models between AI agents and backing services. We will continue to explore and develop new interaction models that enable our customers to deliver integrated AI solutions for code assistants and general agentic applications.
As the MCP protocol evolves, we continue to expand support across a variety of other AWS services and tools. We now have dozens of AWS MCP servers that provide AI code assistants with context for using AWS services, interacting with users’ AWS accounts, and accessing AWS knowledge and documentation.
The AWS Knowledge MCP Server, our first remote MCP server, is now generally available, making it easy to add to AI code assistants and hosted AI tools like claude.ai. The new AWS MCP Server is now in preview, providing a unified interface to access AWS documentation, generate and execute calls to over 15,000 AWS APIs (including those for newly released services), and follow pre-built workflows called agent standard operating procedures (SOPs) that guide AI agents through common tasks on AWS. We also support MCP (local and remote) in our AI code assistants: Kiro, Kiro CLI, and Amazon Q Developer in VS Code and IntelliJ.
To host MCP servers on AWS, you can use Amazon Bedrock AgentCore, which has first-class support for MCP servers whether they run an open source implementation directly or are proxied to other runtimes like AWS Lambda functions. Both options provide a serverless, managed environment for your MCP servers, enabling integration into custom agents and AI code assistants.
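As a minimal illustration, the sketch below uses FastMCP from the Python MCP SDK to expose a single tool over the streamable HTTP transport that remote hosts expect. The server name and tool are illustrative, and AgentCore deployment configuration is not shown.

from mcp.server.fastmcp import FastMCP

# A stateless server is simplest to scale in a managed, serverless runtime
mcp = FastMCP(name="hello-server", stateless_http=True)

@mcp.tool()
def greet(name: str) -> str:
    """Return a simple greeting."""
    return f"Hello, {name}!"

if __name__ == "__main__":
    # Serves the MCP endpoint over streamable HTTP (default path /mcp)
    mcp.run(transport="streamable-http")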
The broad support for MCP at AWS has enabled customers to use MCP for productivity gains and to create integrated AI agents that connect to the data and processes they need. We are excited to continue evolving the MCP specification and implementations to further address our customers’ needs. With the move of MCP to the Linux Foundation we expect to see more expansive use of MCP in agentic architectures. Join us in contributing to the MCP community on GitHub.