Introduction
As enterprises increasingly adopt AI to tackle real business challenges, one thing becomes clear: single, monolithic AI assistants are no longer sufficient. Real-world workflows often span multiple systems, data sources, and business domains, from CRM platforms like Salesforce to ticketing tools like Jira, internal knowledge bases, and custom applications. Handling this complexity requires modular, secure, and orchestrated multi-agent systems rather than a single assistant trying to do everything.
AWS offers a powerful approach to building such systems through AWS Strands Agent Patterns, integrated with Amazon Bedrock AgentCore, a managed platform that provides runtime services, identity management, memory, observability, and secure tool access.
The promise of generative AI lies not in standalone assistants but in multi-agent systems that collaborate intelligently across specialized tasks. As enterprises scale, workflows become too complex for a single agent: juggling specialized reasoning, robust error handling, secure access to sensitive tools, and context retention across sessions quickly exceeds the capacity of a monolithic system.
The solution is to design AI agents like microservices: modular, composable, and orchestrated. AWS provides both open source frameworks and managed services that make such architectures production ready.
In this post, we’ll explore how to build enterprise-grade multi-agent AI systems using AWS Strands Agents and Amazon Bedrock AgentCore. We’ll cover four core Strands patterns:
- Agents as Tools: enabling specialized agents to perform focused tasks on behalf of others.
- Swarms: coordinating multiple agents to solve complex problems collaboratively.
- Agent Graphs: structuring dependencies and information flow between agents.
- Workflows: orchestrating multi-agent processes to execute end-to-end business operations.
Along the way, we’ll include Python examples and architectural insights showing how AgentCore’s Identity and Memory services empower agents with secure access, context retention, and session-aware behavior, all essential for production-grade AI systems in enterprise environments.
Before diving into the patterns, let’s set some context.
The Challenge of Enterprise Complexity
In modern enterprises, business-critical data is scattered across multiple systems:
- CRM platforms like Salesforce manage sales, accounts, and pipelines.
- Knowledge bases such as SharePoint, Confluence, or custom repositories store organizational knowledge.
- Operational systems like Jira, ServiceNow, and Workday handle workflows, tickets, and HR processes.
- Analytics platforms provide structured data for decision-making.
- Identity systems like Okta or Azure AD govern authentication and secure access.

No single AI agent can reason effectively across all these domains. Enterprises need:
- Domain specialization: each agent focuses on a specific area of expertise.
- Structured decision flows: clear pathways for reasoning and action.
- Policy and access control: secure, compliant system interactions.
- Integrated memory and context: agents retain relevant knowledge consistently across sessions.
Strands agents break reasoning into modular, focused units, enabling multi-agent architectures where each agent specializes in a specific domain. But to operate reliably at enterprise scale, they require robust infrastructure; this is where Amazon Bedrock AgentCore comes in.
AgentCore: AWS’s Enterprise-Ready Platform for Turning Agents into Microservices
Amazon Bedrock AgentCore provides the foundation for deploying secure, autonomous agents at scale:
- Runtime: Serverless execution environments with session isolation, supporting long-running workloads.
- Memory: Persistent context storage that maintains agent knowledge across sessions.
- Identity: Fine-grained authentication and access control integrated with enterprise IdPs like Okta or Cognito.
- Observability: Integrated metrics, logs, and traces with CloudWatch and OpenTelemetry.
These capabilities allow multi-agent systems to operate with consistent security, memory, monitoring, and access controls — all without building custom infrastructure from scratch.
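To ground this, here is a minimal sketch of hosting a Strands agent on AgentCore Runtime. It assumes the bedrock-agentcore Python SDK’s BedrockAgentCoreApp entrypoint pattern; the payload shape and agent prompt are illustrative.

```python
from strands import Agent
from bedrock_agentcore.runtime import BedrockAgentCoreApp  # assumes the bedrock-agentcore SDK

app = BedrockAgentCoreApp()

# Illustrative agent; in a real deployment this would be one of the specialists shown later
assistant = Agent(system_prompt="You are an enterprise support assistant.")

@app.entrypoint
def invoke(payload):
    # AgentCore Runtime passes the request payload as a dict and isolates sessions
    prompt = payload.get("prompt", "")
    result = assistant(prompt)
    return {"result": str(result)}

if __name__ == "__main__":
    app.run()  # local dev server; deploy via the AgentCore starter toolkit or console
```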
Gateway: Unified Tool Access
The AgentCore Gateway provides a unified connectivity layer for multi-agent systems:
- Single entry point for tools: Converts APIs, Lambda functions, AWS services, and even existing MCP servers into agent-ready tools.
- Protocol translation: Automatically converts MCP requests to REST, Lambda, or other endpoints.
- Security and credential management: Handles authentication and secure access for agents invoking tools.
- Semantic tool discovery: Agents can find and use the right tools based on context.
- Reduced integration overhead: Eliminates the need for custom MCP server SDKs or extensive glue code.
By combining Strands agents with AgentCore and Gateway, enterprises can deploy modular, secure multi-agent systems that:
- Reason over multiple domains and systems simultaneously.
- Maintain persistent memory and context.
- Operate securely with enterprise-grade identity and access controls.
- Access internal and external tools via a unified interface without building custom SDKs.
In short, Amazon Bedrock AgentCore with Gateway transforms multi-agent AI into enterprise-grade microservices capable of reasoning, acting, and collaborating across complex workflows.
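As an illustration of how an agent consumes Gateway tools over MCP, the sketch below uses the Strands MCP client with the MCP streamable HTTP transport. The Gateway URL and bearer token are hypothetical placeholders; in practice the token would come from AgentCore Identity or your IdP.

```python
from mcp.client.streamable_http import streamablehttp_client
from strands import Agent
from strands.tools.mcp import MCPClient

# Hypothetical values: replace with your Gateway endpoint and an OAuth access token
GATEWAY_URL = "https://example-gateway.gateway.bedrock-agentcore.us-east-1.amazonaws.com/mcp"
ACCESS_TOKEN = "<bearer-token-from-identity-provider>"

gateway_client = MCPClient(
    lambda: streamablehttp_client(GATEWAY_URL, headers={"Authorization": f"Bearer {ACCESS_TOKEN}"})
)

with gateway_client:
    # Discover the tools the Gateway exposes and hand them to an agent
    tools = gateway_client.list_tools_sync()
    agent = Agent(system_prompt="You answer questions using enterprise tools.", tools=tools)
    print(agent("Summarize the open Jira tickets for the Atlas project."))
```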
1. Agents as Tools (Hierarchical Delegation)
The Agents as Tools pattern mirrors hierarchical microservice architectures. A top-level orchestrator delegates tasks to specialized agents, each responsible for a specific domain. This approach is ideal when tasks require distinct expertise, such as IT support, HR, or finance.
Enterprise Scenario: An IT helpdesk receives tickets spanning infrastructure, access management, and application support. Each ticket must be routed to the correct specialist.
from strands import Agent, tool

# Domain tools such as server_diagnostics, okta_api, and app_logs_analyzer are
# placeholders assumed to be defined elsewhere (custom @tool functions or Gateway tools).

# Specialized IT agents, exposed as callable tools via the @tool decorator
@tool
def infrastructure_agent(ticket: str) -> str:
    """Diagnose infrastructure issues across servers, networks, and cloud resources."""
    agent = Agent(
        system_prompt="You are an infrastructure specialist. Diagnose servers, networks, and cloud resources.",
        tools=[server_diagnostics, network_analyzer, aws_console_api]
    )
    return str(agent(ticket))

@tool
def access_management_agent(request: str) -> str:
    """Handle identity and access management requests."""
    agent = Agent(
        system_prompt="You are an IAM specialist. Manage user access, permissions, and authentication.",
        tools=[active_directory, okta_api, permission_validator]
    )
    return str(agent(request))

@tool
def application_support_agent(issue: str) -> str:
    """Troubleshoot issues in Salesforce, SAP, and other enterprise applications."""
    agent = Agent(
        system_prompt="You are an application support specialist for Salesforce, SAP, and enterprise apps.",
        tools=[app_logs_analyzer, knowledge_base_search, vendor_api]
    )
    return str(agent(issue))

# Supervisor orchestrator that delegates tickets to the specialist agents
it_coordinator = Agent(
    system_prompt="""You are the IT Support Coordinator. Analyze incoming tickets,
    delegate to specialized agents, and consolidate responses. Update ServiceNow after resolution.""",
    tools=[infrastructure_agent, access_management_agent, application_support_agent]
)

# Process example ticket
ticket = "User cannot access SAP system after password reset."
response = it_coordinator(ticket)
print(response)
Key Benefits:
- Faster and more accurate resolutions by delegating to domain experts
- Scalability: new specialists can be added without changing the orchestrator
- Knowledge preservation: each agent maintains domain expertise
Use Cases: Enterprise IT, customer support, multi-department workflows

2. Swarms (Peer Collaboration)
Swarms implement decentralized peer-to-peer collaboration. Agents iteratively exchange information, refine each other’s outputs, and collectively produce a result. This pattern excels for multi-stakeholder decision making or complex analysis.
Enterprise Scenario: HR teams must provide a policy recommendation for a senior engineer requesting remote work across states. The analysis must consider compensation, benefits, compliance, and company culture.
from strands import Agent
from strands.multiagent import Swarm
from strands.models import BedrockModel
# HR specialist agents (salary_database, market_data_api, and the other tools below
# are placeholders assumed to be defined elsewhere)
comp_agent = Agent(
name="compensation_specialist",
system_prompt="Analyze salary bands, bonuses, and market rates.",
model=BedrockModel(model_id="us.amazon.nova-pro-v1:0"),
tools=[salary_database, market_data_api, equity_calculator]
)
benefits_agent = Agent(
name="benefits_specialist",
system_prompt="Evaluate health insurance, 401k, PTO, and perks.",
model=BedrockModel(model_id="us.amazon.nova-pro-v1:0"),
tools=[benefits_catalog, cost_calculator, provider_api]
)
compliance_agent = Agent(
name="compliance_specialist",
system_prompt="Verify labor law compliance across states.",
model=BedrockModel(model_id="us.amazon.nova-pro-v1:0"),
tools=[legal_database, regulation_checker, audit_log]
)
culture_agent = Agent(
name="culture_specialist",
system_prompt="Assess alignment with company values and team dynamics.",
model=BedrockModel(model_id="us.amazon.nova-pro-v1:0"),
tools=[employee_surveys, culture_metrics, dei_guidelines]
)
# Configure HR policy swarm (the specialist agents become the swarm's nodes)
hr_swarm = Swarm(
    [comp_agent, benefits_agent, compliance_agent, culture_agent],
    max_handoffs=3,
    max_iterations=2,
    execution_timeout=180.0,
    node_timeout=45.0
)
# Analyze scenario
scenario = """
Senior engineer in California requesting remote work from Texas.
Salary: $180K. Must consider benefits, taxes, equity vesting, team collaboration, and multi-state laws.
"""
result = hr_swarm(scenario)
print(f"Policy recommendation: {result.final_response}")
print(f"Specialists consulted: {[node.node_id for node in result.node_history]}")
Key Benefits:
- Parallel refinement of knowledge from multiple perspectives
- Audit trail of contributions from each specialist
- Faster decision-making than sequential review
Use Cases: HR policy analysis, multi-stakeholder decision-making, quality assurance
3. Graphs (Structured Workflows)
Graphs provide deterministic, structured flows where agents communicate through predefined edges. Each node represents an agent, and edges define the flow of information. This pattern ensures predictable outcomes while maintaining modularity.
Enterprise Scenario: A SharePoint-based enterprise RAG system retrieves documents, identifies relationships, validates access, and synthesizes answers.
from strands import Agent
from strands.multiagent import GraphBuilder

# Document processing agents (metadata_extractor, vector_db_query, and the other
# tools below are placeholders assumed to be defined elsewhere)
classifier = Agent(system_prompt="Classify documents by type and sensitivity.", tools=[metadata_extractor, taxonomy_api])
searcher = Agent(system_prompt="Perform semantic vector search.", tools=[vector_db_query, embedding_model])
relationship = Agent(system_prompt="Map document relationships.", tools=[graph_database, citation_tracker])
access_validator = Agent(system_prompt="Verify user permissions.", tools=[azure_ad_api, dlp_checker])
synthesizer = Agent(system_prompt="Combine results into a coherent answer.", tools=[summarization_model, citation_formatter])

# Build agent graph
graph_builder = GraphBuilder()
graph_builder.add_node(classifier, "classify")
graph_builder.add_node(searcher, "search")
graph_builder.add_node(relationship, "relationships")
graph_builder.add_node(access_validator, "security")
graph_builder.add_node(synthesizer, "synthesize")
graph_builder.add_edge("classify", "search")
graph_builder.add_edge("classify", "relationships")
graph_builder.add_edge("search", "security")
graph_builder.add_edge("relationships", "security")
graph_builder.add_edge("security", "synthesize")
graph_builder.set_entry_point("classify")
doc_graph = graph_builder.build()
query = "Company policy on hybrid work and which teams implemented it successfully?"
result = doc_graph(query)
print(result)
Key Benefits:
- Secure, predictable data flows
- Access control and compliance enforcement
- Scalable document processing pipelines
Use Cases: Enterprise search, regulated document processing, complex analytics pipelines
4. Workflows (Sequential and Parallel Orchestration)
Workflows orchestrate agents in sequential or parallel steps, ideal for deterministic processes with dependencies. Agents can run concurrently where possible, ensuring efficiency while maintaining governance.
Enterprise Scenario: Employee onboarding involves HR setup, IT provisioning, access rights, training, and manager notifications.
from strands import Agent
import asyncio

# Onboarding agents (workday_api, azure_ad, mdm, okta_api, lms_api, and email_api
# are placeholder tools assumed to be defined elsewhere)
hr_agent = Agent(system_prompt="Create employee record and payroll.", tools=[workday_api])
it_agent = Agent(system_prompt="Provision IT resources.", tools=[azure_ad, mdm])
access_agent = Agent(system_prompt="Assign system access.", tools=[okta_api])
training_agent = Agent(system_prompt="Schedule training.", tools=[lms_api])
manager_agent = Agent(system_prompt="Notify manager of completion.", tools=[email_api])
def onboarding_workflow(employee_data):
    # Step 1: the HR record must exist before anything else
    hr_result = hr_agent(f"Create record for {employee_data['name']}")

    # Step 2: IT provisioning and access setup can run in parallel; the
    # synchronous agent calls are dispatched to worker threads
    async def parallel_setup():
        return await asyncio.gather(
            asyncio.to_thread(it_agent, f"Provision IT for {employee_data['name']}"),
            asyncio.to_thread(access_agent, f"Configure access for {employee_data['name']}")
        )
    it_result, access_result = asyncio.run(parallel_setup())

    # Step 3: sequential follow-ups once accounts and devices exist
    training_result = training_agent(f"Schedule training for {employee_data['name']}")
    manager_result = manager_agent(f"Notify completion for {employee_data['name']}")

    return {
        "hr": hr_result,
        "it": it_result,
        "access": access_result,
        "training": training_result,
        "manager": manager_result
    }
employee = {"name": "Jane Smith"}
onboarding_result = onboarding_workflow(employee)
print(onboarding_result)
Key Benefits:
- Reduced onboarding time and zero missed steps
- Parallel efficiency for IT and access provisioning
- Full audit trail for compliance and governance
Use Cases: Onboarding/offboarding, compliance pipelines, sequential approval processes
Combining Patterns in Complex Systems
In practice, enterprises combine these patterns for end-to-end solutions. For instance, a legal contract review system may (see the sketch after this list):
- Use a Swarm to analyze clauses from multiple perspectives
- Route outputs through a Graph for structured approvals
- Invoke Agents as Tools for domain-specific validation
- Use a Workflow to sequence notifications, archival, and audit logging
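A minimal sketch of this composition, assuming a clause-analysis Swarm (clause_swarm) and an approval Graph (approval_graph) built with the same APIs shown in the earlier sections; both names are hypothetical.

```python
from strands import Agent, tool

# Hypothetical composites built as in the Swarm and Graph sections above:
# clause_swarm = Swarm([legal_agent, finance_agent, risk_agent], ...)
# approval_graph = GraphBuilder()...build()

@tool
def analyze_clauses(contract_text: str) -> str:
    """Analyze contract clauses from legal, financial, and risk perspectives (Swarm)."""
    return str(clause_swarm(contract_text))

@tool
def route_for_approval(analysis: str) -> str:
    """Route the consolidated analysis through the structured approval graph."""
    return str(approval_graph(analysis))

# The coordinator treats the swarm and graph as tools, then sequences
# notification, archival, and audit logging as workflow steps.
contract_reviewer = Agent(
    system_prompt="""Review contracts: analyze clauses, route the analysis for approval,
    then summarize next steps for notification, archival, and audit logging.""",
    tools=[analyze_clauses, route_for_approval]
)
```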
This ensures accuracy, scalability, observability, and governance, just like a robust microservices architecture.
Conclusion
AWS Strands Agent Patterns allow organizations to design AI systems like microservices: modular, specialized, scalable, and observable. By combining Agents as Tools, Swarms, Graphs, and Workflows, enterprises can tackle complex workflows with accuracy, governance, and resilience.
From IT ticketing to HR policy, document retrieval, and onboarding pipelines, multi-agent AI transforms enterprise operations into efficient, auditable, and production-ready systems.
Thanks, Sreeni Ramadorai