🦄 Making great presentations more accessible. This project aims to enhance multilingual accessibility and discoverability while maintaining the integrity of the original content. Detailed transcriptions and keyframes preserve the nuances and technical insights that make each session compelling.
Overview
📖 AWS re:Invent 2025 - The Shapeshifting Application: Architecture That Transforms Across AWS (CNS426)
In this video, AWS Solutions Architects Sai Charan Teja Gopaluni and Leandro Cavalcante Damascena demonstrate how to deploy a single application across multiple AWS compute platforms (Lambda, ECS, EKS) using Clean Architecture principles. They show the evolution from monolithic to microservices to clean architecture, emphasizing separation of business logic from infrastructure through the ports and adapters pattern. Using a FastAPI e-commerce application example, they live-code the implementation of ConfigPort interfaces and platform-specific adapters (LambdaConfigAdapter, ECSConfigAdapter) that handle environment-specific configuration such as DynamoDB table names. The demonstration includes deploying the same containerized application to EKS Auto Mode and to Lambda behind an ALB, proving portability across compute options without rewriting core business logic.
This article is entirely auto-generated while preserving the original presentation content as much as possible. Please note that there may be typos or inaccuracies.
Main Part
Introduction: The Challenge of Choosing the Right Compute Service
All right, hey guys. Hi. Thank you for being here today. We're exactly at the halfway mark of re:Invent, along with the step count. I think you're also getting lots of coffee. Hope you're doing good. Thanks for showing up to the session.
But before we kickstart the session: my name is Sai Charan Teja Gopaluni. I'm a Senior Specialist Solutions Architect. My domain area is containers. I've been with AWS for about 9.5 years now, and I do everything containers: ECS, EKS. And with me is Leandro. Yeah, I'm Leandro Cavalcante Damascena. I'm a Senior Solutions Architect, and I do everything serverless. So today we play the roles of the operations guy and the developer guy: one that loves containers, the other that loves serverless.
Yeah, I guess so. So in that line, let's talk about this, right? If I were to say you have to deploy every workload that you're going to deploy on AWS for the next five years in just one compute service, which one would it be? Show of hands: Lambda. Yeah, okay. ECS. Okay, I see more. EKS. Oh, ECS wins, I guess.
Okay, so that’s the conundrum that we were gonna address with this session actually. So within an organization, we often have, as a platform, if I was gonna roleplay as a platform engineer, a developer would ask me for different kinds of platforms that suit different use cases, whether it be Lambda, whether it be ECS or EKS in the container world. So how we can devise an app that can be deployed across various services is what we’re gonna talk through as a part of this.
The slide deck is going to be short; we're going to jump into the coding very soon. This is the agenda for today: we're going to touch on some modern architecture requirements, then how you can evolve your software architecture patterns to suit them, and then we're going to go into the deployment options that are available. Thereafter it's going to be live coding, and let's hope the demo gods are kind and everything works.
Modern Architecture Requirements and Software Evolution Patterns
With that said, okay. When we talk about modern architectures, let's take an e-commerce app as an example, like Amazon. You would have these frontends, the UI, then orders, catalogs, checkouts. Depending on the functionality of these apps, like we discussed in the prologue, we'll have different compute options that may be better suited to each purpose-built requirement. And it doesn't stop there. After you deploy the application, you then reach external services, whether they are services that run on-premises or elsewhere, or additional AWS services like the persistence layer we talked about.
And at scale, managing this becomes a big challenge. Yes, you can have a GitOps workflow, with commits coming in and manifests refactored across different compute options, but it becomes challenging when you have multiple architectures involved in an organization.
And that brings us to the need to have your software architecture evolve along with the platform patterns you devise within your company.
First is the traditional one, the monolithic architecture. This is the most fundamental unified code base, where you have your domain entities, business logic, everything packaged together. Then comes the first level of evolution, the microservice architecture, where the layers are separated, whether presentation layer or data layer, with a clear hierarchical structure between them. And at the end we have clean architecture, where the actual domain entity, the core business logic, is the center of the application.
Around it sit constructs like the ports and adapters model, where you can evolve the application into different use cases while keeping the core business logic the same; the constructs around it change depending on the platform you're deploying to. In today's talk, most of our discussion will center on the clean architecture pattern.
And when you have these kinds of patterns on AWS, whether monolith, microservices, or clean architecture, and especially in the context of this talk, we are touching both serverless and container solutions: Lambda, ECS, and EKS. They provide various options to deploy these. You can build a container and deploy it on Lambda as well as on ECS and EKS.
You could have your application packaged as a ZIP, and that could be deployed on Lambda as well. With the multiple launches that came as part of this re:Invent and the last one, EKS now provides compute options such as Auto Mode to automate your data plane infrastructure, ECS has come up with ECS managed instances, which provides similar behavior, and Lambda, just a couple of days ago (that's his favorite project), came up with Lambda managed instances that supports similar functionality. I hope you liked it. I think that's a good segue. We're going to stop here on the fundamentals and hand it over to you.
Real-World Challenges in Moving Applications Across Platforms
Yeah, thanks a lot. Before we start, you can switch to my computer. So before we start, how many of you had to really move applications across different platforms? I mean, between Lambda and ECS, between ECS and EKS. Sorry, sometimes I use this paper because I’m an old-school guy. And what challenges did you face when you were moving applications across different platforms? Does someone want to share a good story or a bad story to talk about this?
Based on our experience at AWS, we see that customers usually start with Lambda, because Lambda is serverless, Lambda is easy to scale, and, to be honest, Lambda is amazing. But it depends on the business requirements. Sometimes your company says, no, you need to move to ECS, or you now need to move to EKS, because of your platform teams or whatever. Whatever requirements or decisions your company makes, when it happens you feel that you have to rewrite at least half of your application, because now you are super dependent on the platform. Of course, it depends on the case. I imagine many of you are using Clean Architecture, hexagonal, or ports and adapters, whatever name you call it. Some of you are probably already using that and are not suffering much, but if not, this is what can happen.
So now we start on some code; today we are going through this code. Just to touch on some of the points Leandro was making: you start with Lambda because it's easy, but then you run into challenges with execution timeouts, the memory it can support, the resources it can take, and you figure out that you might want to move to container services. You go to something like ECS, and that may fit certain workloads. Then you want some open source flexibility, and you might go to EKS. Those are the challenges we often see in the real world, especially during migration and POC phases, where a lot of our customers ask which one is the right fit for them. Because even in the containers world there are 17 ways to deploy containers on AWS, which one is right is the million-dollar question. Exactly, that's a good point. Thanks.
The Monolithic Architecture Problem: Coupling Business Logic with Infrastructure
So this is what we’re going through today. I expect to cover this topic in the next 20 to 25 minutes maximum because we also have deployment and need to show this is working. But let’s start with identifying coupling between business logic and infrastructure. Let’s start with the monolith. This is a very traditional monolithic application here. It’s using FastAPI, so you have a server. You have an app that implements some routes, and then you have your model or your logic to save your users.
But in this case, you see that save_users is not doing only what it was supposed to do, which is validate that your user is valid and save the user. It's doing a lot of other things. Does someone want to point out some critical points in this monolithic architecture? If not, I'll explain. It makes assumptions about the infrastructure you are using. It says that you must use DynamoDB, it assumes you must use the AWS SDK, and it assumes this specific table name, users. So this is not doing what it's supposed to do, which is implementing the logic to save the user. It makes assumptions, and those assumptions bring a cost.
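To make the coupling concrete, here is a minimal sketch of what a monolithic endpoint like the one described might look like; the route, parameter names, and validation details are assumptions reconstructed from the talk:

```python
# Hypothetical sketch of the coupled monolith described above:
# business logic, AWS SDK calls, and infrastructure assumptions in one place.
import boto3
from fastapi import FastAPI, HTTPException

app = FastAPI()
dynamodb = boto3.resource("dynamodb")  # hardcoded AWS SDK dependency
table = dynamodb.Table("users")        # hardcoded table name

@app.post("/users")
def save_user(name: str, email: str):
    # Validation (business logic) tangled with persistence (infrastructure)
    if len(name) < 2 or "@" not in email:
        raise HTTPException(status_code=400, detail="invalid user")
    table.put_item(Item={"name": name, "email": email})  # DynamoDB is assumed
    return {"saved": True}
```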
So let’s calculate how many problems we can have here. First of all, we have a mix between logic and infrastructure, and you also have a hardcoded database. And if for some requirements you need to move to a different database tomorrow, you probably need to rewrite all this code here. You probably need to rewrite not only users but also orders,
and you can have shipping, email, or any other services that you are running here. This is propagating environment variables everywhere. Of course, you can say, "No, Alejandro, I can create helper functions to help with that," but you are probably not doing the best thing to solve this problem. Let’s see how this application is running here.
I hope I saved my session. Okay, I have this script here, save_users. It's working; this is the magic here. Can you see my screen? Good. You see that I made an invocation to this users endpoint, and it's saving the user. It's simple. If you want to start very simple, go with a monolith and save your users. But now we start to add new requirements. In this application, users needed to be saved in a DynamoDB table. But across different environments, for example on ECS or EKS, you are probably not keeping the table name in an environment variable, for various reasons; you are keeping it in SSM, for example.
So let’s start with simple code here on how to improve this one. I will import boto3. I will create a new session here for boto3 SSM, and now I can make some new assumptions. For example, runtime. You can get the runtime here. It can be ECS or Lambda. So I set the default to Lambda, and I have static ECS. If runtime Lambda, sorry. Oh sorry, sometimes my keyboard doesn’t work. So you’re getting this from the environment variable. If runtime ECS, I want to get this from my SSM because of some requirements you can have there. SSM client get parameter, var slash users slash table. If I’m not wrong, this is the syntax, right? You get this. And if this is EKS, you can have a different way to get this one. You can get the name, for example, from a file. I don’t know. Let’s just pretend we are doing some decisions here based on your application, and then table name equals file read. So we implemented this new flexibility, so this is good. Sorry, sorry, sorry.
This is good because we now have flexibility depending on the environment, but you'd need to do this across all the services you have in this monolith. So if you have ten entities or twelve or whatever, you'd need to do it everywhere again. You can write helper functions for that, but that's not the point here. The point I'm trying to make is: how can I have a clean architecture, good code, where regardless of the environment I'm deploying to, it is easy to switch between environments?
And this also introduces another issue, because here we are literally hardcoding the way we get the configuration. How can I test this? How can I inject, for example, a boto3 client here? I need to mock, and sometimes my tests will be more complex than my code, because now you need to mock everything. You need to write a lot of test setup, or run local tests with moto or another library to fake a DynamoDB client. It would be much easier if you could inject a DynamoDB client here, or any other client instance you want. So this is the monolith, and it works.
Microservices Approach: A Partial Solution to Platform Portability
You saw that I could save users; I can do the same with orders. But we need to evolve this, to make it more flexible and better. I'm pretty sure some of you are thinking, "Okay, Leandro, I think microservices can solve this, right?" Microservices can solve part of this problem, but if you implement a purely microservices architecture, you are still not solving the problem of how to transport and deploy it across different compute environments; you still probably carry some monolithic components into your microservices. I'm not saying you can't combine Clean Architecture with microservices. Of course you can, and I see a lot of customers doing that. But the point of this talk is how to make it easy to move to different architectures.
So let’s take a look a little bit at the microservices way that you wrote this. We still have something that’s very coupled here. If you look at the models, we have some initial validation. If you look at the controller, we have the repository here that uses a repository. We are implementing an in-memory user repository. But also, if you needed to refactor this because you are going from Lambda to ECS, this is also not solving the critical problem that you want to solve here, which is how can I make it easy to go to different architectures? Anyway, that’s where we’re going. I’m just showing the microservices here for the purpose of showing microservices.
Does anyone have any questions, or something you want to add to this talk? No? Are some of you using microservices or Clean Architecture in your workloads? And you combine both? Okay, nice. And is it solving your problem of deploying to different architectures or not? Most of the deployments are in ECS? Okay, so it's multiple microservices. Nice. But is your code ready, for example, to go to another platform, to deploy in Lambda or ECS? Okay, nice. That's good.
Clean Architecture Fundamentals: Domain, Ports, and Use Cases
So let’s jump a little bit into Clean Architecture. Probably a lot of you are familiar with this architecture here. This is not new to anyone, but here we basically have three different components. We have the application, which is your pure business logic. Probably sometimes when I’m writing Python, this is literally your Python code here. You don’t need to import any dependency here. Sorry, let me close here. You don’t need to import any dependency here. This application, this code here, should not be aware of what the underlying database is because it’s your business logic. And you have it here. Sorry, it’s the domain. Sorry, here it’s the domain that I’ll talk about. Sorry, I opened the wrong folder.
The only thing this code is aware of is that I have a user. I don't care where I save the user, but I need to validate it: the name must have at least two characters, and the user must have a valid email in this format. You see that we are not coupling this to the SDK or boto3 or anything else. We are just writing the pure business logic, which is what will help us move across different architectures.
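A minimal sketch of such a domain entity, assuming a dataclass-based User with the two validation rules mentioned (field names and the email regex are assumptions):

```python
# Pure domain entity: plain Python, no SDK or framework imports.
import re
from dataclasses import dataclass, field
from uuid import uuid4

EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

@dataclass
class User:
    name: str
    email: str
    user_id: str = field(default_factory=lambda: str(uuid4()))

    def __post_init__(self):
        # Business rules from the talk: name length and email format only.
        if len(self.name) < 2:
            raise ValueError("name must have at least two characters")
        if not EMAIL_RE.match(self.email):
            raise ValueError("email is not valid")
```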
Then we have the application folder, which uses Ports and Adapters. I know some people see Clean Architecture and Ports and Adapters as different things, but in my personal opinion there is a lot of overlap between them, and you can combine both and get the benefits of both. The core of these architectures is solving one problem: how can I keep my application, my business logic, and my underlying dependencies isolated from each other?
So here we have the Ports. A Port is basically a contract, a promise. What I'm saying is: I have a user. To create a user, I expect an object of type User. To find a user, I expect an ID that is a string. And to delete, I expect an ID as well. As you can see, this makes no assumption about the database it's saving to.
This is only taking care of the contract that everyone who wants to save a user must respect, whether that's PostgreSQL, DynamoDB, or any other database. They must respect this contract. Now let me grab my paper to check whether I'm solving the problem.
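A sketch of what that repository port might look like, reusing the User entity sketched above; the class and method names are assumptions based on the create/find/delete operations described:

```python
# Repository port (contract); the module path follows the talk's layout.
from abc import ABC, abstractmethod

from domain.user import User  # the domain entity sketched above

class UserRepositoryPort(ABC):
    @abstractmethod
    def create(self, user: User) -> User: ...

    @abstractmethod
    def find(self, user_id: str) -> User | None: ...

    @abstractmethod
    def delete(self, user_id: str) -> None: ...
```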
But before that, let me come back here and check this. So we went through the monolith. The second part is the application layer: the ports, which I just explained, and our use cases. Let me open the use case here. The contract is what I define in my ports, and the use case is what you implement: the CreateUserUseCase class. It expects a user repository, and it exposes an execute method.
At this point you see a new pattern in line 11. You are not only expecting a dependency; you are applying dependency inversion. Now, when you go to your tests, you can easily fake what a user repository is, because when you're doing unit tests or functional tests, you are not necessarily concerned with whether this saves to your actual database; that's more for an end-to-end or integration test. For unit tests, you care about whether whoever consumes the API or implements the method is respecting the contract. That's the general idea, and it makes it easy to write unit tests.
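A sketch of the use case with constructor injection, plus an in-memory fake that satisfies the same contract for unit tests (names are assumptions; the port and entity are the ones sketched earlier):

```python
# Use case with dependency inversion: it depends on the port, not on boto3.
class CreateUserUseCase:
    def __init__(self, user_repository: UserRepositoryPort):
        self.user_repository = user_repository  # injected dependency

    def execute(self, user: User) -> User:
        return self.user_repository.create(user)

# In a unit test, an in-memory fake respects the same contract,
# so no boto3 mocking is required:
class InMemoryUserRepository(UserRepositoryPort):
    def __init__(self):
        self.users: dict[str, User] = {}

    def create(self, user: User) -> User:
        self.users[user.user_id] = user
        return user

    def find(self, user_id: str) -> User | None:
        return self.users.get(user_id)

    def delete(self, user_id: str) -> None:
        self.users.pop(user_id, None)
```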
So let me recap. We understand what a port is, we understand what the user entity is, and we went through the domain to understand what operation is expected when saving a user. Does anyone have any questions about this architecture? No? Okay.
Implementing Adapters for Multi-Platform Configuration Management
But let’s now, you remember that in the monolithic architecture, we had to deal with the challenges that we now needed to discover the table naming based on the environment. And how can you do that in a clean architecture without going through every single service and implementing the same logic? How can I do that in a more clean way?
First of all, let’s take a look here at the script that is following. Let’s first create a new file that you call ConfigPort. This will be our port, our contract. So everyone that needs to get an environment variable to decide the name of the table must respect this contract. I agree that someone that is doing Python is probably using protocol instead of abstract base class, but let’s do that. So I call that the ConfigPort. Let’s see the name. Yeah, I implemented this as an abstract class and I defined two methods here. That means everyone that needs to get an environment variable to decide the name of the table must respect this contract.
So I define these as abstract methods. I have a get method that takes self and a key, which is a string, and then it just passes, because this is literally the interface I want; I don't want to implement anything here. And then I also define get_table_name, which takes the entity, that is, the name of the table you save to, also a string. I'll explain later what goes here when I complete the implementation.
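Reconstructed from the live coding, the ConfigPort contract might look like this (the exact signatures are assumptions):

```python
# ConfigPort: the contract for looking up environment-specific configuration.
from abc import ABC, abstractmethod

class ConfigPort(ABC):
    @abstractmethod
    def get(self, key: str) -> str:
        """Return a raw configuration value for the given key."""

    @abstractmethod
    def get_table_name(self, entity: str) -> str:
        """Return the table name for an entity (e.g. 'users')."""
```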
So first of all, I defined the contract that must be respected. But now we need to create our adapters, because, at least for me, one of the benefits of clean architecture and ports and adapters is flexibility. How can I make this flexible based on the environment I define, and flexible enough to add a new environment if I want?
So let’s create our first adapter for that. Let me press here and create a new folder called Adapters, and I will call this new file lambda_config_adapter.py. First of all, I need to import from application.ports import ConfigPort.
Now I define a class, LambdaConfigAdapter, that implements this ConfigPort. That means I need to respect the contract and implement those methods to get my configuration. So I implement the two methods required by the class. The get method takes the key parameter; I need to implement the same interface, otherwise Python will complain. Since this is Lambda, I literally get the table name from an environment variable. I also need to implement get_table_name, which takes the entity key, same idea, and reads the environment variable. And I annotate the return type as str.
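A sketch of that Lambda adapter; the environment variable naming convention is an assumption:

```python
# Lambda adapter: on Lambda, configuration lives in plain environment variables.
import os

from application.ports import ConfigPort  # path as used in the talk

class LambdaConfigAdapter(ConfigPort):
    def get(self, key: str) -> str:
        return os.environ[key]

    def get_table_name(self, entity: str) -> str:
        # e.g. entity "users" -> env var USERS_TABLE_NAME (naming is assumed)
        return os.environ[f"{entity.upper()}_TABLE_NAME"]
```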
So I’ve implemented my first adapter here. After implementing another one for ECS and another one for EKS, for example, we’ve modified the application to factor this regardless of the environment. The application will be able to load the default configuration. So let me create another adapter here that is for the ECS one, called ecs_config_adapter.py. For the sake of speed, I will copy the same code from the Lambda adapter. It’s ECSConfigAdapter, but you remember that when using ECS, I don’t want to get my environment variables from the table name from the environment variables. I want to get this from the SSM, so I import boto3. I create the SSM client.
Now I just need to change this call to ssm_client.get_parameter and pass the key, here as well. So you see that, unlike the monolith, we are not hardcoding this inside the save-users logic. We are abstracting it. We have adapters like the Lambda adapter and the ECS adapter, and now we are starting to decouple the application.
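And a sketch of the ECS adapter, with the SSM parameter path layout as an assumption:

```python
# ECS adapter: same contract, but values come from SSM Parameter Store.
import boto3

from application.ports import ConfigPort

class ECSConfigAdapter(ConfigPort):
    def __init__(self):
        self.ssm_client = boto3.client("ssm")

    def get(self, key: str) -> str:
        return self.ssm_client.get_parameter(Name=key)["Parameter"]["Value"]

    def get_table_name(self, entity: str) -> str:
        # e.g. entity "users" -> parameter /users/table (path layout is assumed)
        return self.get(f"/{entity}/table")
```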
I could do the EKS one too, but for the sake of time, since I also have to demonstrate the deployment, let's move directly to how to wire this into the application. Let me show you the application. How can I make it flexible enough to deploy to Lambda, ECS, or EKS? This is a basic FastAPI application; those of you who write Python are familiar with this code. Nothing magic here, just a basic application. But now we need to think a little bit: Lambda expects a handler, while ECS and EKS run normal containers that just expect an entry point. How can I make this application aware of the environment and easy to deploy in different environments?
So if I go to the Lambda part, you see that I imported Mangum and wrapped the FastAPI app with it. I'm preparing my API to be deployed on Lambda. Those who code in Python probably know what Mangum is: it's a proxy in the middle that turns an ASGI application like FastAPI into a Lambda handler.
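A minimal sketch of that wrapping, assuming the standard Mangum usage:

```python
# One FastAPI app, plus a Lambda handler entry point for the Lambda deployment.
from fastapi import FastAPI
from mangum import Mangum

app = FastAPI()

# On ECS/EKS, a server (e.g. uvicorn) serves `app` directly as the container
# entry point; on Lambda, `handler` is the configured function handler.
handler = Mangum(app)
```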
But you can see that I only have one app here. I don't have one app with Lambda-specific code and another with ECS-specific code. This is a single codebase. That is the benefit and, in my opinion, the beauty of clean architecture. I'm not saying I never use a monolith; I have applications running as monoliths on Lambda and on containers. That's how the world started, with monoliths, so I'm not complaining about that or saying it's bad.
So we've made our ports and our adapters, but how do we make the application aware of all of this? For that, let me copy this in, because we are probably running out of time; you can go through this file later. Let me remove the EKS part, for the sake of time. We are adding a file so that, before creating the application I'm showing here, you call a factory to get the configuration.
So I'll just keep the example here: you make a factory that is aware of the environment. The different parts of my code are now super clear. I have my ports, which are my contracts; I have my adapters, which do different things depending on the environment (get from environment variables, get from SSM); and I have my application. So it's time to make this happen in the application. Unlike the monolithic architecture, where I made the decision inline and wrote a lot of code that cares about the platform, I just need to import my factory here.
So from infrastructure.factory I import the class, CreateHappiness. I create my dependency here, and then I go into my FastAPI application. After I create my app, I can literally call it: this variable will contain the return of CreateHappiness, the factory that uses our adapters. Does it look complex or not?
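The factory's name is garbled in the transcript ("CreateHappiness"), so here is a hedged sketch of the wiring with a hypothetical create_config function standing in for it (module paths follow the ones used in the talk; lowercase folder names are assumed):

```python
# Hypothetical factory wiring: pick the adapter once, based on the runtime;
# the rest of the app only ever sees the ConfigPort contract.
import os

from adapters.ecs_config_adapter import ECSConfigAdapter
from adapters.lambda_config_adapter import LambdaConfigAdapter
from application.ports import ConfigPort

def create_config() -> ConfigPort:
    runtime = os.environ.get("RUNTIME", "lambda")  # env var name is assumed
    if runtime == "ecs":
        return ECSConfigAdapter()
    return LambdaConfigAdapter()

# In the FastAPI application:
# config = create_config()
# table_name = config.get_table_name("users")
```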
Does someone have a question? Okay, go ahead. Can you get a mic, please? There is a hand mic. Is it on? It's off. Try again. Okay, this is great. Thank you. I learned about clean architecture today when I came to your session, but the pattern I'm noticing here is: while it's great that we're separating everything and making it agnostic of the runtime and the infrastructure, compared with the monolith we had to add tons more lines of code just to replace five or six lines. So what's your take on that?
Yeah, that’s a good point. Every decision that you make comes with some pros and downsides. That’s true. To keep a monolithic architecture, you have probably problems to scale because now you need to scale at once and then you need to go at once. But when it’s coming to code, at least for me, maintaining a monolith architecture, every decision that you make, I feel that I need to rewrite a lot of that application, a lot of code in that application.
We are not trying to say don't use a monolith, use only clean architecture; every decision comes with downsides and benefits. When I maintain a monolithic application, I feel that every single change, for example switching from saving to DynamoDB to saving to PostgreSQL, creates a lot of refactoring. Whereas in this architecture, and of course this example is about reading configuration, but it could just as well be DynamoDB versus PostgreSQL, the change is contained in an adapter. My take is that, depending on how much you need to refactor the application, there are benefits on both sides, monolith and clean architecture.
Again, it's hard for me to say go to clean architecture or stay with a monolith, because I don't know your use case completely. For sure, if you want to start a simple application, go monolithic: create a single file and deploy it, especially in Python where you can start with just one file. Clean architecture makes sense for much larger applications, especially if you are building an enterprise-grade application that will be maintained not only by you but by a team collaborating on different parts of the application. But I agree with you: we ended up with, I don't know, at least 50 files for this. Yeah, and I'll show that.
To carry that topic further: that question applies to infrastructure as well, based on your team size, design choices, and learning curve, whether it's the orchestrators we're going to look at or the code. It varies; there's no one-size-fits-all. What we are showing for the purpose of this code talk is that if you have an architecture like this, it's easy to port across different compute options. Yeah, and the last change that you need to make is... let me take that question first. Sorry, go ahead.
Live Deployment Demonstration: EKS and Lambda in Action
So yeah, here is the demonstration. Of course, we have the same code, but now on Sai's machine, and we will deploy it to different compute options. One sec, what happened to the SSH here? Sorry, this is your laptop. It's mine. I'm not getting the terminal; I didn't close it. This is what happens, but it's okay, we're going to get it right. The terminal isn't responding when you click it. What's going on? All right, I think it's the internet, yeah.
Here you can see that in this deployment folder I have ECS, EKS, and Lambda: three folders corresponding to the different compute options we're talking about. Each one has shell scripts we made for demonstration purposes that deploy the infrastructure. Some of you have used EKS before; as you know, EKS takes a few minutes for the control plane to come up, so ahead of this presentation we created the control plane about 40 minutes ago. We're just going to go straight to the deployment part for the EKS portion. This code will be available for you to take home after this presentation; we'll show you the QR code at the end as well.
That said, first and foremost, here I have this build-and-push shell script, which builds the container image. I'm not going to show the ECR login part, but as you can see, it takes the Dockerfile in the root location, which copies the base clean architecture code that Leandro just wrote and runs it as the entry point.
When I execute this shell script, it creates an ECR repository, and within it, it builds a multi-architecture image that suits either x86 (AMD64) or Graviton-based (ARM64) architectures. Now that we have the image ready, this single image, this single app, can be deployed on ECS as well as EKS. So that's what we're going to show here, starting with EKS.
To start with, in the EKS world we have this deployment script that creates a deployment manifest. It checks that the EKS cluster is alive, which I'm going to show you, and then it creates the deployment manifest. In it, we refer to the same image that we just built, and we set PYTHONUNBUFFERED=1 so we can live-tail the stdout logs. Then we create a Kubernetes Service that exposes the app via a load balancer so we can test the application live.
It’s going to look in the Kubernetes existing config file. It sees that there is an EKS Auto Mode cluster that is available with two nodes, as you can see here. It has been created a couple of hours ago, 116 minutes is its age, and on this we are deploying the clean architecture app that we just built. Once it is ready, we can go into the EKS cluster here to see whether the infrastructure is coming up.
On my EKS cluster, if I go to the Resources tab, I can see the pods getting created, and they are leveraging compute provided by EKS Auto Mode. Right now a node just came up, and once it's ready, those nodes will host the pods. Once the pods are ready, we can see two of them transition to Running status.
Since we created the Kubernetes Service as well, it creates a load balancer behind the scenes. In the load balancer console, we can see that this NLB is in the provisioning state; it usually takes a couple of minutes to become active. Under the target group, we can verify that the targets are registered, because the Service selects pods based on the deployment's label selectors. The targets are currently in the initial registration state because the load balancer itself is still provisioning.
Within a couple of minutes, the load balancer should become healthy, and then it will serve the application we saw earlier. It exposes a couple of APIs: a users API for this e-commerce app, to add our users, and an orders API for those users, keyed by a custom ID. The same logic Leandro was showing in the code, that the user ID must exist before an order is processed, applies here. When you hit the API, it asks for those as inputs. Right now it's saving to DynamoDB, but if you want to change that, for example to save to PostgreSQL, you just need to swap the adapter, as sketched below.
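As an illustration of that claim, a hypothetical PostgreSQL adapter would implement the same repository port sketched earlier, leaving the use cases untouched (psycopg2 usage and the table schema are assumptions):

```python
# Hypothetical swap: a PostgreSQL adapter for the same UserRepositoryPort.
# Business logic and use cases are untouched; only the adapter changes.
import psycopg2  # assumes psycopg2 is installed

class PostgresUserRepository(UserRepositoryPort):
    def __init__(self, dsn: str):
        self.conn = psycopg2.connect(dsn)

    def create(self, user: User) -> User:
        with self.conn, self.conn.cursor() as cur:  # commits on success
            cur.execute(
                "INSERT INTO users (user_id, name, email) VALUES (%s, %s, %s)",
                (user.user_id, user.name, user.email),
            )
        return user

    def find(self, user_id: str) -> User | None:
        with self.conn.cursor() as cur:
            cur.execute(
                "SELECT user_id, name, email FROM users WHERE user_id = %s",
                (user_id,),
            )
            row = cur.fetchone()
        return User(name=row[1], email=row[2], user_id=row[0]) if row else None

    def delete(self, user_id: str) -> None:
        with self.conn, self.conn.cursor() as cur:
            cur.execute("DELETE FROM users WHERE user_id = %s", (user_id,))
```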
Yeah, our load balancer is about ready now. No, that's not the issue; it just takes time for registration to complete. You see, only one target became healthy so far. Yeah, and I think you accessed HTTPS, no? You need to use HTTP. On Lambda we use Mangum to expose the FastAPI app as a handler, but on EKS and ECS we're just serving it over plain HTTP. And in the deployment file, you could decide, for example, that the order service gets deployed on Lambda, because the scaling profile is different or because of some other requirement,
and that the users service gets deployed on EKS. Right. So let's add a user: say Sai is the username, and let's use this email (that's not my real email, don't worry). We add the user, and the user ID is randomly generated by the code Leandro wrote earlier. Only with this user ID will the backend system be able to process orders.
So if we go to the orders page, it should be ready. Is it the internet from EKS, or the network? No, it's not a pod issue, it's the network. On your local laptops, when you deploy, it shouldn't take this long, as long as you're not on the re:Invent network. I also have pre-recorded demos that I can show you, anticipating that this might happen. While this is loading, in the interest of time I'll skip ahead to Lambda and show you how we refactored the same application to suit Lambda as well.
Okay. In the Lambda world, we have this deploy-Lambda-with-ALB shell script. As the name says, we're going to build the container image out of this code, deploy it to Lambda, and expose it with an Application Load Balancer, so we can hit it from the web browser for verification, just like we're doing now. Alternatively, you could deploy a private Lambda function inside the VPC and use AWS IAM auth, with SigV4 authentication, to access it.
In this case, one small change is needed: since Lambda needs a handler, like Leandro was showing earlier, you cannot use an entry point the way container platforms like ECS or EKS do; Lambda is execution-based, one-off invocations. You need a handler and a slightly different Dockerfile that copies the handler into the task root and uses the Lambda Python base image that the function runs on.
So the first thing we do is run this deployment. It looks for the Lambda image; if it's not there, it builds that image from the Dockerfile I just showed you. It takes a couple of minutes; it's about a 100 MB image. Once the image is ready, it creates the Lambda execution role required for the function to run, and once the role is available, it deploys the Lambda function, creating the new function as you see here. It uses the same variables we saw in the EKS world for tailing the logs, and we're deploying on the x86 architecture. The function will be in the creating state; once it's active, the script looks at the networking configuration to create a load balancer to front the function so you can access it.
Now, this load balancer part is not strictly necessary; we did it so you can see the function in action. Once you have the Lambda function deployed, you can also access it from within your private VPCs that have direct connectivity.
So let’s go to. With that, give me one second. It’s almost getting ready. Okay, so here we have this orders page that we loaded earlier, finally load up. And once you have this order ID user ID, you can then have an order like echo one, and it would show up as this particular user requires
they ordered this Amazon Echo to the backend inventory system. This is all on EKS now as we have seen. While this page is loading, we have deployed a Lambda function here that essentially one minute ago we deployed this function and fronted it with the Application Load Balancer.
If we go to the Application Load Balancer again, it should be in the provisioning state, with registration happening against the target group. You can also reuse the same load balancer with different target groups; we set it up this way just for demonstration purposes. Here you see the same Lambda function being registered, and as your versions change, all that happens is a new registration for the new version of the function. Once it's healthy, in a couple of minutes, we should see the same exact application running on Lambda with the same functionality.
Application Load Balancers take less time than Network Load Balancers for DNS propagation; after it becomes active, it should be much quicker than a traditional NLB. We are not deploying to ECS, simply because we only have a few minutes left. Yes, that's okay, and then we'll move to the takeaways. We were able to demonstrate one container platform and one serverless platform, but the code is a takeaway for you to try at home. If you find issues or want to provide feedback, feel free to create a GitHub issue; the repo is maintained by both of us, so we can take a look.
Key Takeaways: Achieving Application Portability Through Clean Architecture
This code will be available on GitHub if you want to take a look. This one's the same; it's going to take a while because of the internet. Okay, so: same code, same application, on two different platforms. Exactly the same behavior that you were seeing earlier. You can see here, this is the Kubernetes application, and this is the Lambda application. Both are the exact same application running on different compute options, because we have the adapter architecture that Leandro showed. I think that's a good segue to the last takeaway slide.
That sort of sums up the takeaways. The first and foremost pillar of transforming this application to suit different architectures is separating your business logic from the infrastructure logic, which is the clean architecture approach. Once you have that separation of concerns, deployment becomes configuration-driven, using the runtime signals Leandro was showing for EKS, ECS, and Lambda; you can have the same exact application run in different places.
Then there is the implementation during the infrastructure phase: the platform team can still adhere to the IaC configurations they already use. At runtime, based on the environment variables passed in during deployment, the application adapts to the platform it's heading towards. Once you have these steps implemented, the payoff is portability.
With that said, this software architecture evolution gives you flexibility from the operational standpoint, which is the message we want to drive home with this session. Despite some hiccups, thank you for bearing with us. I hope it was good for you, and this is the QR code where you can get this application if you want to try it out. Thanks a lot. Thank you.
This article is entirely auto-generated using Amazon Bedrock.