Deploying a Machine Learning Model to AWS SageMaker: Complete Guide - PART 01

So today we will see how to deploy a custom-made machine learning model, or any model you find on Hugging Face, directly on the compute powerhouse that is AWS, using several of its services: mainly AWS SageMaker, API Gateway, and Lambda functions.
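To make that concrete, here is a minimal sketch of the SageMaker side using the sagemaker Python SDK's Hugging Face support. The model ID, framework versions, and instance type below are placeholders I picked for illustration, not choices fixed by this guide; swap in whatever fits your own model.

```python
# Minimal sketch: deploy a Hugging Face Hub model to a SageMaker real-time endpoint.
# Model ID, framework versions, and instance type are assumptions/placeholders.
import sagemaker
from sagemaker.huggingface import HuggingFaceModel

role = sagemaker.get_execution_role()  # IAM role SageMaker will assume

# Point SageMaker at a model hosted on the Hugging Face Hub
hub = {
    "HF_MODEL_ID": "distilbert-base-uncased-finetuned-sst-2-english",  # example model
    "HF_TASK": "text-classification",
}

huggingface_model = HuggingFaceModel(
    env=hub,
    role=role,
    transformers_version="4.26",  # pick a supported version combination
    pytorch_version="1.13",
    py_version="py39",
)

# Spins up a real-time inference endpoint (this is what Lambda will call later)
predictor = huggingface_model.deploy(
    initial_instance_count=1,
    instance_type="ml.m5.xlarge",
)

print(predictor.endpoint_name)
```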

AWS SageMaker is, like the name suggests, the "sage maker", i.e. the one creating sages; in our case, the one creating AI. API Gateway is the way for our API to get out there :), mainly used to expose our deployed model so other devs can play with it via API calls. The Lambda function will serve as the little policeman deciding which road each request should take: think of it as a control point where incoming calls get filtered, redirected, or formatted into what the AI model expects. The beautiful part is th…
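Here is a rough sketch of what that Lambda "policeman" could look like, assuming API Gateway is wired up as a proxy integration and the SageMaker endpoint name lives in an environment variable. The variable name and the payload shape are assumptions for illustration, not something fixed by this guide.

```python
# Rough sketch of the Lambda handler that sits between API Gateway and SageMaker.
# ENDPOINT_NAME and the {"inputs": ...} payload shape are assumptions.
import json
import os
import boto3

runtime = boto3.client("sagemaker-runtime")

def lambda_handler(event, context):
    # API Gateway (proxy integration) passes the request body as a string
    body = json.loads(event.get("body") or "{}")

    # Reformat the incoming payload into what the AI model expects
    payload = {"inputs": body.get("text", "")}

    # Forward the call to the deployed SageMaker endpoint
    response = runtime.invoke_endpoint(
        EndpointName=os.environ["ENDPOINT_NAME"],
        ContentType="application/json",
        Body=json.dumps(payload),
    )

    result = json.loads(response["Body"].read().decode("utf-8"))

    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(result),
    }
```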
