So far in our MLOps journey, we have built ML research and model-building pipelines and saved the resulting models in serialized form. Saving a model this way lets us load the serialized artifact back into any application that needs it.
We will now take the saved ML model and deploy it as an AWS Lambda function. Lambda deployment packages come in two flavors: container (Docker) images and .zip file archives. In our case, we will discuss the file-based option only briefly and focus on the Docker-based deployment. I prefer the Docker-based approach simply because it makes for an easier, smoother deployment process.
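Before turning to the deployment itself, a minimal sketch of what the Lambda side looks like: a handler that loads the pickled model once (on cold start) and serves predictions on each invocation. The `DummyModel` class, the `model.pkl` filename, and the `features` event key are all placeholder assumptions here; in practice the pickle file would be the artifact produced by the model-building pipeline and baked into the container image.

```python
import json
import pickle

# Stand-in for the serialized model; in practice model.pkl is the
# artifact produced by the model-building pipeline.
class DummyModel:
    def predict(self, rows):
        return [sum(r) for r in rows]

with open("model.pkl", "wb") as f:
    pickle.dump(DummyModel(), f)

_model = None  # cached across warm invocations of the same Lambda container

def handler(event, context):
    """Entry point AWS Lambda calls for each invocation."""
    global _model
    if _model is None:  # load once, on the first (cold-start) invocation
        with open("model.pkl", "rb") as f:
            _model = pickle.load(f)
    prediction = _model.predict([event["features"]])[0]
    return {"statusCode": 200, "body": json.dumps({"prediction": prediction})}

# Local smoke test, mimicking how Lambda would invoke the handler.
print(handler({"features": [1, 2, 3]}, None))
```

Caching the model in a module-level variable matters because Lambda reuses a warm container across invocations, so the (comparatively slow) deserialization cost is paid only once per container, not once per request.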