MLOps Basics Part 4

Brian Lipp
6 min read · Jul 26, 2021

So far in our MLOps journey, we have created ML research and model-building pipelines and saved the resulting models in serialized form. Saving models this way allows us to take a serialized ML model and load it into an application.

We will now take the saved ML model and deploy it to an AWS Lambda. Lambdas come in two flavors, Docker-based and file-based. In our case, we will discuss the file-based option only briefly and focus on the Docker-based deployment. I prefer the Docker-based approach simply for its ease and smooth deployment process.

Containers

Docker is a virtual machine-like technology that lets an engineer capture a snapshot image of an environment and run it as a container. Containers are stateless, or ephemeral: every time you restart a container, any data saved inside it is lost. Docker images are created from Dockerfiles, and Docker containers are running instances of Docker images. We will build our images and then store them in AWS ECR.

To install Docker, follow this guide.

Useful Docker commands

Here are a few commands I find I use often when working with Docker.

  • Listing running containers
docker ps
docker ps -a #include stopped containers
  • Remove unused Docker data (stopped containers, dangling images, etc.) from your computer
docker system prune
  • Turn your Dockerfile into a Docker image
docker build -t <tag> <path to the build context containing the Dockerfile>
  • Running a Docker container from an image
docker run -it <tag> <command to run in the container>
  • Executing commands on a running Docker container
docker exec -it <id of running container> bash

Dockerfile

This Dockerfile expects the pickled model you created in the previous article to be in the model folder, named model.pkl.

FROM amazon/aws-lambda-python
COPY app/lambda_function.py ./
COPY model ./
COPY Pipfile ./
COPY startup.sh ./
RUN yum install -y gcc \
&& pip3 install pipenv \
&& pipenv install \
&& pipenv run pip3 freeze > requirements.txt \
&& pip3 install -r requirements.txt --no-cache-dir
CMD ["lambda_function.handler"]

Pipfile

[[source]]
url = "https://pypi.org/simple"
verify_ssl = true
name = "pypi"

[packages]
awslambdaric = "*"
pylint = "*"
scikit-learn = "==0.24.2"
pickle5 = "*"

[dev-packages]
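If you want to sanity-check the pinned dependencies before building the image, you can resolve and import them locally. A quick sketch; note that scikit-learn installs as the sklearn module:

pipenv install
pipenv run python -c "import sklearn; print(sklearn.__version__)"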

app/lambda_function.py

import pickle5 as pickle
import numpy as np

def handler(event, _):
    """
    The main entry function.
    :param event: Lambda event; the feature vector is expected under "data"
    :param _: Lambda context (unused)
    :return: dict containing the model's prediction
    """
    with open("model.pkl", "rb") as model_file:
        loaded_model = pickle.load(model_file)
    # payload values arrive as strings, so cast them to floats
    X = np.array(event["data"], dtype=float)
    prediction = loaded_model.predict([X])
    message = {
        'prediction': float(prediction[0])
    }
    return message
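Before any Docker work, you can smoke-test the handler locally. A minimal sketch, run from the model folder so model.pkl resolves; the sample values here are arbitrary:

cd model
pipenv run python -c "import sys; sys.path.append('../app'); from lambda_function import handler; print(handler({'data': ['4.386e+01', '7.3136e+06', '2.599e+00', '-9.932e-01']}, None))"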

AWS Lambda

The heart of our deployment will be AWS's serverless technology, Lambda. The term comes from the lambda calculus and, generally speaking, can be understood as an ephemeral, anonymous unit of code. When triggered, AWS Lambda will take our image and run a container that makes a prediction using our ML model. A Lambda allows us to run code without forcing us to manage any infrastructure.

Make sure you have an AWS account; if you do not have one, follow this guide.

AWS CLI

We will be using the AWS CLI to interact with AWS. The CLI allows us to make changes to our AWS account without using the GUI. In a future article, we will automate this process.

AWS Documentation

curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install

Note: the CLI requires you to set up your credentials file.
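The quickest way to create it is aws configure, which prompts for your access key, secret key, and default region:

aws configure
# AWS Access Key ID [None]: ...
# AWS Secret Access Key [None]: ...
# Default region name [None]: us-east-2
# Default output format [None]: json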

Creating the execution role

When our lambda is triggered, it runs under an AWS IAM role, which grants the lambda its access to other AWS services. The role will need a policy document attached to it; in our case, this will be minimalistic.

Here we create our role with a simple trust policy that lets Lambda assume it:

aws iam create-role --role-name lambda-ex --assume-role-policy-document '{"Version": "2012-10-17","Statement": [{ "Effect": "Allow", "Principal": {"Service": "lambda.amazonaws.com"}, "Action": "sts:AssumeRole"}]}'

The output should be very similar to the following:

{
    "Role": {
        "CreateDate": "..",
        "RoleName": "lambda-ex",
        "Arn": "...",
        "Path": "/",
        "RoleId": "...",
        "AssumeRolePolicyDocument": {
            "Version": "2012-10-17",
            "Statement": [
                {
                    "Action": "sts:AssumeRole",
                    "Effect": "Allow",
                    "Principal": {
                        "Service": "lambda.amazonaws.com"
                    }
                }
            ]
        }
    }
}

Next, we will attach the basic AWS managed policy to the role.

aws iam attach-role-policy --role-name lambda-ex --policy-arn arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole
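You can confirm the policy is attached before moving on:

aws iam list-attached-role-policies --role-name lambda-ex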

AWS ECR

Once you have built your Docker image, you must push it to a registry for central access. We will use AWS's paid registry service, Elastic Container Registry (ECR).

Create a new ECR repository to store our images:

aws ecr create-repository --repository-name my_tests/lambda-mlops-model
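The response includes a repositoryUri field; that is the <URI from ECR> value used in the commands below. You can also look it up later:

aws ecr describe-repositories --repository-names my_tests/lambda-mlops-model \
    --query 'repositories[0].repositoryUri' --output text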

You must log in to AWS ECR with your AWS credentials whenever you want to push an image to the registry.

aws ecr get-login-password --region us-east-2 | docker login --username AWS --password-stdin <URI from ECR>

Example

aws ecr get-login-password --region us-east-2 | docker login --username AWS --password-stdin 112437402463.dkr.ecr.us-east-2.amazonaws.com

The time has come: we will now build our Docker image!

docker build -t my_tests/lambda-mlops-model .

If you want to poke around inside your image, you can run a container with a Bash entrypoint, or exec into one that is already running:

docker run -it --entrypoint bash my_tests/lambda-mlops-model
docker ps
docker exec -it <id of running container> bash
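The AWS Lambda base images also bundle the Runtime Interface Emulator, so you can invoke the handler over HTTP before pushing anything. A sketch; the 9000 port mapping and sample values are arbitrary:

docker run -p 9000:8080 my_tests/lambda-mlops-model
# in another terminal:
curl -XPOST "http://localhost:9000/2015-03-31/functions/function/invocations" \
    -d '{"data": ["4.386e+01", "7.3136e+06", "2.599e+00", "-9.932e-01"]}'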

Our next task is to send our image to ECR. Here we tag the image with its “official” ECR name and then push it. Keep in mind your login may time out, so you might have to log in to ECR again.

docker tag my_tests/lambda-mlops-model:latest <URI from ECR>:latest
docker push <URI from ECR>:latest

Example:

docker tag my_tests/lambda-mlops-model:latest 112437402463.dkr.ecr.us-east-2.amazonaws.com/my_tests/lambda-mlops-model:latest
docker push 112437402463.dkr.ecr.us-east-2.amazonaws.com/my_tests/lambda-mlops-model:latest

Create lambda

We will now create the Lambda function in AWS, passing the role ARN and the ECR image URI. Container-image functions use --package-type Image; the runtime comes from the image itself.

aws lambda create-function --function-name mlops-model \
    --package-type Image \
    --code ImageUri=<URI from ECR>:latest \
    --role <role ARN>

Example:

aws lambda create-function --function-name mlops-model \
    --package-type Image \
    --code ImageUri=112437402463.dkr.ecr.us-east-2.amazonaws.com/my_tests/lambda-mlops-model:latest \
    --role arn:aws:iam::112437402463:role/lambda-ex

Invoke lambda

Now that we have a deployed lambda, we can test it by invoking it with test data. Note that AWS CLI v2 expects a base64 payload by default, so we pass --cli-binary-format raw-in-base64-out to send raw JSON.

aws lambda invoke --function-name mlops-model \
    --cli-binary-format raw-in-base64-out \
    --payload '{
        "data": [
            "4.38600006e+01",
            "7.31360000e+06",
            "2.59917778e+00",
            "-9.93224908e-01"
        ]
    }' output.txt
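The function's response, the prediction dict returned by our handler, is written to output.txt:

cat output.txt
# {"prediction": ...}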

Lambda without Docker

Although we focused on AWS Lambda deployment with Docker, let's cover some key points about deploying our model without Docker.

  • Use Amazon Linux

It might seem like a hassle to create your Python Lambda zips inside a Docker amazonlinux container, but it saves you a ton of effort down the road. Why? AWS Lambda runs on Amazon Linux, and C libraries can be missing or mismatched if you create your zip file on Ubuntu, Windows, or even macOS.

Here we run the latest version of the official amazonlinux container from Docker Hub. I'm also mounting my home directory as a volume; replace my home directory with your relevant directory. The mounted volume will allow us to create the zip file in the container and access it easily outside Docker.

docker run -d -v '/home/blipp':'/srv' amazonlinux tail -f  /dev/null

Now we will find the ID of the running container and Bash into it:

docker ps
docker exec -it <id> bash

Now that we are in the container, we will install all the needed libraries and create the needed folders. We will use pipenv for dependency resolution but, in the end, install the libraries into a specific folder using pip.

cd /srv/....

yum install -y gcc gcc-c++ make git patch openssl-devel zlib-devel readline-devel sqlite-devel bzip2-devel libffi-devel zip
curl https://pyenv.run | bash
export PYENV_ROOT="$HOME/.pyenv"
export PATH="$PYENV_ROOT/bin:$PATH"
eval "$(pyenv init --path)"
pyenv install 3.8.8
pyenv global 3.8.8
pip install pipenv
pipenv install
pipenv run pip freeze > requirements.txt
mkdir lambda_deployment
cp -r app lambda_deployment/
cp -r model lambda_deployment/

Now we are ready to create our zip file. Run the following from inside the lambda_deployment folder:

cd lambda_deployment
pip3 install -t ./package -r ../requirements.txt
cd package
zip -r9 ../../lambda_deployment_function.zip .
cd ..
rm -rf package/
zip -gr ../lambda_deployment_function.zip *
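With the zip in hand, the function can be created straight from it. A sketch, assuming lambda_function.py sits at the zip root and reusing the role from earlier (the function name here is just illustrative):

aws lambda create-function --function-name mlops-model-zip \
    --runtime python3.8 \
    --handler lambda_function.handler \
    --zip-file fileb://lambda_deployment_function.zip \
    --role arn:aws:iam::112437402463:role/lambda-ex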
  • Include your shared dependencies in a layer.

If you have common Python libraries used across Lambdas, packaging them in an AWS Lambda layer can remove some duplication and complexity.

curl https://pyenv.run | bash
export PYENV_ROOT="$HOME/.pyenv"
export PATH="$PYENV_ROOT/bin:$PATH"
eval "$(pyenv init --path)"
pyenv install 3.8.8
pyenv global 3.8.8
pip install pipenv
pipenv install
pipenv run pip freeze > requirements.txt
mkdir -p layer_deployment/python/lib/python3.8/site-packages
pip3 install -t layer_deployment/python/lib/python3.8/site-packages -r requirements.txt
cd layer_deployment
zip -r ../layer_deployment.zip *
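The layer zip can then be published and referenced from your functions. A sketch; the layer name is illustrative:

aws lambda publish-layer-version --layer-name mlops-deps \
    --zip-file fileb://layer_deployment.zip \
    --compatible-runtimes python3.8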

We have now completed our MLOps pipeline, ending with taking our ML model into “production”. In my next article, I will cover CI/CD using GitLab and address any lingering topics.
