Containerizing Python Apps for Lambda

Pieter van Nordennen

AWS Lambda functions embody the serverless architectural pattern: you deploy individual functions, each encapsulating a single piece of operational logic, in a cloud environment. AWS Lambda enables this by providing a managed sandbox environment (with function code stored in S3) where developers upload their code and connect it to events or triggers.

Deploying Lambda functions using the default sandbox runtime isn’t perfect, however. There are certain limitations, like deployment size, library support, and supported application runtime versions. Thankfully, you can now deploy AWS Lambda using containers.

This technical article will explain the benefits of containerized Lambda applications and show you how to deploy images to AWS Lambda. We’ll conclude by pointing you to some helpful reference material as well as some next steps that will help you take things even further.

Let’s get started.

Advantages Over Traditional Lambda Applications

While traditional Lambda functions restrict your choice of application runtimes, containerized Lambdas avoid this limitation through the use of container technology.

To run Node.js 18 (released in April 2022), for example, developers can use the official node:18.1 image, which is already widely used in development and production. With non-containerized Lambdas, this would not be possible without writing a custom Lambda runtime, since the highest Node.js version the managed runtimes support is v14. Likewise, the managed runtimes only support Python up to 3.9, while the most current version of Python is 3.10.

In addition, while the traditional (zipped) deployment package is limited to a maximum of 50MB, container images can be up to 10GB, which (although not recommended) allows for much bigger deployment workloads. Of course, if you need to deploy containers of that size, it makes sense to slim them first using DockerSlim so that you avoid paying more to move large images between machines.

If your organization has already invested heavily in the container ecosystem, it is recommended that you switch to containerized Lambdas. That way, you can gain more fine-tuned control over packaging governance and security as well as better reusability when it comes to system components and existing base production images, which will come in handy for all kinds of workloads.

Next, we’ll show you how to set up a containerized Lambda application using Python. Then, we’ll walk you through the steps for deploying it to AWS.

Setting Up a Containerized Lambda Application in Python

Below, we will explain how to create a containerized Lambda application in Python. You could also use an alternative runtime like Node.js or Java.

First, there are a few prerequisites. If you have not already done so, you will have to:

  • Install Docker.
  • Install the AWS CLI and have an AWS account with permission to manage IAM, ECR, and Lambda.

When creating a containerized Lambda application in Python, you have a few options:

  • You can use the base Lambda image (which only supports certain versions of Python).
  • Or, you can use your own custom image. This method allows you to use a version of Python that isn’t supported by the base Lambda image, and it also enables you to add steps to the Dockerfile.

No matter which type of image you choose, you also have the option of either creating the containerized Lambda application from scratch or using the AWS Serverless Application Model (SAM), which brings some automation to the experience.
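For reference, with SAM a containerized function is described declaratively in a template; here is a minimal sketch (the function name and image URI below are placeholders for this walkthrough, not values SAM generates for you):

```yaml
# template.yaml — minimal sketch of a container-image function in SAM.
# ImageUri is a placeholder; point it at your own ECR repository.
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31

Resources:
  HelloWorldFunction:
    Type: AWS::Serverless::Function
    Properties:
      PackageType: Image
      ImageUri: <ACCOUNT_ID>.dkr.ecr.<REGION>.amazonaws.com/hello-world:latest
      Environment:
        Variables:
          APP_VERSION: "1.0.0"
```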

Using the Base Lambda Image

First, we’ll show you how to use the base Lambda image to set up a containerized Lambda application. Start by creating the following files in an empty folder:

_app.py_

import os
import json

def handler(event, context):
    version = os.environ['APP_VERSION']
    return {
        "statusCode": 200,
        "headers": {
            "Content-Type": "application/json"
        },
        "body": json.dumps({
            "Version": version
        })
    }
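Before involving Docker at all, you can sanity-check the handler logic with plain Python. This sketch re-creates the handler inline so it stands alone; in a real project you would import it from app.py instead:

```python
import json
import os

# Re-created inline from app.py so the snippet is self-contained;
# normally you would write: from app import handler
def handler(event, context):
    version = os.environ['APP_VERSION']
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"Version": version}),
    }

os.environ['APP_VERSION'] = '1.0.0'   # mirror the Dockerfile's ENV
response = handler({}, None)          # Lambda passes (event, context)
print(response["statusCode"], response["body"])
```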

Then, create an empty requirements.txt next to app.py (the Dockerfile copies it, and the sample handler needs nothing beyond the standard library) and a Dockerfile with the following contents:

Dockerfile

FROM public.ecr.aws/lambda/python:3.9

COPY app.py ${LAMBDA_TASK_ROOT}

COPY requirements.txt .
RUN pip3 install -r requirements.txt --target "${LAMBDA_TASK_ROOT}"

ENV APP_VERSION=1.0.0

CMD [ "app.handler" ]

This image uses the base Lambda image for Python 3.9 (which is currently the latest version supported by AWS). It copies the application code to the LAMBDA_TASK_ROOT env path and installs the Lambda dependencies. It also exposes a new environment variable, APP_VERSION, and then starts the application pointing to the handler function.
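The CMD value follows Lambda's module.function convention. The snippet below is only an illustration of that convention (not the runtime's actual code): a handler string splits into a module name and a function name.

```python
# Illustration of the "module.function" handler convention used in CMD.
# "app.handler" means: module app (app.py), function handler.
handler_str = "app.handler"
module_name, func_name = handler_str.rsplit(".", 1)
print(module_name, func_name)  # → app handler
```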

You can test this application by building the image and running the container:

❯ docker build -t hello-world:latest .
❯ docker run -p 9000:8080 hello-world:latest

Then, in a different shell terminal, you can use curl to send a request to the Lambda function:

❯ curl -XPOST "http://localhost:9000/2015-03-31/functions/function/invocations" -d '{}'

{"statusCode": 200, "headers": {"Content-Type": "application/json"}, "body": "{\"Version\": \"1.0.0\"}"}

Note: The URL path contains something that looks like a date (2015-03-31), but it’s actually the version identifier of the Lambda Invoke API, so it stays the same no matter when you deploy.
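If you prefer Python over curl, here is a minimal sketch of a local invocation helper using only the standard library. It assumes the container above is still running on port 9000, so the actual call is left commented out:

```python
import json
import urllib.request

# The "2015-03-31" path segment is the Lambda Invoke API version string.
INVOKE_URL = "http://localhost:9000/2015-03-31/functions/function/invocations"

def invoke_local(payload):
    # POST a JSON payload to the locally running Lambda container.
    req = urllib.request.Request(
        INVOKE_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

print(INVOKE_URL)
# With the container running: print(invoke_local({}))
```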

Using a Custom Image

You can also use a different version of Python (including the most recent version) by creating a custom image. To do that, you want to use the following Dockerfile:

Dockerfile

ARG FUNCTION_DIR="/function"

FROM python:3.10-buster AS build-image

RUN apt-get update && \
    apt-get install -y \
        g++ \
        make \
        cmake \
        unzip \
        libcurl4-openssl-dev

ARG FUNCTION_DIR
RUN mkdir -p ${FUNCTION_DIR}

COPY app.py ${FUNCTION_DIR}
COPY requirements.txt .
RUN pip install \
    -r requirements.txt \
    --target ${FUNCTION_DIR} \
    awslambdaric

FROM python:3.10-buster

ARG FUNCTION_DIR
WORKDIR ${FUNCTION_DIR}

COPY --from=build-image ${FUNCTION_DIR} ${FUNCTION_DIR}

ENV APP_VERSION=1.0.0

ENTRYPOINT [ "/usr/local/bin/python", "-m", "awslambdaric" ]
CMD [ "app.handler" ]

Here, we install the build dependencies on Debian Buster with Python 3.10, stage the function code and its dependencies in FUNCTION_DIR, and use a multi-stage build to copy only the final function into a clean image that runs under the AWS Lambda Runtime Interface Client (awslambdaric) for Python.

It’s a good idea to test this image locally before deploying it. To do so, download the AWS Lambda Runtime Interface Emulator (RIE) and mount it into the container as a volume:

❯ mkdir -p ~/.aws-lambda-rie && \
  curl -Lo ~/.aws-lambda-rie/aws-lambda-rie \
  https://github.com/aws/aws-lambda-runtime-interface-emulator/releases/latest/download/aws-lambda-rie && \
  chmod +x ~/.aws-lambda-rie/aws-lambda-rie

Then, build and run the custom image using the following commands:

❯ docker build -t hello-world:latest .

❯ docker run -d -v ~/.aws-lambda-rie:/aws-lambda -p 9000:8080 \
  --entrypoint /aws-lambda/aws-lambda-rie hello-world:latest \
  /usr/local/bin/python -m awslambdaric app.handler

Testing it should be a breeze:

❯ curl -XPOST "http://localhost:9000/2015-03-31/functions/function/invocations" -d '{}'

{"statusCode": 200, "headers": {"Content-Type": "application/json"}, "body": "{\"Version\": \"1.0.0\"}"}

Now we have the flexibility to bundle our own Python runtimes. Next, we’ll show you how to deploy this Lambda to AWS.

Deploying a Containerized Lambda Application in AWS

Now that you can build and run the Lambda application locally, you can deploy it to AWS. You will need to create an IAM user and incrementally attach IAM policies that grant the operations required for this deployment. By the end of this step, you will have both parts of your access key credentials: an access key ID and a secret access key.

Start by loading the AWS profile:

❯ aws configure
AWS Access Key ID [****************CTVL]:
AWS Secret Access Key [****************ElMF]:

Next, create a new Elastic Container Registry (ECR) repository to store the Lambda image. You will need to attach the following IAM policy, which allows listing and creating repositories, to the IAM user:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "ecr:CreateRepository",
        "ecr:DescribeImages",
        "ecr:DescribeRepositories"
      ],
      "Resource": "*"
    }
  ]
}

After attaching the policy, you can use the following command to create a new repository:

❯ aws ecr create-repository --repository-name hello-world \
  --image-tag-mutability IMMUTABLE \
  --image-scanning-configuration scanOnPush=true

You’ll also need to be able to log in and push images to that registry with Docker. To allow this, attach the following policy to the IAM user:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "ecr:GetDownloadUrlForLayer",
        "ecr:BatchGetImage",
        "ecr:CompleteLayerUpload",
        "ecr:UploadLayerPart",
        "ecr:InitiateLayerUpload",
        "ecr:BatchCheckLayerAvailability",
        "ecr:PutImage"
      ],
      "Resource": "arn:aws:ecr:<REGION>:<ACCOUNT_ID>:repository/hello-world"
    },
    {
      "Sid": "VisualEditor1",
      "Effect": "Allow",
      "Action": "ecr:GetAuthorizationToken",
      "Resource": "*"
    }
  ]
}

This will allow you to issue the following command that logs in to the registry:

❯ aws ecr get-login-password --region <REGION> | docker login \
  --username AWS --password-stdin <ACCOUNT_ID>.dkr.ecr.<REGION>.amazonaws.com
Login Succeeded

Just substitute the ACCOUNT_ID and REGION placeholders with your own account number and region. After that, you can push the image to the registry:

❯ docker tag hello-world:latest <ACCOUNT_ID>.dkr.ecr.<REGION>.amazonaws.com/hello-world:latest

❯ docker push <ACCOUNT_ID>.dkr.ecr.<REGION>.amazonaws.com/hello-world:latest

Then, you will be able to inspect the image listed in the ECR repository:

Figure 1 - The Image Listed in ECR

Once the image is available there, you can create a new containerized Lambda application. You can do that from the UI by clicking on the Create Function button, selecting the Container Image option, and filling in the form:

Figure 2 - Creating a New Containerized Lambda

You have the option to create an execution role for this function. This will simply create a new role with the following policies for uploading logs to CloudWatch:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "logs:CreateLogGroup",
      "Resource": "arn:aws:logs:<REGION>:<ACCOUNT_ID>:*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": [
        "arn:aws:logs:<REGION>:<ACCOUNT_ID>:log-group:/aws/lambda/hello-world:*"
      ]
    }
  ]
}

You can also perform this step from the command line using the AWS CLI:

❯ aws lambda create-function --region <REGION> --function-name hello-world \
  --package-type Image \
  --code ImageUri=<ACCOUNT_ID>.dkr.ecr.<REGION>.amazonaws.com/hello-world:latest \
  --role arn:aws:iam::<ACCOUNT_ID>:role/service-role/hello-world-role

You can test this function by selecting it, then clicking on the Test tab to run it with some demo parameters:

Figure 3 - Inspecting Test Results

This function is not yet connected to any event or trigger, so you may want to use the UI to configure those as well.

Final Thoughts

In this article, we outlined the benefits of containerized Lambdas and showed you how to deploy a Python Lambda application in AWS. Containerized Lambda applications offer a more flexible approach and give you more control over packaging governance and security. In addition, developers can reuse standard company images that contain production-ready configurations and defaults without being held back by the restrictions of non-containerized Lambda runtimes.

However, adopting containers for production workloads like Lambda calls for different security considerations. In practice, container images often carry unused packages and files that can conceal exploitable vulnerabilities. With the help of container slimming and security tools like DockerSlim, this risk can be significantly reduced.

Embarking on a New Journey

Farewell, Slim — Transitioning to a new and larger mission!

We're excited to share some big news from Slim.AI. We're taking a bold new direction, focusing all our energy on software supply chain security, now under our new name root.io. To meet this opportunity head-on, we’re building a solution focused on transparency, trust, and collaboration between software producers and consumers.

When we started Slim.AI, our goal was to help developers make secure containers. But as we dug deeper with our early adopters and key customers, we realized a bigger challenge exists within software supply chain security ​​— namely, fostering collaboration and transparency between software producers and consumers. The positive feedback and strong demand we've seen from our early customers made it crystal clear: This is where we need to focus.

This new opportunity demands a company and brand that meet the moment. To that end, we’re momentarily stepping back into stealth mode, only to emerge with a vibrant new identity, and a groundbreaking product very soon at root.io. Over the next few months, we'll be laser-focused on working with design partners and building up the product, making sure we're right on the mark with what our customers need.

Stay informed and up-to-date with our latest developments at root.io. Discover the details about the end of life for Slim services, effective March 31, 2024, by clicking here.
