Containerizing Python Apps for Lambda

A tutorial on deploying AWS Lambda using containers, Python edition.
Jun 09, 2022


AWS Lambda functions embody the serverless architectural pattern, wherein you deploy small, single-purpose functions that each encapsulate one piece of operational logic in a cloud environment. AWS Lambda enables this by providing a managed sandbox environment (backed by S3) where developers can upload their function code and connect those functions with events or triggers.

Deploying Lambda functions using the default sandbox runtime isn’t perfect, however. There are certain limitations, like deployment size, library support, and supported application runtime versions. Thankfully, you can now deploy AWS Lambda using containers.

This technical article will explain the benefits of containerized Lambda applications and show you how to deploy images to AWS Lambda. We’ll conclude by pointing you to some helpful reference material as well as some next steps that will help you take things even further.

Let’s get started.

Advantages Over Traditional Lambda Applications

While traditional Lambda functions restrict your choice of application runtimes, containerized Lambdas avoid this limitation through the use of container technology.

In order to support Node.js 18 (released in April 2022), for example, developers can use the official node:18.1 image, which is already heavily used in development and production. With non-containerized Lambdas, this would not be possible without writing a custom Lambda runtime, since the highest supported version is Node.js 14. Likewise, the managed runtimes only support Python 3.9 and below, while the most current version of Python is 3.10.

In addition, while the deployment upload size was traditionally restricted to a maximum of 50MB, container images allow for sizes up to 10GB, which (although not recommended) allows for bigger deployment workloads. Of course, if you have containers that size that you need to deploy, it would make sense to slim them first using DockerSlim so that you can avoid paying more to move larger images across machines.

If your organization has already invested heavily in the container ecosystem, it is recommended that you switch to containerized Lambdas. That way, you can gain more fine-tuned control over packaging governance and security as well as better reusability when it comes to system components and existing base production images, which will come in handy for all kinds of workloads.

Next, we’ll show you how to set up a containerized Lambda application using Python. Then, we’ll walk you through the steps for deploying it to AWS.

Setting Up a Containerized Lambda Application in Python

Below, we will explain how to create a containerized Lambda application in Python. You could also use an alternative runtime like Node.js or Java.

First, there are a few prerequisites. If you have not already done so, you will have to:

  • Install Docker.
  • Install and configure the AWS CLI.
  • Have an AWS account with permission to use ECR and Lambda.

When creating a containerized Lambda application in Python, you have a few options:

  • You can use the base Lambda image (which only supports certain versions of Python).
  • Or, you can use your own custom image. This method allows you to use a version of Python that isn’t supported by the base Lambda image, and it also enables you to add steps to the Dockerfile.

No matter which type of image you choose, you also have the option of either creating the containerized Lambda application from scratch or using the AWS Serverless Application Model (SAM), which brings some automation to the experience.

Using the Base Lambda Image

First, we’ll show you how to use the base Lambda image to set up a containerized Lambda application. Start by creating the following files in an empty folder, beginning with app.py:

import os
import json

def handler(event, context):
    version = os.environ['APP_VERSION']
    return {
        "statusCode": 200,
        "headers": {
            "Content-Type": "application/json"
        },
        "body": json.dumps({
            "Version ": version
        })
    }

Also add an empty requirements.txt file; the Dockerfile below installs whatever you list there.
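Before containerizing anything, you can sanity-check the handler logic with plain Python. The snippet below inlines a copy of the handler so it runs on its own; in a real project you would import it from app.py instead:

```python
import json
import os

# A self-contained copy of the handler from app.py, inlined here so the
# snippet runs on its own; normally you would `from app import handler`.
def handler(event, context):
    version = os.environ['APP_VERSION']
    return {
        "statusCode": 200,
        "headers": {
            "Content-Type": "application/json"
        },
        "body": json.dumps({
            "Version ": version
        })
    }

# The container sets APP_VERSION via the Dockerfile; set it manually here.
os.environ['APP_VERSION'] = "1.0.0"
response = handler({}, None)
print(response["statusCode"])        # 200
print(json.loads(response["body"]))  # {'Version ': '1.0.0'}
```

This is the same payload you will see later when invoking the container with curl.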

Then, create a Dockerfile with the following contents:

FROM public.ecr.aws/lambda/python:3.9

COPY requirements.txt ${LAMBDA_TASK_ROOT}
RUN pip3 install -r requirements.txt --target "${LAMBDA_TASK_ROOT}"

COPY app.py ${LAMBDA_TASK_ROOT}

ENV APP_VERSION=1.0.0

CMD [ "app.handler" ]

This image uses the base Lambda image for Python 3.9 (which is currently the latest version supported by AWS). It copies the application code to the LAMBDA_TASK_ROOT env path and installs the Lambda dependencies. It also exposes a new environment variable, APP_VERSION, and then starts the application pointing to the handler function.

You can test this application by building the image and running the container:

docker build -t hello-world:latest .
docker run -p 9000:8080 hello-world:latest

Then, in a different shell terminal, you can use curl to send a request to the Lambda function:

curl -XPOST "http://localhost:9000/2015-03-31/functions/function/invocations" -d '{}'

{"statusCode": 200, "headers": {"Content-Type": "application/json"}, "body": "{\"Version \": \"1.0.0\"}"}

Note: The URL path contains something that looks like a date (2015-03-31), but it’s actually the version of the Lambda Invoke API.
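The same local invocation can also be scripted in Python instead of curl. This is a minimal sketch assuming the container from the previous step is listening on localhost:9000; the helper names are ours, not part of any AWS SDK:

```python
import json
import urllib.request

# The RIE and the Lambda service expose the standard Invoke API path;
# "2015-03-31" is the API version, not a date that needs updating.
INVOKE_PATH = "/2015-03-31/functions/function/invocations"

def build_invoke_url(host="localhost", port=9000):
    """Return the invocation URL for a locally running Lambda container."""
    return f"http://{host}:{port}{INVOKE_PATH}"

def invoke_local(event, host="localhost", port=9000):
    """POST an event to the local container and decode the JSON reply."""
    req = urllib.request.Request(
        build_invoke_url(host, port),
        data=json.dumps(event).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

print(build_invoke_url())
# http://localhost:9000/2015-03-31/functions/function/invocations
```

Calling invoke_local({}) while the container is running returns the same JSON payload that curl prints.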

Using a Custom Image

You can also use a different version of Python (including the most recent version) by creating a custom image. To do that, use the following Dockerfile:

ARG FUNCTION_DIR="/function"

FROM python:3.10-buster as build-image

RUN apt-get update && \
    apt-get install -y \
    g++ \
    make \
    cmake \
    unzip \
    libcurl4-openssl-dev

ARG FUNCTION_DIR
RUN mkdir -p ${FUNCTION_DIR}

COPY app.py ${FUNCTION_DIR}

COPY requirements.txt .
RUN pip install \
    -r requirements.txt \
    --target ${FUNCTION_DIR} \
    awslambdaric

FROM python:3.10-buster

ARG FUNCTION_DIR
WORKDIR ${FUNCTION_DIR}

COPY --from=build-image ${FUNCTION_DIR} ${FUNCTION_DIR}

ENV APP_VERSION=1.0.0

ENTRYPOINT [ "/usr/local/bin/python", "-m", "awslambdaric" ]
CMD [ "app.handler" ]

Here, we use a multi-stage build: the first stage installs the build tools needed to compile the dependencies into FUNCTION_DIR on a Python 3.10 Debian Buster image, and the second stage copies only the final function code and runs it under awslambdaric, the AWS Lambda Runtime Interface Client for Python.

It’s a good idea to test this locally before deploying the image. To do so, download the AWS Lambda Runtime Interface Emulator (RIE) and mount it into the container as a volume:

mkdir -p ~/.aws-lambda-rie && \
    curl -Lo ~/.aws-lambda-rie/aws-lambda-rie \
    https://github.com/aws/aws-lambda-runtime-interface-emulator/releases/latest/download/aws-lambda-rie && \
    chmod +x ~/.aws-lambda-rie/aws-lambda-rie

Then, build and run the custom image using the following commands:

docker build -t hello-world:latest .
docker run -d -v ~/.aws-lambda-rie:/aws-lambda -p 9000:8080 \
    --entrypoint /aws-lambda/aws-lambda-rie hello-world:latest \
    /usr/local/bin/python -m awslambdaric app.handler

Testing it should be a breeze:

curl -XPOST "http://localhost:9000/2015-03-31/functions/function/invocations" -d '{}'

{"statusCode": 200, "headers": {"Content-Type": "application/json"}, "body": "{\"Version \": \"1.0.0\"}"}

Now we have the flexibility to bundle our own Python runtimes. Next, we’ll show you how to deploy this Lambda to AWS.

Deploying a Containerized Lambda Application in AWS

Now that you can build and run the Lambda application locally, you can deploy it in AWS. You will need to create an IAM user for deployments and progressively attach the IAM policies that match the required operations. By the end of this step, you will have both parts of your access key credentials: an access key ID and a secret access key.

Start by loading the AWS profile:

❯ aws configure
AWS Access Key ID [****************CTVL]:
AWS Secret Access Key [****************ElMF]:

Next, you want to create a new Elastic Container Registry (ECR) repository to store the Lambda image. You will need to assign the following IAM policy, which lets you list and create repositories, to your user:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "ecr:CreateRepository",
        "ecr:DescribeRepositories"
      ],
      "Resource": "*"
    }
  ]
}

After updating, you can use the following command to create a new repository:

❯ aws ecr create-repository --repository-name hello-world --image-tag-mutability IMMUTABLE

You’ll need to be able to log in and push images to the Docker repository using the credentials that you get from that registry. Now, assign the following policy to the account:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "ecr:CompleteLayerUpload",
                "ecr:UploadLayerPart",
                "ecr:InitiateLayerUpload",
                "ecr:BatchCheckLayerAvailability",
                "ecr:PutImage"
            ],
            "Resource": "arn:aws:ecr:<REGION>:<ACCOUNT_ID>:repository/hello-world"
        },
        {
            "Sid": "VisualEditor1",
            "Effect": "Allow",
            "Action": "ecr:GetAuthorizationToken",
            "Resource": "*"
        }
    ]
}

This will allow you to issue the following command that logs in to the registry:

❯ aws ecr get-login-password --region eu-west-1 | docker login \
    --username AWS --password-stdin <ACCOUNT_ID>.dkr.ecr.eu-west-1.amazonaws.com
Login Succeeded

You just need to substitute the ACCOUNT_ID and REGION placeholders with your own values. After that, you can tag and push the image to the registry:

❯ docker tag hello-world:latest <ACCOUNT_ID>.dkr.ecr.<REGION>.amazonaws.com/hello-world:latest

❯ docker push <ACCOUNT_ID>.dkr.ecr.<REGION>.amazonaws.com/hello-world:latest
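Because the registry hostname and image name follow a fixed pattern, it can be handy to build the full image URI programmatically in deployment scripts. A minimal sketch (the helper name is ours, not part of any AWS SDK):

```python
def ecr_image_uri(account_id, region, repository, tag="latest"):
    """Build the fully qualified ECR image URI used by docker tag/push."""
    return f"{account_id}.dkr.ecr.{region}.amazonaws.com/{repository}:{tag}"

print(ecr_image_uri("123456789012", "eu-west-1", "hello-world"))
# 123456789012.dkr.ecr.eu-west-1.amazonaws.com/hello-world:latest
```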

Then, you will be able to inspect the image listed in the ECR repository:

Figure 1 - The Image Listed in ECR

Once the image is available there, you can create a new containerized Lambda application. You can do that from the UI by clicking on the Create Function button, selecting the Container Image option, and filling in the form:

Figure 2 - Creating a New Containerized Lambda

You have the option to create an execution role for this function. This will simply create a new role with the following policies for uploading logs to CloudWatch:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "logs:CreateLogGroup",
            "Resource": "arn:aws:logs:<REGION>:<ACCOUNT_ID>:*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogStream",
                "logs:PutLogEvents"
            ],
            "Resource": [
                "arn:aws:logs:<REGION>:<ACCOUNT_ID>:log-group:/aws/lambda/hello-world:*"
            ]
        }
    ]
}


If you prefer to create the function from the command line, you can use the AWS CLI (substitute your execution role’s ARN):

❯ aws lambda create-function --region <REGION> --function-name hello-world \
    --package-type Image \
    --code ImageUri=<ACCOUNT_ID>.dkr.ecr.<REGION>.amazonaws.com/hello-world:latest \
    --role arn:aws:iam::<ACCOUNT_ID>:role/<EXECUTION_ROLE>
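If you later script deployments with boto3 rather than the CLI, the same flags map onto the lambda client's create_function call. The sketch below only assembles the keyword arguments (all names and ARNs are placeholders), so nothing is sent to AWS:

```python
def create_function_args(function_name, image_uri, role_arn):
    """Build the kwargs for boto3's lambda client create_function call,
    mirroring the CLI flags used above."""
    return {
        "FunctionName": function_name,
        "PackageType": "Image",          # container-image deployment
        "Code": {"ImageUri": image_uri},
        "Role": role_arn,
    }

args = create_function_args(
    "hello-world",
    "123456789012.dkr.ecr.eu-west-1.amazonaws.com/hello-world:latest",
    "arn:aws:iam::123456789012:role/hello-world-role",  # placeholder ARN
)
# boto3.client("lambda").create_function(**args) would then create the function.
```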

You can test this function by selecting it, then clicking on the Test tab to run it with some demo parameters:

Figure 3 - Inspecting Test Results

This function is not connected to any event or trigger, so you might want to use the UI to add configuration parameters for those as well.

Final Thoughts

In this article, we outlined the benefits of containerized Lambdas and showed you how to deploy a Python Lambda application in AWS. Containerized Lambda applications offer a more flexible approach and give you more control over packaging governance and security. In addition, developers can reuse standard company images that contain production-ready configurations and defaults without being held back by the restrictions of non-containerized Lambda runtimes.

However, adopting containers for production environments like Lambda calls for different security considerations. In practice, container images can conceal vulnerabilities that are just waiting to be exploited. With the help of container slimming and security tools like DockerSlim, however, this risk can be substantially mitigated.

Feel free to explore Slim.AI’s innovative container security solutions, including the Slim SaaS Early Access program that lets you analyze thousands of public container images or scan your own using their online panel.
