Getting Started with Multi-Container Apps
Up your container game with Docker Compose
You can find images to use in your app and check important details about them via the Slim Developer Portal.
After setting up your first single container, you’ll likely see anything and everything as a candidate for containerization. That’s usually a good thought, but given that a container is isolated from the outside world by default, how can you structure a solution that will allow you to have several components working together? You could just put everything inside one container, but that would violate the single-responsibility principle, among other ideals.
Another option would be to have each component live in its own container and then fit those containers together like building blocks to form your solution. One of the many advantages of a multi-container, “microservices” approach is that it gives you a single, centralized configuration that controls all of your components and can be started and stopped with a single command. In this article, we will show you how to build a simple solution with a data store, a data API layer, and a client that will have access to the data through the API.
# A Sample Multi-Container Application
Let’s start by defining the scope of each component. One of the most common templates for web applications includes a data storage solution; in this case, a relational database like PostgreSQL is a good choice. Mozilla Kinto, which offers a simple, extensible data model that’s ready to use with PostgreSQL, can serve as a backend. Finally, depending on your use case, you can build a frontend application like a CMS, a static site, or a low-code tool like n8n, which lets you use Kinto’s REST API or a host of other services. The general architecture of that last option looks roughly like this:
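                 frontend network                 backend network
+--------------+         +----------------+         +----------------------+
|  API client  | ------> |    data API    | ------> |     data storage     |
|    (n8n)     |         |    (Kinto)     |         |     (PostgreSQL)     |
+--------------+         +----------------+         +----------------------+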
As you can see, the API client container doesn't have direct access to the data storage container, and all of the communication is done through the data API container, which has access to both the backend and frontend networks.
But before we get into the details, you need to understand the basics of networking in multi-container solutions. The Docker documentation sums it up in a single rule:

> “If two containers are on the same network, they can talk to each other. If they aren’t, they can’t.” —Docker Documentation
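You can see this rule in action with the plain Docker CLI before moving on. A quick sketch (the container and network names here are just examples, not part of the tutorial's setup):

$ docker network create demo_net
$ docker run -d --name web --network demo_net alpine sleep 300
$ docker run --rm --network demo_net alpine ping -c 1 web    # succeeds: same network
$ docker network create other_net
$ docker run --rm --network other_net alpine ping -c 1 web   # fails: different network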
# Docker Compose
You can start building multi-container applications directly by creating the resources you need with the Docker CLI. For this example, we’ll use Docker Compose, a small Python tool that runs on top of the Docker Engine and provides a simple yet powerful way to define resources and services with only a YAML text file. First, you need to install Docker Compose. Windows and macOS users can install Docker Desktop or follow the alternative installation options; otherwise, you can install it from your terminal using pip or the shell script for the current stable release:
$ sudo curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
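The downloaded binary also needs to be executable, and you can then confirm the installation (the version output will vary):

$ sudo chmod +x /usr/local/bin/docker-compose
$ docker-compose --version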
Then, create a docker-compose.yml file with the following contents:
version: '3'
networks:
  backend:
    name: backend_NW
  frontend:
    name: frontend_NW
services:
  db:
    image: postgres
    networks:
      - backend
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
This short file sets the syntax version for docker-compose, creates two standard networks (`backend` and `frontend`), and then defines a container named `db`, which attaches the latest PostgreSQL image to the `backend` network. Notice that the default port for PostgreSQL (`5432`) is not exposed to the host, so you will not have direct access to this container. Also, the environment variables `POSTGRES_USER` and `POSTGRES_PASSWORD` are hardcoded here; in practice, you should use an environment variable file or a secrets manager to keep them secure. For this example, we’ll keep it simple and launch this configuration using Docker Compose on the command line:
$ docker-compose up -d
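At this point you can check that the service and the networks were created. A quick sanity check (the names come from the file above):

$ docker-compose ps              # the db service should be "Up"
$ docker network ls | grep _NW   # backend_NW and frontend_NW should be listed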
It’s not very useful at this point, so you can just verify that everything is OK and then shut everything down as follows:
$ docker-compose down
# Data API
To make the data storage useful, let’s connect the data API as a new container in the same `docker-compose.yml` file by adding the following `services` entries (extracted from Kinto’s documentation):
  cache:
    networks:
      - backend
    image: library/memcached
  api:
    image: kinto/kinto-server
    links:
      - db
      - cache
    ports:
      - "8888:8888"
    networks:
      - backend
      - frontend
    environment:
      KINTO_CACHE_BACKEND: kinto.core.cache.memcached
      KINTO_CACHE_HOSTS: cache:11211 cache:11212
      KINTO_STORAGE_BACKEND: kinto.core.storage.postgresql
      KINTO_STORAGE_URL: postgresql://postgres:postgres@db/postgres
      KINTO_PERMISSION_BACKEND: kinto.core.permission.postgresql
      KINTO_PERMISSION_URL: postgresql://postgres:postgres@db/postgres
The first container is an instance of Memcached that is wired to the `backend` network. The `api` container uses the latest Kinto image and links it to the `db` and `cache` containers. Notice that the `api` container is wired to both the `backend` and `frontend` networks and that it exposes port `8888` to the host.
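Once the stack is running again, a quick way to confirm that the API can reach both the database and the cache is Kinto’s heartbeat endpoint, a sketch assuming the default port mapping above (the response format may vary by version):

$ curl http://localhost:8888/v1/__heartbeat__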
# Testing Data
To make sure that there is connectivity between the data API and the data storage, we’ll launch the containers again and create a user for Kinto. If you like GUIs, you can use Postman. If you prefer the terminal, you can use curl or any other HTTP client. In this case, we’ll use httpie.
$ echo '{"data": {"password": "s3cr3t"}}' | http PUT http://localhost:8888/v1/accounts/bob -v
Here, we created a user named `bob` with a toy password. The response from the data API confirms that the containers can talk to each other.
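If you’d rather use curl, an equivalent request looks like this:

$ curl -X PUT http://localhost:8888/v1/accounts/bob \
    -H "Content-Type: application/json" \
    -d '{"data": {"password": "s3cr3t"}}'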
# Consuming the Data
Finally, we will set up a new container wired to the `frontend` network that uses the latest n8n image and exposes port `5678` for an end-user interface. We’ll stop the containers from the command line with `docker-compose down`, and then we’ll add the following lines to the `docker-compose.yml` file:
  n8n:
    image: n8nio/n8n
    networks:
      - frontend
    links:
      - api
    ports:
      - "5678:5678"
Once you launch the containers, you will be able to access the n8n web interface through `http://localhost:5678` and create a simple workflow. Since the containers that we set up are stateless (we didn’t define any volumes in the `docker-compose.yml` file), we will have to create the Kinto account again like we did before.
# A Simple Data Workflow
n8n is a low-code visual tool with many integrations. For the sake of simplicity, we’ll use its generic REST client. First, we’ll define a `POST` to Kinto’s API to insert a record. Then, if it’s successful, we’ll `GET` all of the stored data.
Both operations take advantage of the docker-compose network configuration. Notice that the host defined is the name of the container (`api`) with the exposed port. You can configure all important aspects of the network through the `docker-compose.yml` file, including the use of a custom network driver or any other special requirements.
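For example, here is a hedged sketch of a network definition with an explicit driver and a custom subnet (the values are illustrative, not part of this tutorial’s setup):

networks:
  backend:
    name: backend_NW
    driver: bridge          # the default driver, stated explicitly
    ipam:
      config:
        - subnet: 172.28.0.0/16   # pin the network to a specific address range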
Note: You'll have to create Kinto credentials for n8n's Basic Auth to work. Use the username ("bob") and password ("s3cr3t") that you used to create the Kinto account above.
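Since the data API is published on port 8888, you can also exercise the same endpoints directly from your terminal. A sketch using httpie (the `tasks` collection name matches the workflow below):

$ http -a bob:s3cr3t POST http://localhost:8888/v1/buckets/default/collections/tasks/records \
    data:='{"description": "Check your containers with Slim.ai", "status": "todo"}'
$ http -a bob:s3cr3t GET http://localhost:8888/v1/buckets/default/collections/tasks/records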
This also means you can build your own client on your local machine and create a complex solution in a simple manner. If you don’t have experience with n8n, you can just import the following JSON file, which defines the entire workflow:
{
  "name": "My workflow",
  "nodes": [
    {
      "parameters": {},
      "name": "Start",
      "type": "n8n-nodes-base.start",
      "typeVersion": 1,
      "position": [250, 300]
    },
    {
      "parameters": {
        "authentication": "basicAuth",
        "requestMethod": "POST",
        "url": "=http://api:8888/v1/buckets/default/collections/tasks/records",
        "jsonParameters": true,
        "options": {},
        "bodyParametersJson": "{\"data\": {\"description\": \"Check your containers with Slim.ai\", \"status\": \"todo\"}}"
      },
      "name": "POST new record",
      "type": "n8n-nodes-base.httpRequest",
      "typeVersion": 1,
      "position": [450, 300],
      "credentials": {
        "httpBasicAuth": "KintoAuth"
      }
    },
    {
      "parameters": {
        "authentication": "basicAuth",
        "url": "http://api:8888/v1/buckets/default/collections/tasks/records",
        "options": {}
      },
      "name": "GET all records",
      "type": "n8n-nodes-base.httpRequest",
      "typeVersion": 1,
      "position": [650, 300],
      "credentials": {
        "httpBasicAuth": "KintoAuth"
      }
    }
  ],
  "connections": {
    "Start": {
      "main": [
        [
          {
            "node": "POST new record",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "POST new record": {
      "main": [
        [
          {
            "node": "GET all records",
            "type": "main",
            "index": 0
          }
        ]
      ]
    }
  },
  "active": false,
  "settings": {}
}
Once it's working, you should be able to use n8n to POST data to the Kinto API and GET a response from the API.
Finally, the full `docker-compose.yml` file should look like this:
version: '3'
networks:
  backend:
    name: backend_NW
  frontend:
    name: frontend_NW
services:
  db:
    image: postgres
    networks:
      - backend
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
  cache:
    networks:
      - backend
    image: library/memcached
  api:
    image: kinto/kinto-server
    links:
      - db
      - cache
    ports:
      - "8888:8888"
    networks:
      - backend
      - frontend
    environment:
      KINTO_CACHE_BACKEND: kinto.core.cache.memcached
      KINTO_CACHE_HOSTS: cache:11211 cache:11212
      KINTO_STORAGE_BACKEND: kinto.core.storage.postgresql
      KINTO_STORAGE_URL: postgresql://postgres:postgres@db/postgres
      KINTO_PERMISSION_BACKEND: kinto.core.permission.postgresql
      KINTO_PERMISSION_URL: postgresql://postgres:postgres@db/postgres
  n8n:
    image: n8nio/n8n
    networks:
      - frontend
    links:
      - api
    ports:
      - "5678:5678"
# Conclusion
Containers offer many advantages that help you create composable and scalable applications, including isolation and easy connectivity. However, you have to use them in a logical and secure way, and you should learn about the common ways of developing cloud-ready applications before you start.
In this article, we taught you how to use Docker Compose to create a simple web application with three layers as an example of a multi-container solution.
As you build multi-container apps, the Slim Developer Platform makes a great starting point for vetting container images. Add frequently used container images to your Favorites for easy access.
# About the Author
Nicolas Bohorquez (@nickmancol) is a Data Architect at Merqueo. He has a Master’s Degree in Data Science for Complex Economic Systems and a Major in Software Engineering. Previously, Nicolas has been part of development teams in a handful of startups, and has founded three companies in the Americas. He is passionate about the modeling of complexity and the use of data science to improve the world.