Getting Started with Docker: Running Flask, Redis DB, and NGINX
Docker is a trending term nowadays. The core concept is the container, which provides isolation for different applications and makes it possible to ship and run an application on a wide variety of platforms.
This tutorial covers the basic concepts of Docker, introduces some basic commands, and finally walks through deploying a Flask app connected to a Redis database and served behind NGINX using a Docker network.
Content
- Introducing Docker & Setup
- Basic Commands
- Hands On Time: A Flask Project on Docker
Introducing Docker
The Purpose of Containers
For beginners, you can think of a container as a virtual machine, or even just a machine. The underlying infrastructure and mechanism are of course not the same, but the purpose of using containers is the same: running applications independently.
As a developer, you often bump into problems when you want to ship an application to a different platform -- everything regarding the environment might need to be reconfigured again, which means another round of pain. With Docker, all the dependencies are packed together with the code into a single container, which you can just lift and go.
Unlike bulky VMs, each of which carries an entire guest OS, containers share the host kernel, so they are more lightweight and require fewer resources.
Setup
Glossary
- DockerHub
- A place for sharing images. Share your image, or pull existing images from here.
- Image
- A package including code, libraries, environment variables, and config files that can be run. Think of it as a set of configurations for a single environment. Hence images can be created, downloaded, shared, etc. Once an image is executed, it becomes a container.
- Container
- A running instance of an image. Images are configurations residing in storage, and containers bring them into memory. Think of an image as a static environment configuration; a container is created once you load and run that environment. There can be many container instances running a certain image, but one image defines only one environment.
- Service
- An image for a microservice in the context of some larger application that you wish to run in a distributed environment. You can scale a service by starting a set of replicated containers. Docker Compose is used to define a set of services.
- Volume
- A directory providing persistent and sharable data. Without volumes, data is destroyed once the container is removed.
- Dockerfile
- A configuration file describing the environment your container runs in. The file is composed of a set of instructions for setting up that environment, such as copying files into the image, making certain ports accessible, and specifying the commands to run at startup.
- Docker Network
- A mechanism for a cluster of containers to communicate with each other. A single project may contain several containers: one for the web app, one for the database, one for the proxy server, etc. A Docker network provides a way for them to communicate with each other, while different Docker networks remain isolated. More conveniently, while the IP address of each container is dynamic, its name within a network is static, which provides a way to access, for example, a container `example` with port `8080` published via `http://example:8080`.
- Docker Compose
- A tool for defining and running a cluster of containers. For a single project consisting of several containers, you may otherwise have to `docker run` them individually to start the whole application. Docker Compose lets you create and run all of your services with a single command. Definitions are written in `docker-compose.yml`.
Don't Mix Up...
`ENTRYPOINT` vs. `CMD`
- Both specify which command runs when the container starts. The default `ENTRYPOINT`, or `entrypoint` in `docker-compose.yml`, is `/bin/sh -c`; the default `CMD` is `bash`. Consider the command `docker run -it some-image /bin/bash`; everything after `some-image` is the `CMD` (in this case `/bin/bash`). Running this command will run `ENTRYPOINT + CMD`, i.e. `/bin/sh -c /bin/bash`. For example, if you specify the `ENTRYPOINT` as `ls` and the `CMD` as `.`, the full command looks like `docker run --entrypoint="ls" some-image .`.
`EXPOSE` vs. `-p`
- `EXPOSE`, or `expose` in `docker-compose.yml`, is for inter-container communication, e.g. over a Docker network; an exposed port is not accessible from outside Docker. `-p`, or `ports` in `docker-compose.yml`, publishes the port to the outside world, as well as to all other containers.
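For instance (using a placeholder image name `some-image`), the difference looks like this:

```dockerfile
# In a Dockerfile: the port is reachable by other containers
# on the same Docker network, but not from the host
EXPOSE 8080
```

```
# At run time: -p additionally publishes container port 8080 on host port 80,
# so it becomes reachable from outside Docker
$ docker run -p 80:8080 some-image
```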
Basic Commands
```
# List the current images you have and their details
docker images

# Download/upload an image from/to a registry
docker pull <image-name>[:<tag>]
#   e.g. docker pull nginx:latest
#   e.g. docker pull someuser/his-image
docker push <image-name>[:<tag>]
#   e.g. docker push me/my-image

# Remove an image
docker rmi <image-name|image-id|image-tag>

# Remove all images
docker rmi $(docker images -q)

# Create an image using the Dockerfile under the specified path
docker build -t <image-name>[:<tag>] <path-to-directory-containing-dockerfile>
#   e.g. docker build -t me/my-image .
```
```
# List all running containers
docker ps

# Show container details
docker inspect <container-name|container-id>

# Run an image
docker run <image-name>

# Run an image with a container name assigned
docker run --name <container-name> <image-name>

# Run an image in interactive mode, interacting with the bash shell created in the container
docker run -it <image-name> /bin/bash

# Run an image in detached mode, i.e. in the background
docker run -d <image-name>

# Automatically remove the container when it exits
docker run --rm <image-name>

# Run an image with a published port, mapping the port exposed by the container to a port on the host machine
docker run -p <host-port>:<container-exposed-port> <image-name>
#   e.g. docker run -p 80:8080 nginx

# Run an image with a volume specified, sharing the directory at the host path with the container path
docker run -v <host-path>:<container-path> <image-name>
#   e.g. docker run -v /etc/nginx:/etc/nginx nginx

# Stop a container
docker stop <container-name>

# Remove a container
docker rm <container-name|container-id>

# Remove all containers
docker rm $(docker ps -a -q)
```
```
# List all networks
docker network ls

# Create a network
docker network create <network-name>

# Connect a container to a network
docker network connect <network-name> <container-name>

# Or, run a container with the network specified
docker run --net <network-name> <image-name>

# Show network details
docker network inspect <network-name>

# Remove a network
docker network rm <network-name>

# Disconnect a container from a network
docker network disconnect <network-name> <container-name>
```
```
# (Re)create and run the services
docker-compose up

# Remove stopped services
docker-compose rm
```
Hands On Time: A Flask Project on Docker
In this tutorial, we will first create a network so that containers can communicate within it. Then we create and test the three containers (Flask app, Redis DB, and NGINX server) one by one. Finally, we demonstrate how to use Docker Compose to start the three services all at once.
Init Project
Create a project named `example` with the structure below. Different services are separated into different folders, each running one container (or several containers, if you want to scale).

```
.
├── README.md
└── src
    ├── docker-compose.yml       # docker compose configuration
    ├── flaskapp                 # Service 1
    │   ├── Dockerfile           # image configuration
    │   ├── __init__.py
    │   ├── example
    │   │   ├── __init__.py
    │   │   ├── app.py           # flask app entry
    │   │   ├── db.py            # APIs to redis db
    │   │   └── wsgi.py          # WSGI server entry
    │   ├── requirements.txt     # dependency information (production)
    │   └── setup.py             # dependency information (development)
    ├── nginx                    # Service 2
    │   ├── Dockerfile           # image configuration
    │   ├── __init__.py
    │   └── nginx.conf           # nginx server configuration
    └── redisdb                  # Service 3
        ├── Dockerfile           # image configuration
        ├── __init__.py
        └── redis.conf           # redis server configuration
```
Create a `virtualenv` for each container that needs one. Since only `flaskapp` needs a Python environment, we create just this one.

```
$ cd src/flaskapp
$ virtualenv venv
```
Install packages in the `virtualenv`.

```
$ cd src/flaskapp
$ source venv/bin/activate
(venv) $ pip3 install -e .   # dev mode
(venv) $ deactivate
```
Note that `virtualenv` is optional. It can help you test your project in an isolated Python environment before you deploy it on Docker.
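For reference, a minimal `setup.py` for this layout might look roughly like the sketch below. The package name `exampleflask` matches the name filtered out of `requirements.txt` later on; the exact dependency list is an assumption, not the repository's authoritative one.

```python
# src/flaskapp/setup.py — a minimal sketch (adjust metadata and dependencies to your project)
from setuptools import setup, find_packages

setup(
    name='exampleflask',
    version='0.1.0',
    packages=find_packages(),
    install_requires=[
        'flask',      # web framework
        'gunicorn',   # WSGI server used to run the app
        'redis',      # client for the redis db container
    ],
)
```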
Create Docker Network
Create a Docker network for communication between the three containers below. We will name the network `example`, the same as our project name.

```
$ docker network create example
```
Test
```
$ docker network ls
> NETWORK ID          NAME                DRIVER              SCOPE
> ...
> abcdefghijkl        example             bridge              local
> ...
```
Flask App Container
The following assumes the `venv` in `src/flaskapp` is activated.
Add Flask App & Gunicorn
See `src/flaskapp/example/app.py` and `src/flaskapp/example/wsgi.py`.
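If you are writing these files from scratch, a minimal sketch could look like the following; only the Hello World route is shown here, and the Redis-backed route is added later in `db.py`.

```python
# src/flaskapp/example/app.py — a minimal sketch of the flask app entry
from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello():
    return 'Hello World!'
```

```python
# src/flaskapp/example/wsgi.py — WSGI entry; gunicorn looks for "application" by default
from example.app import app as application

if __name__ == '__main__':
    application.run()
```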
Test
```
(venv) $ cd src/flaskapp
(venv) $ gunicorn --bind 0.0.0.0:8080 example.wsgi

# Open a browser and go to `localhost:8080`. You should see `Hello World!`.
```
Freeze dependencies into `requirements.txt`.

```
(venv) $ pip3 freeze | grep -v 'exampleflask' > requirements.txt   # ignore dependency on itself
```
Deploy on Docker
See `src/flaskapp/Dockerfile`. `venv` is ignored by adding it to `.dockerignore`.
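In case you prefer to write the Dockerfile yourself, a rough sketch might look like this; the base image and the exact steps are assumptions, and the repository's version is authoritative.

```dockerfile
# src/flaskapp/Dockerfile — a minimal sketch
FROM python:3

WORKDIR /app

# install production dependencies first to make better use of the build cache
COPY requirements.txt .
RUN pip3 install -r requirements.txt

# copy the application code
COPY . .

# port used by gunicorn, reachable by other containers on the same docker network
EXPOSE 8080

CMD ["gunicorn", "--bind", "0.0.0.0:8080", "example.wsgi"]
```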
Build the image with tag `yourusername/exampleflask`.

```
$ cd src/flaskapp
$ docker build -t yourusername/exampleflask .
```
Run a container on image `yourusername/exampleflask` with name `exampleflask`, publishing port `8080`.

```
$ docker run -d --rm -p 8080:8080 --name exampleflask yourusername/exampleflask
```
Test
- Open a browser and go to `localhost:8080`. You should see `Hello World!`.
Redis DB Container
The following assumes the `venv` in `src/flaskapp` is activated.
Add Redis DB to Flask App
See `src/flaskapp/example/db.py`.
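The actual implementation lives in the repository; a rough sketch of a Redis-backed route could look like the following. The `REDIS_HOST` environment variable is an assumption used here so the same code works both locally and inside the Docker network, where the host is the container name `exampleredis`.

```python
# src/flaskapp/example/db.py — a minimal sketch of a redis-backed route
import os

import redis

from example.app import app

# "localhost" when testing against a local redis-server,
# "exampleredis" when running inside the docker network
redis_client = redis.Redis(host=os.environ.get('REDIS_HOST', 'localhost'), port=6379)

@app.route('/<name>')
def hello_name(name):
    # touch redis just to exercise the connection (illustrative only)
    redis_client.set('last_visitor', name)
    return 'Hello {}!'.format(name)

# note: this module must be imported (e.g. from wsgi.py or app.py)
# so that the route gets registered
```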
Test
Install `redis-server` on your local machine first for testing.

```
# Start the redis server on the default port `6379`
$ redis-server

# Start the flask app
(venv) $ cd src/flaskapp
(venv) $ gunicorn --bind 0.0.0.0:8080 example.wsgi

# Open a browser and go to `localhost:8080/<your-name>`. You should see `Hello <your-name>!`.
```
Deploy on Docker
See `src/redisdb/Dockerfile` and `src/redisdb/redis.conf`.
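If you build it yourself, a minimal sketch based on the official `redis` image could be:

```dockerfile
# src/redisdb/Dockerfile — a minimal sketch
FROM redis

# use the customized configuration instead of the default one
COPY redis.conf /usr/local/etc/redis/redis.conf

# reachable by other containers on the same docker network
EXPOSE 6379

CMD ["redis-server", "/usr/local/etc/redis/redis.conf"]
```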
Build the image with tag `yourusername/exampleredis`.

```
$ cd src/redisdb
$ docker build -t yourusername/exampleredis .
```
Run a container on image `yourusername/exampleredis` with name `exampleredis`, publishing port `6379`.

```
$ docker run -d --rm -p 6379:6379 --name exampleredis yourusername/exampleredis
```
Test
```
$ redis-cli
> 127.0.0.1:6379>

# This is wrong
> not connected>
```
Stop the containers, then run `flaskapp` and `redisdb` in the Docker network `example` so they can communicate.

```
$ docker stop exampleflask exampleredis

# No need to publish a port for redis, as the port is `EXPOSE`d in the `Dockerfile`
# to other containers in the same docker network
$ docker run -d --rm --net example --name exampleredis yourusername/exampleredis
$ docker run -d --rm -p 8080:8080 --net example --name exampleflask yourusername/exampleflask
```
Test
- Open a browser and go to `localhost:8080/<your-name>`. You should see `Hello <your-name>!`.
Note that the `Dockerfile` is needed only when you want to use your customized redis server configuration written in `redis.conf`. If you don't need a customized configuration, you don't need to build a new image yourself and can simply use the base `redis` image:

```
$ docker run -d --rm --net example --name exampleredis redis redis-server
```
Then modify `docker-compose.yml` accordingly.

```
...
  redis:
    image: redis
    container_name: exampleredis
```
Note that `bind 127.0.0.1` in the `redis.conf` file SHOULD be changed to `bind 0.0.0.0`, or else other containers still cannot access the redis server.
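The relevant part of `redis.conf` would then look roughly like this excerpt:

```
# src/redisdb/redis.conf (excerpt)
# listen on all interfaces so that other containers in the docker network can connect
bind 0.0.0.0
port 6379
```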
NGINX Container
Set Up an NGINX Server
For `HTTP` requests, see `src/nginx/nginx.conf.sample` and follow this tutorial.
For `HTTPS` requests, see `src/nginx/nginx-ssl.conf.sample` and follow this tutorial. Make sure that you have used letsencrypt or other means to retrieve the certificate and keys.
Choose either of them, modify the `<your-domain-name>` (and `your.domain.name` for `HTTPS`) in the `*.sample` file, and name it `nginx.conf`. For HTTPS, if you did not use `letsencrypt`, also change the `ssl_certificate` and `ssl_certificate_key` entries to the corresponding paths.
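For orientation, the HTTP version boils down to a reverse proxy that forwards requests to the Flask container by its name on the Docker network. A rough sketch of such a server block (assuming it is included under nginx's `http` context, e.g. in `conf.d`; the `*.sample` files in the repository are authoritative):

```nginx
# src/nginx/nginx.conf — a rough HTTP-only sketch
server {
    listen 80;
    server_name <your-domain-name>;

    location / {
        # "exampleflask" resolves to the flask container inside the docker network
        proxy_pass http://exampleflask:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```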
Deploy on Docker
See `src/nginx/Dockerfile`.
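A minimal sketch of that Dockerfile, assuming the config simply replaces the default server configuration of the official `nginx` image:

```dockerfile
# src/nginx/Dockerfile — a minimal sketch
FROM nginx

# replace the default server configuration with ours
COPY nginx.conf /etc/nginx/conf.d/default.conf

EXPOSE 80 443
```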
Build the image with tag `yourusername/examplenginx`.

```
$ cd src/nginx
$ docker build -t yourusername/examplenginx .
```
Run a container on image `yourusername/examplenginx` with name `examplenginx`, publishing port `80` (and `443` for `HTTPS`). Note that `-p 8080:8080` is no longer needed when starting the flask app container, as we will not access that port directly from the browser anymore but go through this nginx proxy server instead.

```
# HTTP
$ docker run -d --rm --net example -p 80:80 --name examplenginx yourusername/examplenginx

# HTTPS, share the directory containing the SSL certificate with -v
$ docker run -d --rm --net example -p 80:80 -p 443:443 -v /etc/letsencrypt:/etc/letsencrypt --name examplenginx yourusername/examplenginx
```
Test
HTTP
- Open a browser and go to `http://localhost`. You should see `Hello World!`.
HTTPS
- Open a browser and go to `https://localhost`. You should see `Hello World!`.
Wrap up the Project with Docker Compose
After testing the individual containers, you can wrap all the commands up into a single `docker-compose.yml` file, and everything can be started with a single command. All the parameters you passed to the commands when starting the containers are now specified in `docker-compose.yml`.
The Docker network we created is not needed anymore, as Docker Compose creates a default network for all of its services. But to build a more complex network topology, you can define custom networks in the `docker-compose.yml` file as well.
Deploy with Docker Compose
See `src/docker-compose.yml`.
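As a rough sketch of what that file ties together; the build contexts, the `depends_on` ordering, and the commented HTTPS volume are assumptions based on the commands used above.

```yaml
# src/docker-compose.yml — a minimal sketch
version: '3'

services:
  flaskapp:
    build: ./flaskapp
    image: yourusername/exampleflask
    container_name: exampleflask

  redis:
    build: ./redisdb
    image: yourusername/exampleredis
    container_name: exampleredis

  nginx:
    build: ./nginx
    image: yourusername/examplenginx
    container_name: examplenginx
    ports:
      - "80:80"
      - "443:443"
    # for HTTPS, share the directory containing the SSL certificate
    # volumes:
    #   - /etc/letsencrypt:/etc/letsencrypt
    depends_on:
      - flaskapp
      - redis
```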
Start docker compose.

```
$ cd src
$ docker-compose up
```
Test
HTTP
- Open a browser and go to `http://localhost`. You should see `Hello World!`.
HTTPS
- Open a browser and go to `https://localhost`. You should see `Hello World!`.
Debug Tips
Use `-it` to run containers in interactive mode so that you can test, view logs, curl other containers, etc. in the environment the app runs in.

```
$ docker run -it --rm -p 8080:8080 --net example --name exampleflask yourusername/exampleflask /bin/bash
> root@abcdefghijkl:~#

# try curl-ing other containers in the same network
root@abcdefghijkl:~# apt-get -qq update && apt-get -yqq install curl
root@abcdefghijkl:~# curl <other-container>:<port>
> ...

# list the hosts known to this container
root@abcdefghijkl:~# cat /etc/hosts
> ...
```
Print the logs of a container.

```
$ docker logs exampleflask
> ...
```
List the running containers to ensure they didn't encounter errors.

```
$ docker ps
CONTAINER ID   IMAGE                       COMMAND                  CREATED         STATUS         PORTS                    NAMES
abcdefghijkl   yourusername/exampleflask   "gunicorn --bind 0..."   some time ago   Up some time   0.0.0.0:8080->8080/tcp   exampleflask
mnopqrstuvwx   yourusername/exampleredis   "docker-entrypoint..."   some time ago   Up some time   6379/tcp                 exampleredis
```
Inspect the network to ensure the containers are running within it.

```
$ docker network inspect example
> [
    {
        "Name": "example",
        "Id": "...",
        "Created": "...",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        // ...other properties
        "Containers": {
            "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx": {
                "Name": "exampleredis",
                "EndpointID": "yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy",
                "MacAddress": "aa:bb:cc:dd:ee:ff",
                "IPv4Address": "w.x.y.z/a",
                "IPv6Address": ""
            },
            // ...other container info
        },
        // ...other properties
    }
]
```