In a previous post I described how to set up a perfect Python project with dependency management, formatting, linting and testing tools all set up and ready to go.
After writing your Python application, the next stage is deploying and running it. Docker provides an excellent abstraction that guarantees the environment for running the application is identical on every deployment and run, even across different hardware or infrastructure.
I will assume you already have Docker installed; if not, you can follow the instructions here.
Let's jump in at the deep end. Here's the finished
Dockerfile that will build an image for running our Python application. It's not long, but there's a fair bit to unpack and explain, so here goes.
```dockerfile
FROM python:3.7-slim AS base

# Setup env
ENV LANG C.UTF-8
ENV LC_ALL C.UTF-8
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONFAULTHANDLER 1


FROM base AS python-deps

# Install pipenv and compilation dependencies
RUN pip install pipenv
RUN apt-get update && apt-get install -y --no-install-recommends gcc

# Install python dependencies in /.venv
COPY Pipfile .
COPY Pipfile.lock .
RUN PIPENV_VENV_IN_PROJECT=1 pipenv install --deploy


FROM base AS runtime

# Copy virtual env from python-deps stage
COPY --from=python-deps /.venv /.venv
ENV PATH="/.venv/bin:$PATH"

# Create and switch to a new user
RUN useradd --create-home appuser
WORKDIR /home/appuser
USER appuser

# Install application into container
COPY . .

# Run the application
ENTRYPOINT ["python", "-m", "http.server"]
CMD ["--directory", ".", "8000"]
```
Let's describe each of the sections:
First we specify the image that we are building our image on top of. This is an official Docker image that has Python 3.7 installed and is slimmed down to reduce its size. We give this image the name
base and it will be used in two other build stages:
```dockerfile
FROM python:3.7-slim AS base
```
Next we set environment variables that set the locale correctly, stop Python from generating
.pyc files, and enable Python tracebacks on segfaults:
```dockerfile
# Setup env
ENV LANG C.UTF-8
ENV LC_ALL C.UTF-8
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONFAULTHANDLER 1
```
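As a quick sanity check (run locally, outside Docker, assuming any Python 3 interpreter), you can confirm that `PYTHONDONTWRITEBYTECODE` really does flip the interpreter's `sys.dont_write_bytecode` flag:

```python
import os
import subprocess
import sys

# Spawn a child interpreter with PYTHONDONTWRITEBYTECODE=1 and check
# that it reports sys.dont_write_bytecode as True.
env = dict(os.environ, PYTHONDONTWRITEBYTECODE="1")
out = subprocess.run(
    [sys.executable, "-c", "import sys; print(sys.dont_write_bytecode)"],
    env=env,
    capture_output=True,
    text=True,
).stdout.strip()
print(out)  # → True
```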
Next we start a new build stage using the
base image we've just created. We're going to install all our python dependencies in the
python-deps image, and later copy them into our
runtime image. This limits the objects in the
runtime image to only those needed to run the application:
```dockerfile
FROM base AS python-deps
```
Now we install
pipenv, which we are using to manage our application dependencies, and we install
gcc, which is needed to compile several Python libraries (this may not be necessary, depending on the libraries you use):
```dockerfile
# Install pipenv and compilation dependencies
RUN pip install pipenv
RUN apt-get update && apt-get install -y --no-install-recommends gcc
```
The next step is to install the dependencies in a new virtual environment. We set
PIPENV_VENV_IN_PROJECT so we know exactly where it will be located:
```dockerfile
# Install python dependencies in /.venv
COPY Pipfile .
COPY Pipfile.lock .
RUN PIPENV_VENV_IN_PROJECT=1 pipenv install --deploy
```
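The reason `--deploy` is safe to use here is that `Pipfile.lock` pins an exact version (and hashes) for every dependency, so every image build resolves the same packages. A minimal sketch, using a hypothetical, heavily trimmed lock-file fragment:

```python
import json

# A hypothetical, trimmed Pipfile.lock fragment: pipenv pins exact
# versions here, which is what makes the image build reproducible.
lock = json.loads("""
{
  "default": {
    "requests": {"version": "==2.22.0"}
  },
  "develop": {}
}
""")
pins = {name: meta["version"] for name, meta in lock["default"].items()}
print(pins)  # → {'requests': '==2.22.0'}
```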
Now we are ready to build our
runtime image so we start a new build stage:
```dockerfile
FROM base AS runtime
```
Copy in the virtual environment we made in the
python-deps image, but without anything we used to make it:
```dockerfile
# Copy virtual env from python-deps stage
COPY --from=python-deps /.venv /.venv
ENV PATH="/.venv/bin:$PATH"
```
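Prepending `/.venv/bin` to `PATH` is what makes the plain `python` in our `ENTRYPOINT` resolve to the virtual environment's interpreter. The effect can be simulated locally (a POSIX system and a made-up tool name are assumed):

```python
import os
import shutil
import stat
import tempfile

# Create a fake executable in a temp dir, prepend the dir to PATH,
# and check that lookup now resolves to it - just like /.venv/bin.
d = tempfile.mkdtemp()
fake = os.path.join(d, "mytool")
with open(fake, "w") as f:
    f.write("#!/bin/sh\necho venv\n")
os.chmod(fake, os.stat(fake).st_mode | stat.S_IXUSR)
os.environ["PATH"] = d + os.pathsep + os.environ["PATH"]
print(shutil.which("mytool"))  # resolves to the temp dir copy
```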
Running as the root user is a security risk, so we create a new user and use their home directory as the working directory:
```dockerfile
# Create and switch to a new user
RUN useradd --create-home appuser
WORKDIR /home/appuser
USER appuser
```
Now copy in the application code from our source repository:
```dockerfile
# Install application into container
COPY . .
```
Finally we set the
ENTRYPOINT which is the command that will be run when the image is run, and the
CMD which is the default arguments that will be added to the command:
```dockerfile
# Run the application
ENTRYPOINT ["python", "-m", "http.server"]
CMD ["--directory", ".", "8000"]
```
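The way Docker combines the two can be sketched as follows (a simple model of the exec-form behaviour, not Docker's actual code): the container's argv is `ENTRYPOINT` followed by `CMD`, and any arguments passed to `docker run` replace `CMD` only.

```python
# Docker's exec form: the container's argv is ENTRYPOINT + CMD.
entrypoint = ["python", "-m", "http.server"]
cmd = ["--directory", ".", "8000"]

# Default run: `docker run image`
default_argv = entrypoint + cmd
print(default_argv)

# Extra args to `docker run image --directory . 9000` replace CMD only.
override_argv = entrypoint + ["--directory", ".", "9000"]
print(override_argv)
```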
By default Docker will copy in all the files in our project when it runs the
COPY . . command, which will lead to a lot of unneeded files bloating the image and potentially introducing security holes.
We can make a
.dockerignore file to exclude files and directories from the image. To avoid forgetting to update it every time we add a new file that should be excluded, I prefer to ignore everything by default and then add back the files that are actually needed:
```
# Ignore everything
**

# ... except for dependencies and source
!Pipfile
!Pipfile.lock
!my_package/
```
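The allow-list logic can be modelled in a few lines of Python (a toy model, not Docker's real matcher: rules apply in order, later matches win, a leading `!` re-includes a path, and the trailing-slash directory rule is approximated as `my_package/**`):

```python
import fnmatch

# Toy model of .dockerignore: rules are applied in order, the last
# matching rule wins, and "!" negates an exclusion.
rules = ["**", "!Pipfile", "!Pipfile.lock", "!my_package/**"]

def excluded(path):
    result = False
    for rule in rules:
        negate = rule.startswith("!")
        pattern = rule.lstrip("!")
        if fnmatch.fnmatch(path, pattern):
            result = not negate
    return result

print(excluded("secret.env"))         # → True  (excluded by "**")
print(excluded("Pipfile"))            # → False (re-included)
print(excluded("my_package/app.py"))  # → False (re-included)
```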
Now that we have defined our
.dockerignore we can build and run our application using Docker.
Building the image:
```shell
docker build . -t name:tag
```
This will build the
Dockerfile in the current directory and name and tag the resulting image as name:tag.
Running the image:

```shell
docker run --rm name:tag arg1 arg2
```
--rm will delete the container once it has finished running.
arg1 arg2 are optional and will override the default arguments we provided in
CMD ["--directory", ".", "8000"].
Now that we have written our application and defined our Docker image, we want to be able to run our tests and build our image in continuous integration. Next time we will show how GitHub Actions makes this incredibly simple.