How to obtain an SSL certificate for a containerised web application - for free!

If you're reading this then it's safe to assume that I don't need to explain why it's important to secure your web service, so we can jump straight into the how. Unfortunately, a catch-all tutorial is simply not possible given the plethora of technologies in the modern developer's arsenal, so I've settled on a tech stack similar to ours at Think Engineer for this tutorial. A more specific title could be: How to obtain an SSL certificate, using Let's Encrypt, for a multi-container Docker web application running on Ubuntu 18.04 - for free!

Specifically, this tutorial is going to walk you through the exact steps necessary to obtain an SSL certificate from Let's Encrypt for a web application running behind an Nginx reverse proxy. We'll build a simple multi-container Flask web application, deploy it to an Ubuntu server, point a domain name at the server and then, finally, generate and automatically renew an SSL certificate.

Firstly, the server. I'm using a Virtual Private Server (VPS) from Digital Ocean that runs Ubuntu 18.04. It's the lowest spec VPS offered by Digital Ocean (1 GB RAM with a single CPU/core and 25 GB of SSD storage) but it's more than capable of running a simple web application and, at the time of writing, only costs £5 a month. You'll need superuser access to your VPS. Make a note of your server's IP address.

Now, if you just want to know how to generate a valid SSL certificate, click here; that link will take you straight to the appropriate section. Everything preceding it is detailed background on the architecture of the application, so I recommend that you read it, but if you already have a working application then you probably don't need to.

The Application

So now we need an application to deploy. We're going to build a simple RESTful API using Python and Flask. This isn't the main focus of the article, so what follows is just a brief overview. As previously mentioned, we're using Docker to containerise our application; there are lots of great blog posts explaining the advantages of containerisation over virtual machines, so I won't delve into that here. Here's the directory structure of the entire application (and here's a Git repository containing everything):

    - rest_api/
        - rest_api/
            - rest_api/
        - rest_api_environment.yml
        - Dockerfile
    - nginx/
        - Dockerfile
        - my_web_app.conf
        - nginx.conf
    - docker-compose.yml

At the top level of the application directory we have a folder containing the API, another containing the Nginx configuration and a docker-compose.yml file. The API and Nginx folders will eventually become two separate docker containers configured using docker-compose; hence why it's a multi-container application. Let's walk through this from the top down, starting with the docker-compose configuration.
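Before looking at the compose configuration, it may help to see what the API package itself could look like. The gunicorn command in the compose file loads `wsgi:app` from the inner rest_api directory, so a minimal sketch of that module might look like the following. Note that this is an illustrative sketch, not the actual application from the repository; the route and response are placeholders.

```python
# Hypothetical contents of rest_api/rest_api/wsgi.py - a minimal Flask
# application exposing the `app` object that gunicorn loads as `wsgi:app`.
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/")
def index():
    # Placeholder endpoint; a real API would define its resources here.
    return jsonify(status="ok")

if __name__ == "__main__":
    # For local development only - in the container, gunicorn serves the app.
    app.run(host="0.0.0.0", port=8000)
```

Anything more elaborate (blueprints, database connections and so on) slots into the same package without changing how gunicorn is invoked.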

The contents of the docker-compose.yml file are as follows:

version: '3'

services:

  rest_api:
    restart: always
    build: ./rest_api
    expose:
      - 8000
    command: /opt/conda/envs/rest-api-env/bin/gunicorn -w 4 -b :8000 --chdir /src/rest_api/rest_api/ wsgi:app

  nginx:
    restart: always
    build: ./nginx
    volumes:
      - /etc/letsencrypt/:/etc/letsencrypt/
    ports:
      - 80:80
      - 443:443
    depends_on:
      - rest_api

As you can see, two services are specified. The first builds the API, exposes port 8000 internally and tells gunicorn, a popular Python Web Server Gateway Interface (WSGI) HTTP server, to run 4 workers, listen on port 8000 and run our application. The second service handles all Nginx configuration and exposes ports 80 (HTTP) and 443 (HTTP over TLS/SSL). Both services are set to restart upon crashing to minimise downtime. We also specify a Docker volume for the Nginx service: this is where we will store our SSL certificate and private key files, so make a note of this directory. When ready, we can use the docker-compose build command to build our application.
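To give a sense of how the Nginx service ties the two containers together, the my_web_app.conf file in the nginx folder could proxy incoming HTTP requests to the API container along these lines. This is a hedged sketch rather than the repository's actual configuration: the server name is a placeholder, and it relies on the fact that compose services can reach each other by service name (here, rest_api) on the internal network.

```
# Hypothetical nginx/my_web_app.conf - server_name is a placeholder domain.
server {
    listen 80;
    server_name example.com;

    location / {
        # Forward requests to the rest_api container on its exposed port.
        proxy_pass http://rest_api:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

Later, once the certificate has been generated, a second server block listening on 443 with the certificate paths under /etc/letsencrypt/ would be added alongside this one.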