
If you're reading this then it's safe to assume that I don't need to explain why it's important to secure your web service, so we can jump straight into the how. Unfortunately, a catch-all tutorial simply isn't possible given the plethora of technologies in the modern developer's arsenal, so I've settled on a tech stack similar to the one we use at Think Engineer. A more specific title, then, could be: How to obtain an SSL certificate, using Let's Encrypt, for a multi-container Docker web application running on Ubuntu 18.04 - for free!

Specifically, this tutorial is going to walk you through the exact steps necessary to obtain an SSL certificate from Let's Encrypt for a web application running behind an Nginx reverse proxy. We'll build a simple multi-container Flask web application, deploy it to an Ubuntu server, point a domain name at the server and then, finally, generate and automatically renew an SSL certificate.

Firstly, the server. I'm using a Virtual Private Server (VPS) from Digital Ocean that runs Ubuntu 18.04. It's the lowest-spec VPS offered by Digital Ocean (1 GB RAM with a single CPU/core and 25 GB of SSD storage) but it's more than capable of running a simple web application and, at the time of writing, only costs £5 a month. You'll need superuser access to your VPS. Make a note of your server's IP address.

Now, if you literally just want to know how to generate a valid SSL certificate, skip ahead to the SSL Certification section. Everything before it is detailed background on the architecture of the application, so I recommend reading it, but if you already have a working application then you probably don't need to.

The Application

So now we need an application to deploy. We're going to build a simple RESTful API using Python and Flask; this isn't the main focus of the article, so it will only be a brief overview. As previously mentioned, we're using Docker to containerise our application, and there are plenty of great blog posts explaining the advantages of containerisation over virtual machines, so I won't delve into that here. Here's the directory structure of the entire application (and here's a Git repository containing everything):

my_web_app/
    - rest_api/
        - rest_api/
            - rest_api/
                - __init__.py
                - routes.py
            - wsgi.py
        - rest-api-env.yml
        - Dockerfile
    - nginx/
        - Dockerfile
        - my_web_app.conf
        - nginx.conf
    - docker-compose.yml

At the top level of the application directory we have a folder containing the API, another containing the Nginx configuration and a docker-compose.yml file. The API and Nginx folders will eventually become two separate Docker containers configured using docker-compose; hence it's a multi-container application. Let's walk through this from the top down, starting with the docker-compose configuration.

The contents of the docker-compose.yml file are as follows:

version: '3'

services:
  rest_api:
    restart: always
    build: ./rest_api
    expose:
      - 8000
    command: /opt/conda/envs/rest-api-env/bin/gunicorn -w 4 -b :8000 --chdir /src/rest_api/rest_api/ wsgi:app

  nginx:
    restart: always
    build: ./nginx
    volumes:
      - /etc/letsencrypt/:/etc/letsencrypt/
    ports: 
      - 80:80
      - 443:443
    depends_on:
      - rest_api 

As you can see, two services are specified. The first builds the API, exposes port 8000 internally and tells Gunicorn, a popular Python Web Server Gateway Interface (WSGI) HTTP server, to run 4 workers, listen on port 8000 and run our application. The second service handles all Nginx configuration and exposes ports 80 (HTTP) and 443 (HTTP over TLS/SSL). Both services are set to restart upon crashing to minimise downtime. We also specify a Docker volume; this is where we will store our SSL certificate and private key files, so make a note of this directory. When ready, we can use the docker-compose build command to build our application.
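
If you want a quick sanity check at this point, something like the following should work (run from the directory containing docker-compose.yml; Docker and docker-compose are assumed to be installed):

docker-compose config    # validates and prints the resolved configuration
docker-compose build     # builds both the rest_api and nginx images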


Now that we know how we'll build our application, let's discuss the individual services, starting with the REST API. At the top level of the rest_api directory we have a folder containing the actual application, a Dockerfile and a Conda environment file. Conda is a language-agnostic package, dependency and environment manager. We're using it to handle the installation of Python and a number of dependencies:

name: rest-api-env

channels:
  - defaults
  - conda-forge
dependencies:
  - python=3.6
  - flask=1.0.2
  - gunicorn=19.9
  - pip

The channels specify where Conda should look to find the packages we wish to install, and the actual dependencies are listed below. Specific versions are stated to avoid installing newer versions of dependencies with code-breaking changes. You'll notice that pip, Python's own package manager, is also installed; this is because a number of dependencies aren't available through Conda.
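
If you'd like to check that the environment resolves on your own machine before building the Docker image, a rough sketch (assuming Conda is installed and you're in the directory containing the environment file) is:

conda env create -f rest-api-env.yml                  # create the environment from the file
conda activate rest-api-env                           # switch into it
python -c "import flask; print(flask.__version__)"    # should print 1.0.2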

Next we have the Dockerfile:

FROM continuumio/miniconda3

COPY rest-api-env.yml /tmp/rest-api-env.yml
RUN conda env create -f /tmp/rest-api-env.yml

COPY rest_api /src/rest_api/rest_api/

RUN echo "source activate rest-api-env" > ~/.bashrc
ENV PATH /opt/conda/envs/rest-api-env/bin:$PATH

RUN pip install flask-rebar==1.1.0
RUN pip install marshmallow==2.16.3

Firstly, we start with a pre-defined image that allows us to use Conda. Then we copy our Conda environment file into a temporary directory and run the command to create a Conda environment from this file. After the environment has resolved, we copy our application code into the intended directory and activate the newly created Conda environment. Lastly, within the environment, we install Flask-Rebar and Marshmallow; these Python packages form the basis of our REST API.
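
If you want to build and poke around the API image on its own, without docker-compose, you could do something like the following (the image tag is arbitrary; run from the top-level my_web_app/ directory):

docker build -t rest-api-image ./rest_api       # same build context docker-compose uses
docker run --rm rest-api-image conda env list   # the rest-api-env environment should be listed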

This leaves the application code. To clarify exactly where the application code lives, here's the directory structure solely containing the application:

rest_api/
    - rest_api/
        - __init__.py
        - routes.py
    - wsgi.py

Let's start with wsgi.py:

from rest_api import app

if __name__ == "__main__":
    app.run(host='0.0.0.0', port=80, debug=True)

The file wsgi.py simply imports the app object from the rest_api module. The conditional statement is there in case we wish to run the application manually, say for debugging purposes. The app object is used by Gunicorn, as specified in the docker-compose.yml file.
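
For local debugging you could run the application directly rather than through Gunicorn; a minimal sketch (assuming the Conda environment from earlier is active and you're in the directory containing wsgi.py; binding to port 80 may require elevated privileges, so you may prefer to change the port for local testing):

python wsgi.py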

Then we have the two files contained in the final rest_api directory. The presence of an __init__.py file indicates that this is a Python package. When not needed, it can be an empty file; here, however, it is used to create the app object used by Gunicorn and to make the application aware of all API endpoints registered with Flask-Rebar. Here is __init__.py:

from rest_api.routes import rebar
from flask import Flask

app = Flask(__name__)
rebar.init_app(app)

import rest_api

Lastly, we have routes.py:

from flask import current_app, request
from flask_rebar import Rebar, response


rebar = Rebar()
registry = rebar.create_handler_registry()


@registry.handles(
    rule='/',
    method='GET'
)
def get_index():
    return response(
        data={'message': 'Hello, Being!'},
        status_code=200
    )

Within routes.py we define a simple GET endpoint and add it to the Flask-Rebar registry; this is analogous to using @app.route in vanilla Flask.
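
Once the application is up, a quick way to check the endpoint is to send it a request; something along these lines (the host and port depend on how you're running it, e.g. directly for debugging, behind Nginx locally, or on your deployed domain):

curl -i http://localhost/
# Expect an HTTP 200 with a JSON body along the lines of {"message": "Hello, Being!"}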

So, we've covered the container and environment configuration for the API as well as its source code; let's move onto the Nginx container.


What is Nginx? Nginx is a web server which can also be used as, importantly for us, a reverse proxy. In layman's terms, Nginx will be our externally-facing web server, routing traffic from a certain server name or port to another server. Consider the following use case: I have a single domain name (example.com), a single VPS and two APIs (api_a and api_b). Say I want to reach api_a at api_a.example.com and api_b at api_b.example.com; I can use Nginx to direct traffic from api_a.example.com to api_a and do the same for api_b. Simple!
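
As a rough sketch of that use case (the names api_a and api_b are hypothetical; in a docker-compose setup they would be the service names of the upstream containers), the configuration might look something like this:

server {
    listen 80;
    server_name api_a.example.com;

    location / {
        proxy_pass http://api_a:8000;
    }
}

server {
    listen 80;
    server_name api_b.example.com;

    location / {
        proxy_pass http://api_b:8000;
    }
}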

The Nginx directory contains a Dockerfile, a general Nginx configuration file (nginx.conf) and configuration specific to this application (my_web_app.conf). Let's start with the Dockerfile:

FROM nginx:1.15.5

RUN rm /etc/nginx/nginx.conf

COPY nginx.conf /etc/nginx/

RUN rm /etc/nginx/conf.d/*.conf

COPY my_web_app.conf /etc/nginx/conf.d/

We start with an Nginx base image that handles the installation of Nginx and all prerequisites, and then we replace the main Nginx and application-specific configuration files with our own. I won't go into too much detail about nginx.conf, but it's imperative that you include the following two lines; they are responsible for ensuring our application is served securely:

ssl_protocols TLSv1 TLSv1.1 TLSv1.2; 
ssl_prefer_server_ciphers on;

The application-specific Nginx configuration file is as follows:

server {
    listen 80;
    listen 443 ssl;

    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    server_name example.com;

    client_body_buffer_size 64k;

    location / {
        proxy_pass http://rest_api:8000;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

Here we define a server block and tell it to listen on the ports that we exposed in the docker-compose.yml file. We specify a server_name that tells Nginx exactly which requests to respond to (i.e. api_a.example.com in the previous example) and where to forward each request. Most importantly, we specify the paths of our SSL certificate and private key; keep a note of these.

So that is the application. I know it seems like a lot, and it is, but much of the code is transferable between projects and it's a great basis for a stable and reliable multi-container application.


SSL Certification

Okay, so we're ready to generate an SSL certificate for our web application. There are a few prerequisites for this:

  1. A fully registered domain name, i.e. example.com.
  2. A DNS A record with example.com pointing to your VPS's public IP address.
  3. A DNS A record with www.example.com pointing to your VPS's public IP address.
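
You can sanity-check both A records before going any further; for example, with dig (replace example.com with your own domain; each command should print your VPS's public IP):

dig +short example.com A
dig +short www.example.com A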

So, firstly, we need to install a tool called Certbot. To do this, you need to SSH into your VPS and run the following commands:

sudo add-apt-repository ppa:certbot/certbot
sudo apt update
sudo apt install certbot

Certbot has to answer a cryptographic challenge issued by the Let's Encrypt API. It uses port 80 or 443 to accomplish this, so you need to allow these ports through your firewall. If you're on a Digital Ocean Ubuntu 18.04 machine, the commands are:

sudo ufw allow 80
sudo ufw allow 443
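
You can also check whether anything is already bound to these ports; a quick, illustrative check on the VPS:

sudo ss -tlnp | grep -E ':(80|443) '   # lists any process listening on port 80 or 443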

Be sure to stop any processes that are currently using these ports, for example your web application if it's already running. If you don't, Certbot will fail to bind to the appropriate port and the certificate request will fail. To generate a certificate we are going to use Certbot's standalone mode:

sudo certbot certonly --standalone --preferred-challenges http -d example.com

After being prompted to enter your email address and accept terms and conditions you should receive a message similar to the following:

IMPORTANT NOTES:
 - Congratulations! Your certificate and chain have been saved at:
   /etc/letsencrypt/live/example.com/fullchain.pem
   Your key file has been saved at:
   /etc/letsencrypt/live/example.com/privkey.pem
   Your cert will expire on 2019-03-19. To obtain a new or tweaked
   version of this certificate in the future, simply run certbot
   again. To non-interactively renew *all* of your certificates, run
   "certbot renew"

You'll notice that the paths where the private key and certificate have been stored are the same as those we specified earlier in our configuration files. Each certificate is valid for 90 days, but renewal can be handled automatically using cron. To start, open your crontab with crontab -e and add the following line, replacing the paths with the path to your docker-compose.yml:

@weekly certbot renew --pre-hook "docker-compose -f path/to/docker-compose.yml down" --post-hook "docker-compose -f path/to/docker-compose.yml up -d"

After saving the cron file, a renewal check is now scheduled to run weekly. To perform this check, Certbot needs access to ports 80 and 443, which means we need to stop our application before the check and start it again afterwards; this is handled by the --pre-hook and --post-hook arguments. If Certbot detects that an SSL certificate will expire within 30 days, it will renew it.

Lastly, you can check to see if the renewal command will work correctly by using the following command:

sudo certbot renew --dry-run

Feel free to add the --pre-hook and --post-hook arguments to the dry run of the renewal; however, only do this if you already have your application on your VPS, otherwise you'll be referencing files that don't yet exist.
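
For example, a dry run including the hooks might look like this (adjust the compose file path to wherever your application lives on the VPS):

sudo certbot renew --dry-run --pre-hook "docker-compose -f path/to/docker-compose.yml down" --post-hook "docker-compose -f path/to/docker-compose.yml up -d"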


Deployment

Deploying a multi-container application can be a confusing and daunting task, especially given the multitude of technologies and tools targeting this problem. Quite frankly, it's not even a separate post; it's a separate series of posts. Here, we're simply going to use Git to ensure that the application on our VPS is up-to-date. I've put the demo code in a repository; feel free to clone it to your VPS and change the appropriate directory/domain names to suit.

Once you have the correct code and you're in the application's root directory, we have two final commands:

docker-compose build 
docker-compose up

Build will, as the name suggests, build the application; up will run it. Once it's running you can close the SSH connection. If you want to keep working in the same session, simply append the -d flag to the docker-compose up command to run the containers in detached mode (in the background).
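
For example, a minimal sketch of running detached and then checking on the containers:

docker-compose up -d           # start both containers in the background
docker-compose ps              # both services should be listed as "Up"
docker-compose logs -f nginx   # follow the Nginx logs to watch incoming requests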