
      How To Secure MongoDB on Ubuntu 20.04


      An earlier version of this tutorial was written by Melissa Anderson.

      Introduction

      MongoDB, also known as Mongo, is an open-source document database used in many modern web applications. It is classified as a NoSQL database because it does not rely on a traditional table-based relational database structure. Instead, it uses JSON-like documents with dynamic schemas.
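
      For instance, a document describing a hypothetical user might look like this, with fields that can vary from one document to the next:

      {
        "name": "Sammy",
        "interests": [ "databases", "security" ]
      }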

MongoDB doesn’t have authentication enabled by default, meaning that any user with access to the server where the database is installed can add and delete data without restriction. To close this vulnerability, this tutorial will walk you through creating an administrative user and enabling authentication. You’ll then test to confirm that only this administrative user has access to the database.

      Prerequisites

      To complete this tutorial, you will need the following:

      • A server running Ubuntu 20.04. This server should have a non-root administrative user and a firewall configured with UFW. Set this up by following our initial server setup guide for Ubuntu 20.04.
      • MongoDB installed on your server. This tutorial was validated using MongoDB version 4.4, though it should generally work for older versions of MongoDB as well. To install Mongo on your server, follow our tutorial on How To Install MongoDB on Ubuntu 20.04.

      Step 1 — Adding an Administrative User

      Since the release of version 3.0, the MongoDB daemon is configured to only accept connections from the local Unix socket, and it is not automatically open to the wider Internet. However, authentication is still disabled by default. This means that any users that have access to the server where MongoDB is installed also have complete access to the databases.

As a first step toward closing this vulnerability, you will create an administrative user. Later, you’ll enable authentication and connect as this administrative user to access the database.

To add an administrative user, you must first connect to the Mongo shell. Because authentication is disabled, you can do so with the mongo command, without any other options:
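
      • mongo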

There will be some output above the Mongo shell prompt. Because you haven’t yet enabled authentication, this will include a warning that access control isn’t enabled for the database and that read and write access to data and the database’s configuration is unrestricted:

      Output

MongoDB shell version v4.4.0
      . . .
      2020-06-09T13:26:51.391+0000 I CONTROL [initandlisten] ** WARNING: Access control is not enabled for the database.
      2020-06-09T13:26:51.391+0000 I CONTROL [initandlisten] **          Read and write access to data and configuration is unrestricted.
      . . .
      >

      These warnings will disappear after you enable authentication, but for now they mean anyone who can access your Ubuntu server could also take control over your database.

      To illustrate, run Mongo’s show dbs command:
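
      • show dbs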

      This command returns a list of every database on the server. However, when authentication is enabled, the list changes based on the Mongo user’s role, or what level of access it has to certain databases. Because authentication is disabled, though, it will return every database currently on the system without restrictions:

      Output

admin   0.000GB
      config  0.000GB
      local   0.000GB

      In this example output, only the default databases appear. However, if you have any databases holding sensitive data on your system, any user could find them with this command.

As part of mitigating this vulnerability, this step is focused on adding an administrative user. To do this, you must first connect to the admin database. This is where information about users, like their usernames, passwords, and roles, is stored:
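
      • use admin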

      Output

      switched to db admin

      MongoDB comes installed with a number of JavaScript-based shell methods you can use to manage your database. One of these, the db.createUser method, is used to create new users on the database on which the method is run.

      Initiate the db.createUser method:
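
      • db.createUser(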

      This method requires you to specify a username and password for the user, as well as any roles you want the user to have. Recall that MongoDB stores its data in JSON-like documents. As such, when you create a new user, all you’re doing is creating a document to hold the appropriate user data as individual fields.

      As with objects in JSON, documents in MongoDB begin and end with curly braces ({ and }). To begin adding a user, enter an opening curly brace:
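
      • {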

      Note: Mongo won’t register the db.createUser method as complete until you enter a closing parenthesis. Until you do, the prompt will change from a greater than sign (>) to an ellipsis (...).

      Next, enter a user: field, with your desired username as the value in double quotes followed by a comma. The following example specifies the username AdminSammy, but you can enter whatever username you like:
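
      • user: "AdminSammy",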

      Next, enter a pwd field with the passwordPrompt() method as its value. When you execute the db.createUser method, the passwordPrompt() method will provide a prompt for you to enter your password. This is more secure than the alternative, which is to type out your password in cleartext as you did for your username.

      Note: The passwordPrompt() method is only compatible with MongoDB versions 4.2 and newer. If you’re using an older version of Mongo, then you will have to write out your password in cleartext, similarly to how you wrote out your username:
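
      • pwd: "password",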

      Be sure to follow this field with a comma as well:
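
      • pwd: passwordPrompt(),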

      Then enter the roles you want your administrative user to have. Because you’re creating an administrative user, at a minimum you should grant them the userAdminAnyDatabase role over the admin database. This will allow the administrative user to create and modify new users and roles. Because the administrative user has this role in the admin database, this will also grant it superuser access to the entire cluster.

      In addition, the following example also grants the administrative user the readWriteAnyDatabase role. This grants the administrative user the ability to read and modify data on any database in the cluster except for the config and local databases, which are mostly for internal use:

      • roles: [ { role: "userAdminAnyDatabase", db: "admin" }, "readWriteAnyDatabase" ]

      Following that, enter a closing brace to signify the end of the document:
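
      • }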

      Then enter a closing parenthesis to close and execute the db.createUser method:
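
      • )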

      All together, here’s what your db.createUser method should look like:

      > db.createUser(
      ... {
      ... user: "AdminSammy",
      ... pwd: passwordPrompt(),
      ... roles: [ { role: "userAdminAnyDatabase", db: "admin" }, "readWriteAnyDatabase" ]
      ... }
      ... )
      

      If each line’s syntax is correct, the method will execute properly and you’ll be prompted to enter a password:

      Output

      Enter password:

      Enter a strong password of your choosing. Then, you’ll receive a confirmation that the user was added:

      Output

Successfully added user: {
        "user" : "AdminSammy",
        "roles" : [
          {
            "role" : "userAdminAnyDatabase",
            "db" : "admin"
          },
          "readWriteAnyDatabase"
        ]
      }

      Following that, you can exit the MongoDB client:
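
      • exit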

      At this point, your user will be allowed to enter credentials. However, they will not be required to do so until you enable authentication and restart the MongoDB daemon.

      Step 2 — Enabling Authentication

      To enable authentication, you must edit mongod.conf, MongoDB’s configuration file. Once you enable it and restart the Mongo service, users will still be able to connect to the database without authenticating. However, they won’t be able to read or modify any data until they provide a correct username and password.

      Open the configuration file with your preferred text editor. Here, we’ll use nano:

      • sudo nano /etc/mongod.conf

      Scroll down to find the commented-out security section:

      /etc/mongod.conf

      . . .
      #security:
      
      #operationProfiling:
      
      . . .
      

      Uncomment this line by removing the pound sign (#):

      /etc/mongod.conf

      . . .
      security:
      
      #operationProfiling:
      
      . . .
      

      Then add the authorization parameter and set it to "enabled". When you’re done, the lines should look like this:

      /etc/mongod.conf

      . . .
      security:
        authorization: "enabled"
      . . . 
      

      Note that the security: line has no spaces at the beginning, while the authorization: line is indented with two spaces.

      After adding these lines, save and close the file. If you used nano to open the file, do so by pressing CTRL + X, Y, then ENTER.

      Then restart the daemon to put these new changes into effect:

      • sudo systemctl restart mongod

      Next, check the service’s status to make sure that it restarted correctly:

      • sudo systemctl status mongod

      If the restart command was successful, you’ll receive output that indicates that the mongod service is active and was recently started:

      Output

● mongod.service - MongoDB Database Server
           Loaded: loaded (/lib/systemd/system/mongod.service; enabled; vendor preset: enabled)
           Active: active (running) since Tue 2020-06-09 22:06:20 UTC; 7s ago
             Docs: https://docs.mongodb.org/manual
         Main PID: 15370 (mongod)
           Memory: 170.1M
           CGroup: /system.slice/mongod.service
                   └─15370 /usr/bin/mongod --config /etc/mongod.conf

      Jun 09 22:06:20 your_host systemd[1]: Started MongoDB Database Server.

      Having verified the daemon is back up and running, you can test that the authentication setting you added works as expected.

      Step 3 — Testing Authentication Settings

      To begin testing that the authentication requirements you added in the previous step are working correctly, start by connecting without specifying any credentials to verify that your actions are indeed restricted:
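
      • mongo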

      Now that you’ve enabled authentication, none of the warnings you encountered previously will appear:

      Output

MongoDB shell version v4.4.0
      connecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb
      Implicit session: session { "id" : UUID("5d50ed96-f7e1-493a-b4da-076067b2d898") }
      MongoDB server version: 4.4.0
      >

      Confirm whether your access is restricted by running the show dbs command again:
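
      • show dbs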

      Recall from Step 1 that there are at least a few default databases on your server. However, in this case the command won’t have any output because you haven’t authenticated as a privileged user.

      Because this command doesn’t return any information, it’s safe to say the authentication setting is working as expected. You also won’t be able to create users or perform other privileged tasks without first authenticating.

      Go ahead and exit the MongoDB shell:

      Note: Instead of running the following exit command as you did previously in Step 1, an alternative way to close the shell is to just press CTRL + C.
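
      • exit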

      Next, make sure that your administrative user is able to authenticate properly by running the following mongo command to connect as this user. This command includes the -u flag, which precedes the name of the user you want to connect as. Be sure to replace AdminSammy with your own administrative user’s username. It also includes the -p flag, which will prompt you for the user’s password, and specifies admin as the authentication database where the specified username was created:

      • mongo -u AdminSammy -p --authenticationDatabase admin

      Enter the user’s password when prompted, and then you’ll be dropped into the shell. Once there, try issuing the show dbs command again:
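
      • show dbs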

      This time, because you’ve authenticated properly, the command will successfully return a list of all the databases currently on the server:

      Output

admin   0.000GB
      config  0.000GB
      local   0.000GB

      This confirms that authentication was enabled successfully.

      Conclusion

      By completing this guide, you’ve set up an administrative MongoDB user which you can employ to create and modify new users and roles, and otherwise manage your MongoDB instance. You also configured your MongoDB instance to require that users authenticate with a valid username and password before they can interact with any data.

      For more information on how to manage MongoDB users, check out the official documentation on the subject. You may also be interested in learning more about how authentication works on MongoDB.

      Also, if you plan to interact with your MongoDB instance remotely, you can follow our guide on How To Configure Remote Access for MongoDB on Ubuntu 20.04.




      How To Set Up and Secure an etcd Cluster with Ansible on Ubuntu 18.04


      The author selected the Wikimedia Foundation to receive a donation as part of the Write for DOnations program.

      Introduction

etcd is a distributed key-value store relied on by many platforms and tools, including Kubernetes, Vulcand, and Doorman. Within Kubernetes, etcd is used as a global configuration store that stores the state of the cluster. Knowing how to administer etcd is essential to administering a Kubernetes cluster. Whilst there are many managed Kubernetes offerings, also known as Kubernetes-as-a-Service, that remove this administrative burden from you, many companies still choose to run self-managed Kubernetes clusters on-premises because of the flexibility it brings.

      The first half of this article will guide you through setting up a 3-node etcd cluster on Ubuntu 18.04 servers. The second half will focus on securing the cluster using Transport Layer Security, or TLS. To run each setup in an automated manner, we will use Ansible throughout. Ansible is a configuration management tool similar to Puppet, Chef, and SaltStack; it allows us to define each setup step in a declarative manner, inside files called playbooks.

      At the end of this tutorial, you will have a secure 3-node etcd cluster running on your servers. You will also have an Ansible playbook that allows you to repeatedly and consistently recreate the same setup on a fresh set of servers.

      Prerequisites

      Before you begin this guide you’ll need the following:

      • Python, pip, and the pyOpenSSL package installed on your local machine. To learn how to install Python3, pip, and Python packages, refer to How To Install Python 3 and Set Up a Local Programming Environment on Ubuntu 18.04.

      • Three Ubuntu 18.04 servers on the same local network, with at least 2GB of RAM and root SSH access. You should also configure the servers to have the hostnames etcd1, etcd2, and etcd3. The steps outlined in this article would work on any generic server, not necessarily DigitalOcean Droplets. However, if you’d like to host your servers on DigitalOcean, you can follow the How to Create a Droplet from the DigitalOcean Control Panel guide to fulfil this requirement. Note that you must enable the Private Networking option when creating your Droplet. To enable private networking on existing Droplets, refer to How to Enable Private Networking on Droplets.

      Warning: Since the purpose of this article is to provide an introduction to setting up an etcd cluster on a private network, the three Ubuntu 18.04 servers in this setup were not tested with a firewall and are accessed as the root user. In a production setup, any node exposed to the public internet would require a firewall and a sudo user to adhere to security best practices. For more information, check out the Initial Server Setup with Ubuntu 18.04 tutorial.

      Step 1 — Configuring Ansible for the Control Node

      Ansible is a tool used to manage servers. The servers Ansible is managing are called the managed nodes, and the machine that is running Ansible is called the control node. Ansible works by using the SSH keys on the control node to gain access to the managed nodes. Once an SSH session is established, Ansible will run a set of scripts to provision and configure the managed nodes. In this step, we will test that we are able to use Ansible to connect to the managed nodes and run the hostname command.

      A typical day for a system administrator may involve managing different sets of nodes. For instance, you may use Ansible to provision some new servers, but later on use it to reconfigure another set of servers. To allow administrators to better organize the set of managed nodes, Ansible provides the concept of host inventory (or inventory for short). You can define every node that you wish to manage with Ansible inside an inventory file, and organize them into groups. Then, when running the ansible and ansible-playbook commands, you can specify which hosts or groups the command applies to.

      By default, Ansible reads the inventory file from /etc/ansible/hosts; however, we can specify a different inventory file by using the --inventory flag (or -i for short).

      To get started, create a new directory on your local machine (the control node) to house all the files for this tutorial:

      • mkdir -p $HOME/playground/etcd-ansible

      Then, enter into the directory you just created:

      • cd $HOME/playground/etcd-ansible

      Inside the directory, create and open a blank inventory file named hosts using your editor:

      • nano $HOME/playground/etcd-ansible/hosts

      Inside the hosts file, list out each of your managed nodes in the following format, replacing the public IP addresses highlighted with the actual public IP addresses of your servers:

      ~/playground/etcd-ansible/hosts

      [etcd]
      etcd1 ansible_host=etcd1_public_ip  ansible_user=root
      etcd2 ansible_host=etcd2_public_ip  ansible_user=root
      etcd3 ansible_host=etcd3_public_ip  ansible_user=root
      

      The [etcd] line defines a group called etcd. Under the group definition, we list all our managed nodes. Each line begins with an alias (e.g., etcd1), which allows us to refer to each host using an easy-to-remember name instead of a long IP address. The ansible_host and ansible_user are Ansible variables. In this case, they are used to provide Ansible with the public IP addresses and SSH usernames to use when connecting via SSH.

      To ensure Ansible is able to connect with our managed nodes, we can test for connectivity by using Ansible to run the hostname command on each of the hosts within the etcd group:

      • ansible etcd -i hosts -m command -a hostname

      Let us break down this command to learn what each part means:

      • etcd: specifies the host pattern to use to determine which hosts from the inventory are being managed with this command. Here, we are using the group name as the host pattern.
      • -i hosts: specifies the inventory file to use.
      • -m command: the functionality behind Ansible is provided by modules. The command module takes the argument passed in and executes it as a command on each of the managed nodes. This tutorial will introduce a few more Ansible modules as we progress.
      • -a hostname: the argument to pass into the module. The number and types of arguments depend on the module.

      After running the command, you will find the following output, which means Ansible is configured correctly:

      Output

etcd2 | CHANGED | rc=0 >>
      etcd2
      etcd3 | CHANGED | rc=0 >>
      etcd3
      etcd1 | CHANGED | rc=0 >>
      etcd1

      Each command that Ansible runs is called a task. Using ansible on the command line to run tasks is called running ad-hoc commands. The upside of ad-hoc commands is that they are quick and require little setup; the downside is that they run manually, and thus cannot be committed to a version control system like Git.

      A slight improvement would be to write a shell script and run our commands using Ansible’s script module. This would allow us to record the configuration steps we took into version control. However, shell scripts are imperative, which means we are responsible for figuring out the commands to run (the “how”s) to configure the system to the desired state. Ansible, on the other hand, advocates for a declarative approach, where we define “what” the desired state of our server should be inside configuration files, and Ansible is responsible for getting the server to that desired state.

      The declarative approach is preferred because the intent of the configuration file is immediately conveyed, meaning it’s easier to understand and maintain. It also places the onus of handling edge cases on Ansible instead of the administrator, saving us a lot of work.

      Now that you have configured the Ansible control node to communicate with the managed nodes, in the next step, we will introduce you to Ansible playbooks, which allow you to specify tasks in a declarative way.

      Step 2 — Getting the Hostnames of Managed Nodes with Ansible Playbooks

      In this step, we will replicate what was done in Step 1—printing out the hostnames of the managed nodes—but instead of running ad-hoc tasks, we will define each task declaratively as an Ansible playbook and run it. The purpose of this step is to demonstrate how Ansible playbooks work; we will carry out much more substantial tasks with playbooks in later steps.

      Inside your project directory, create a new file named playbook.yaml using your editor:

      • nano $HOME/playground/etcd-ansible/playbook.yaml

      Inside playbook.yaml, add the following lines:

      ~/playground/etcd-ansible/playbook.yaml

      - hosts: etcd
        tasks:
          - name: "Retrieve hostname"
            command: hostname
            register: output
          - name: "Print hostname"
            debug: var=output.stdout_lines
      

      Close and save the playbook.yaml file by pressing CTRL+X followed by Y.

      The playbook contains a list of plays; each play contains a list of tasks that should be run on all hosts matching the host pattern specified by the hosts key. In this playbook, we have one play that contains two tasks. The first task runs the hostname command using the command module and registers the output to a variable named output. In the second task, we use the debug module to print out the stdout_lines property of the output variable.

      We can now run this playbook using the ansible-playbook command:

      • ansible-playbook -i hosts playbook.yaml

      You will find the following output, which means your playbook is working correctly:

      Output

PLAY [etcd] ***********************************************************************

      TASK [Gathering Facts] ************************************************************
      ok: [etcd2]
      ok: [etcd3]
      ok: [etcd1]

      TASK [Retrieve hostname] **********************************************************
      changed: [etcd2]
      changed: [etcd3]
      changed: [etcd1]

      TASK [Print hostname] *************************************************************
      ok: [etcd1] => {
          "output.stdout_lines": [
              "etcd1"
          ]
      }
      ok: [etcd2] => {
          "output.stdout_lines": [
              "etcd2"
          ]
      }
      ok: [etcd3] => {
          "output.stdout_lines": [
              "etcd3"
          ]
      }

      PLAY RECAP ************************************************************************
      etcd1                      : ok=3    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
      etcd2                      : ok=3    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
      etcd3                      : ok=3    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

      Note: ansible-playbook sometimes uses cowsay as a playful way to print the headings. If you find a lot of ASCII-art cows printed on your terminal, now you know why. To disable this feature, set the ANSIBLE_NOCOWS environment variable to 1 prior to running ansible-playbook by running export ANSIBLE_NOCOWS=1 in your shell.

      In this step, we’ve moved from running imperative ad-hoc tasks to running declarative playbooks. In the next step, we will replace these two demo tasks with tasks that will set up our etcd cluster.

      Step 3 — Installing etcd on the Managed Nodes

      In this step, we will show you the commands to install etcd manually and demonstrate how to translate these same commands into tasks inside our Ansible playbook.

      etcd and its client etcdctl are available as binaries, which we’ll download, extract, and move to a directory that’s part of the PATH environment variable. When configured manually, these are the steps we would take on each of the managed nodes:

      • mkdir -p /opt/etcd/bin
      • cd /opt/etcd/bin
      • wget -qO- https://storage.googleapis.com/etcd/v3.3.13/etcd-v3.3.13-linux-amd64.tar.gz | tar --extract --gzip --strip-components=1
      • echo 'export PATH="$PATH:/opt/etcd/bin"' >> ~/.profile
• echo 'export ETCDCTL_API=3' >> ~/.profile

The first three commands download and extract the binaries into the /opt/etcd/bin/ directory, and the fourth appends that directory to the PATH environment variable so the binaries can be run from anywhere. By default, the etcdctl client will use API v2 to communicate with the etcd server. Since we are running etcd v3.x, the last command sets the ETCDCTL_API environment variable to 3.

Note: Here, we are using etcd v3.3.13 built for a machine with processors that use the AMD64 instruction set. You can find binaries for other systems and other versions on the official GitHub Releases page.

      To replicate the same steps in a standardized format, we can add tasks to our playbook. Open the playbook.yaml playbook file in your editor:

      • nano $HOME/playground/etcd-ansible/playbook.yaml

      Replace the entirety of the playbook.yaml file with the following contents:

      ~/playground/etcd-ansible/playbook.yaml

      - hosts: etcd
        become: True
        tasks:
          - name: "Create directory for etcd binaries"
            file:
              path: /opt/etcd/bin
              state: directory
              owner: root
              group: root
              mode: 0700
          - name: "Download the tarball into the /tmp directory"
            get_url:
              url: https://storage.googleapis.com/etcd/v3.3.13/etcd-v3.3.13-linux-amd64.tar.gz
              dest: /tmp/etcd.tar.gz
              owner: root
              group: root
              mode: 0600
              force: True
          - name: "Extract the contents of the tarball"
            unarchive:
              src: /tmp/etcd.tar.gz
              dest: /opt/etcd/bin/
              owner: root
              group: root
              mode: 0600
              extra_opts:
                - --strip-components=1
              decrypt: True
              remote_src: True
          - name: "Set permissions for etcd"
            file:
              path: /opt/etcd/bin/etcd
              state: file
              owner: root
              group: root
              mode: 0700
          - name: "Set permissions for etcdctl"
            file:
              path: /opt/etcd/bin/etcdctl
              state: file
              owner: root
              group: root
              mode: 0700
          - name: "Add /opt/etcd/bin/ to the $PATH environment variable"
            lineinfile:
              path: /etc/profile
              line: export PATH="$PATH:/opt/etcd/bin"
              state: present
              create: True
              insertafter: EOF
          - name: "Set the ETCDCTL_API environment variable to 3"
            lineinfile:
              path: /etc/profile
              line: export ETCDCTL_API=3
              state: present
              create: True
              insertafter: EOF
      

      Each task uses a module; for this set of tasks, we are making use of the following modules:

      • file: to create the /opt/etcd/bin directory, and to later set the file permissions for the etcd and etcdctl binaries.
      • get_url: to download the gzipped tarball onto the managed nodes.
      • unarchive: to extract and unpack the etcd and etcdctl binaries from the gzipped tarball.
• lineinfile: to add an entry into the /etc/profile file.

      To apply these changes, close and save the playbook.yaml file by pressing CTRL+X followed by Y. Then, on the terminal, run the same ansible-playbook command again:

      • ansible-playbook -i hosts playbook.yaml

      The PLAY RECAP section of the output will show only ok and changed:

      Output

...

      PLAY RECAP ************************************************************************
      etcd1                      : ok=8    changed=7    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
      etcd2                      : ok=8    changed=7    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
      etcd3                      : ok=8    changed=7    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

      To confirm a correct installation of etcd, manually SSH into one of the managed nodes and run etcd and etcdctl:
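
      • ssh root@etcd1_public_ip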

etcd1_public_ip is the public IP address of the server named etcd1. Once you have gained SSH access, run etcd --version to print out the version of etcd installed:
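
      • etcd --version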

      You will find output similar to what’s shown in the following, which means the etcd binary is successfully installed:

      Output

etcd Version: 3.3.13
      Git SHA: 98d3084
      Go Version: go1.10.8
      Go OS/Arch: linux/amd64

      To confirm etcdctl is successfully installed, run etcdctl version:
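
      • etcdctl version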

      You will find output similar to the following:

      Output

etcdctl version: 3.3.13
      API version: 3.3

      Note that the output says API version: 3.3, which also confirms that our ETCDCTL_API environment variable was set correctly.

      Exit out of the etcd1 server to return to your local environment.
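
      • exit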

      We have now successfully installed etcd and etcdctl on all of our managed nodes. In the next step, we will add more tasks to our play to run etcd as a background service.

      Step 4 — Creating a Unit File for etcd

      The quickest way to run etcd with Ansible may appear to be to use the command module to run /opt/etcd/bin/etcd. However, this will not work because it will make etcd run as a foreground process. Using the command module will cause Ansible to hang as it waits for the etcd command to return, which it never will. So in this step, we are going to update our playbook to run our etcd binary as a background service instead.

      Ubuntu 18.04 uses systemd as its init system, which means we can create new services by writing unit files and placing them inside the /etc/systemd/system/ directory.

      First, inside our project directory, create a new directory named files/:
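
      • mkdir files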

      Then, using your editor, create a new file named etcd.service within that directory:
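
      • nano files/etcd.service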

      Next, copy the following code block into the files/etcd.service file:

      ~/playground/etcd-ansible/files/etcd.service

      [Unit]
      Description=etcd distributed reliable key-value store
      
      [Service]
      Type=notify
      ExecStart=/opt/etcd/bin/etcd
      Restart=always
      

      This unit file defines a service that runs the executable at /opt/etcd/bin/etcd, notifies systemd when it has finished initializing, and always restarts if it ever exits.

      Note: If you’d like to understand more about systemd and unit files, or want to tailor the unit file to your needs, read the Understanding Systemd Units and Unit Files guide.

      Close and save the files/etcd.service file by pressing CTRL+X followed by Y.

Next, we need to add a task inside our playbook that will copy the local files/etcd.service file to /etc/systemd/system/etcd.service on every managed node. We can do this using the copy module.

      Open up your playbook:

      • nano $HOME/playground/etcd-ansible/playbook.yaml

      Append the following highlighted task to the end of our existing tasks:

      ~/playground/etcd-ansible/playbook.yaml

      - hosts: etcd
        become: True
        tasks:
          ...
          - name: "Set the ETCDCTL_API environment variable to 3"
            lineinfile:
              path: /etc/profile
              line: export ETCDCTL_API=3
              state: present
              create: True
              insertafter: EOF
          - name: "Create a etcd service"
            copy:
              src: files/etcd.service
              remote_src: False
              dest: /etc/systemd/system/etcd.service
              owner: root
              group: root
              mode: 0644
      

By copying the unit file into /etc/systemd/system/, the service is now defined.

      Save and exit the playbook.

      Run the same ansible-playbook command again to apply the new changes:

      • ansible-playbook -i hosts playbook.yaml

      To confirm the changes have been applied, first SSH into one of the managed nodes:
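
      • ssh root@etcd1_public_ip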

      Then, run systemctl status etcd to query systemd about the status of the etcd service:
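
      • systemctl status etcd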

      You will find the following output, which states that the service is loaded:

      Output

● etcd.service - etcd distributed reliable key-value store
         Loaded: loaded (/etc/systemd/system/etcd.service; static; vendor preset: enabled)
         Active: inactive (dead)
      ...

      Note: The last line (Active: inactive (dead)) of the output states that the service is inactive, which means it would not be automatically run when the system starts. This is expected and not an error.

      Press q to return to the shell, and then run exit to exit out of the managed node and back to your local shell:
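
      • exit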

      In this step, we updated our playbook to run the etcd binary as a systemd service. In the next step, we will continue to set up etcd by providing it space to store its data.

      Step 5 — Configuring the Data Directory

      etcd is a key-value data store, which means we must provide it with space to store its data. In this step, we are going to update our playbook to define a dedicated data directory for etcd to use.

      Open up your playbook:

      • nano $HOME/playground/etcd-ansible/playbook.yaml

      Append the following task to the end of the list of tasks:

      ~/playground/etcd-ansible/playbook.yaml

      - hosts: etcd
        become: True
        tasks:
          ...
          - name: "Create a etcd service"
            copy:
              src: files/etcd.service
              remote_src: False
              dest: /etc/systemd/system/etcd.service
              owner: root
              group: root
              mode: 0644
          - name: "Create a data directory"
            file:
              path: /var/lib/etcd/{{ inventory_hostname }}.etcd
              state: directory
              owner: root
              group: root
              mode: 0755
      

      Here, we are using /var/lib/etcd/hostname.etcd as the data directory, where hostname is the hostname of the current managed node. inventory_hostname is a variable that represents the hostname of the current managed node; its value is populated by Ansible automatically. The curly-braces syntax (i.e., {{ inventory_hostname }}) is used for variable substitution, supported by the Jinja2 template engine, which is the default templating engine for Ansible.

      Close the text editor and save the file.

      Next, we need to instruct etcd to use this data directory. We do this by passing in the data-dir parameter to etcd. To set etcd parameters, we can use a combination of environment variables, command-line flags, and configuration files. For this tutorial, we will use a configuration file, as it is much neater to isolate all configurations into a file, rather than have configuration littered across our playbook.

      In your project directory, create a new directory named templates/:
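
      • mkdir templates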

      Then, using your editor, create a new file named etcd.conf.yaml.j2 within the directory:

      • nano templates/etcd.conf.yaml.j2

      Next, copy the following line and paste it into the file:

      ~/playground/etcd-ansible/templates/etcd.conf.yaml.j2

      data-dir: /var/lib/etcd/{{ inventory_hostname }}.etcd
      

      This file uses the same Jinja2 variable substitution syntax as our playbook. To substitute the variables and upload the result to each managed host, we can use the template module. It works in a similar way to copy, except it will perform variable substitution prior to upload.

      Exit from etcd.conf.yaml.j2, then open up your playbook:

      • nano $HOME/playground/etcd-ansible/playbook.yaml

      Append the following tasks to the list of tasks to create a directory and upload the templated configuration file into it:

      ~/playground/etcd-ansible/playbook.yaml

      - hosts: etcd
        become: True
        tasks:
          ...
          - name: "Create a data directory"
            file:
              ...
              mode: 0755
          - name: "Create directory for etcd configuration"
            file:
              path: /etc/etcd
              state: directory
              owner: root
              group: root
              mode: 0755
          - name: "Create configuration file for etcd"
            template:
              src: templates/etcd.conf.yaml.j2
              dest: /etc/etcd/etcd.conf.yaml
              owner: root
              group: root
              mode: 0600
      

      Save and close this file.

      Because we’ve made this change, we need to update our service’s unit file to pass it the location of our configuration file (i.e., /etc/etcd/etcd.conf.yaml).

      Open the etcd service file on your local machine:
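
      • nano files/etcd.service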

      Update the files/etcd.service file by adding the --config-file flag highlighted in the following:

      ~/playground/etcd-ansible/files/etcd.service

      [Unit]
      Description=etcd distributed reliable key-value store
      
      [Service]
      Type=notify
      ExecStart=/opt/etcd/bin/etcd --config-file /etc/etcd/etcd.conf.yaml
      Restart=always
      

      Save and close this file.

      In this step, we used our playbook to provide a data directory for etcd to store its data. In the next step, we will add a couple more tasks to restart the etcd service and have it run on startup.

      Step 6 — Enabling and Starting the etcd Service

      Whenever we make changes to the unit file of a service, we need to restart the service to have it take effect. We can do this by running the systemctl restart etcd command. Furthermore, to make the etcd service start automatically on system startup, we need to run systemctl enable etcd. In this step, we will run those two commands using the playbook.

      To run commands, we can use the command module:

      • nano $HOME/playground/etcd-ansible/playbook.yaml

      Append the following tasks to the end of the task list:

      ~/playground/etcd-ansible/playbook.yaml

      - hosts: etcd
        become: True
        tasks:
          ...
          - name: "Create configuration file for etcd"
            template:
              ...
              mode: 0600
          - name: "Enable the etcd service"
            command: systemctl enable etcd
          - name: "Start the etcd service"
            command: systemctl restart etcd
      

      Save and close the file.

      Run ansible-playbook -i hosts playbook.yaml once more:

      • ansible-playbook -i hosts playbook.yaml

      To check that the etcd service is now restarted and enabled, SSH into one of the managed nodes:
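
      • ssh root@etcd1_public_ip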

      Then, run systemctl status etcd to check the status of the etcd service:
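
      • systemctl status etcd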

      You will find enabled and active (running) as highlighted in the following; this means the changes we made in our playbook have taken effect:

      Output

● etcd.service - etcd distributed reliable key-value store
         Loaded: loaded (/etc/systemd/system/etcd.service; enabled; vendor preset: enabled)
         Active: active (running)
       Main PID: 19085 (etcd)
          Tasks: 11 (limit: 2362)

In this step, we used the command module to run systemctl commands that restart and enable the etcd service on our managed nodes. Now that we have set up an etcd installation, we will, in the next step, test out its functionality by carrying out some basic create, read, update, and delete (CRUD) operations.

      Step 7 — Testing etcd

      Although we have a working etcd installation, it is insecure and not yet ready for production use. But before we secure our etcd setup in later steps, let’s first understand what etcd can do in terms of functionality. In this step, we are going to manually send requests to etcd to add, retrieve, update, and delete data from it.

      By default, etcd exposes an API that listens on port 2379 for client communication. This means we can send raw API requests to etcd using an HTTP client. However, it’s quicker to use the official etcd client etcdctl, which allows you to create/update, retrieve, and delete key-value pairs using the put, get, and del subcommands, respectively.

      Make sure you’re still inside the etcd1 managed node, and run the following etcdctl commands to confirm your etcd installation is working.

      First, create a new entry using the put subcommand.

      The put subcommand has the following syntax:

      etcdctl put key value
      

      On etcd1, run the following command:
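
      • etcdctl put foo "bar"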

      The command we just ran instructs etcd to write the value "bar" to the key foo in the store.

      You will then find OK printed in the output, which indicates the data persisted:

      Output

      OK

      We can then retrieve this entry using the get subcommand, which has the syntax etcdctl get key:
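
      • etcdctl get foo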

      You will find this output, which shows the key on the first line and the value you inserted earlier on the second line:

      Output

      foo bar

      We can delete the entry using the del subcommand, which has the syntax etcdctl del key:
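
      • etcdctl del foo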

      You will find the following output, which indicates the number of entries deleted:

      Output

      1

      Now, let’s run the get subcommand once more in an attempt to retrieve a deleted key-value pair:
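
      • etcdctl get foo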

You will not receive any output, which means etcdctl is unable to retrieve the key-value pair. This confirms that after the entry is deleted, it can no longer be retrieved.

      Now that you’ve tested the basic operations of etcd and etcdctl, let’s exit out of our managed node and back to your local environment:
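
      • exit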

      In this step, we used the etcdctl client to send requests to etcd. At this point, we are running three separate instances of etcd, each acting independently from each other. However, etcd is designed as a distributed key-value store, which means multiple etcd instances can group up to form a single cluster; each instance then becomes a member of the cluster. After forming a cluster, you would be able to retrieve a key-value pair that was inserted from a different member of the cluster. In the next step, we will use our playbook to transform our 3 single-node clusters into a single 3-node cluster.

      Step 8 — Forming a Cluster Using Static Discovery

      To create one 3-node cluster instead of three 1-node clusters, we must configure these etcd installations to communicate with each other. This means each one must know the IP addresses of the others. This process is called discovery. Discovery can be done using either static configuration or dynamic service discovery. In this step, we will discuss the difference between the two, as well as update our playbook to set up an etcd cluster using static discovery.

      Discovery by static configuration is the method that requires the least setup; this is where the endpoints of each member are passed into the etcd command before it is executed. To use static configuration, the following conditions must be met prior to the initialization of the cluster:

• the number of members is known
      • the endpoints of each member are known
      • the IP addresses for all endpoints are static

      If these conditions cannot be met, then you can use a dynamic discovery service. With dynamic service discovery, all instances would register with the discovery service, which allows each member to retrieve information about the location of other members.

      Since we know we want a 3-node etcd cluster, and all our servers have static IP addresses, we will use static discovery. To initiate our cluster using static discovery, we must add several parameters to our configuration file. Use an editor to open up the templates/etcd.conf.yaml.j2 template file:

      • nano templates/etcd.conf.yaml.j2

      Then, add the following highlighted lines:

      ~/playground/etcd-ansible/templates/etcd.conf.yaml.j2

      data-dir: /var/lib/etcd/{{ inventory_hostname }}.etcd
      name: {{ inventory_hostname }}
      initial-advertise-peer-urls: http://{{ hostvars[inventory_hostname]['ansible_facts']['eth1']['ipv4']['address'] }}:2380
      listen-peer-urls: http://{{ hostvars[inventory_hostname]['ansible_facts']['eth1']['ipv4']['address'] }}:2380,http://127.0.0.1:2380
      advertise-client-urls: http://{{ hostvars[inventory_hostname]['ansible_facts']['eth1']['ipv4']['address'] }}:2379
      listen-client-urls: http://{{ hostvars[inventory_hostname]['ansible_facts']['eth1']['ipv4']['address'] }}:2379,http://127.0.0.1:2379
      initial-cluster-state: new
      initial-cluster: {% for host in groups['etcd'] %}{{ hostvars[host]['ansible_facts']['hostname'] }}=http://{{ hostvars[host]['ansible_facts']['eth1']['ipv4']['address'] }}:2380{% if not loop.last %},{% endif %}{% endfor %}
      

      Close and save the templates/etcd.conf.yaml.j2 file by pressing CTRL+X followed by Y.

      Here’s a brief explanation of each parameter:

      • name – a human-readable name for the member. By default, etcd uses a unique, randomly-generated ID to identify each member; however, a human-readable name allows us to reference it more easily inside configuration files and on the command line. Here, we will use the hostnames as the member names (i.e., etcd1, etcd2, and etcd3).
      • initial-advertise-peer-urls – a list of IP address/port combinations that other members can use to communicate with this member. In addition to the API port (2379), etcd also exposes port 2380 for peer communication between etcd members, which allows them to send messages to each other and exchange data. Note that these URLs must be reachable by its peers (and not be a local IP address).
      • listen-peer-urls – a list of IP address/port combinations where the current member will listen for communication from other members. This must include all the URLs from the --initial-advertise-peer-urls flag, but also local URLs like 127.0.0.1:2380. The destination IP address/port of incoming peer messages must match one of the URLs listed here.
• advertise-client-urls – a list of IP address/port combinations that clients should use to communicate with this member. These URLs must be reachable by the client (and not be a local address). If the client is accessing the cluster over the public internet, this must be a public IP address.
      • listen-client-urls – a list of IP address/port combinations where the current member will listen for communication from clients. This must include all the URLs from the --advertise-client-urls flag, but also local URLs like 127.0.0.1:2379. The destination IP address/port of incoming client messages must match one of the URLs listed here.
      • initial-cluster – a list of endpoints for each member of the cluster. Each endpoint must match one of the corresponding member’s initial-advertise-peer-urls URLs.
      • initial-cluster-state – either new or existing.

      To ensure consistency, etcd can only make decisions when a majority of the nodes are healthy. This is known as establishing quorum. In other words, in a three-member cluster, quorum is reached if two or more of the members are healthy.

      If the initial-cluster-state parameter is set to new, etcd will know that this is a new cluster being bootstrapped, and will allow members to start in parallel, without waiting for quorum to be reached. More concretely, after the first member is started, it will not have quorum because one third (33.33%) is less than or equal to 50%. Normally, etcd will halt and refuse to commit any more actions and the cluster will never be formed. However, with initial-cluster-state set to new, it will ignore the initial lack of quorum.

      If set to existing, the member will try to join an existing cluster, and expects quorum to already be established.

      Note: You can find more details about all supported configuration flags in the Configuration section of etcd’s documentation.

In the updated templates/etcd.conf.yaml.j2 template file, there are a few instances of hostvars. When Ansible runs, it will collect variables from a variety of sources. We have already made use of the inventory_hostname variable, but there are many more available. These variables are available under hostvars[inventory_hostname]['ansible_facts']. Here, we are extracting the private IP address of each node and using it to construct our parameter values.

Note: Because we enabled the Private Networking option when we created our servers, each server would have three IP addresses associated with it:

      • A loopback IP address – an address that is only valid inside the same machine. It is used for the machine to refer to itself, e.g., 127.0.0.1
      • A public IP address – an address that is routable over the public internet, e.g., 178.128.169.51
      • A private IP address – an address that is routable only within the private network; in the case of DigitalOcean Droplets, there’s a private network within each datacenter, e.g., 10.131.82.225

Each of these IP addresses is associated with a different network interface—the loopback address is associated with the lo interface, the public IP address with the eth0 interface, and the private IP address with the eth1 interface. We are using the eth1 interface so that all traffic stays within the private network, without ever reaching the internet.

      Understanding of network interfaces is not required for this article, but if you’d like to learn more, An Introduction to Networking Terminology, Interfaces, and Protocols is a great place to start.

      The {% %} Jinja2 syntax defines the for loop structure that iterates through every node in the etcd group to build up the initial-cluster string into a format required by etcd.
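
      For illustration, assuming three members with hypothetical private IP addresses 10.131.0.1, 10.131.0.2, and 10.131.0.3, the rendered initial-cluster line would look similar to this:

      initial-cluster: etcd1=http://10.131.0.1:2380,etcd2=http://10.131.0.2:2380,etcd3=http://10.131.0.3:2380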

      To form the new three-member cluster, you must first stop the etcd service and clear the data directory before launching the cluster. To do this, use an editor to open up the playbook.yaml file on your local machine:

      • nano $HOME/playground/etcd-ansible/playbook.yaml

      Then, before the "Create a data directory" task, add a task to stop the etcd service:

      ~/playground/etcd-ansible/playbook.yaml

      - hosts: etcd
        become: True
        tasks:
          ...
              group: root
              mode: 0644
          - name: "Stop the etcd service"
            command: systemctl stop etcd
          - name: "Create a data directory"
            file:
          ...
      

      Next, update the "Create a data directory" task to first delete the data directory and recreate it:

      ~/playground/etcd-ansible/playbook.yaml

      - hosts: etcd
        become: True
        tasks:
          ...
          - name: "Stop the etcd service"
            command: systemctl stop etcd
          - name: "Create a data directory"
            file:
              path: /var/lib/etcd/{{ inventory_hostname }}.etcd
              state: "{{ item }}"
              owner: root
              group: root
              mode: 0755
            with_items:
              - absent
              - directory
          - name: "Create directory for etcd configuration"
            file:
          ...
      

      The with_items property defines a list of strings that this task will iterate over. It is equivalent to repeating the same task twice but with different values for the state property. Here, we are iterating over the list with items absent and directory, which ensures that the data directory is deleted first and then re-created after.

      Close and save the playbook.yaml file by pressing CTRL+X followed by Y. Then, run ansible-playbook again. Ansible will now create a single, 3-member etcd cluster:

      • ansible-playbook -i hosts playbook.yaml

      You can check this by SSH-ing into any etcd member node:
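
      • ssh root@etcd1_public_ip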

      Then run etcdctl endpoint health --cluster:

      • etcdctl endpoint health --cluster

      This will list out the health of each member of the cluster:

      Output

http://etcd2_private_ip:2379 is healthy: successfully committed proposal: took = 2.517267ms
      http://etcd1_private_ip:2379 is healthy: successfully committed proposal: took = 2.153612ms
      http://etcd3_private_ip:2379 is healthy: successfully committed proposal: took = 2.639277ms

      We have now successfully created a 3-node etcd cluster. We can confirm this by adding an entry to etcd on one member node, and retrieving it on another member node. On one of the member nodes, run etcdctl put:
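
      • etcdctl put foo "bar"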

      Then, use a new terminal to SSH into a different member node:
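
      • ssh root@etcd2_public_ip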

      Next, attempt to retrieve the same entry using the key:
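
      • etcdctl get foo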

      You will be able to retrieve the entry, which proves that the cluster is working:

      Output

      foo bar

      Lastly, exit out of each of the managed nodes and back to your local machine:
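
      • exit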

In this step, we provisioned a new 3-node cluster. At the moment, communication between etcd members and their peers and clients is conducted over HTTP. This means the communication is unencrypted, and any party who can intercept the traffic can read the messages. This is not a big issue if the etcd cluster and clients are all deployed within a private network or virtual private network (VPN) which you fully control. However, if any of the traffic needs to travel through a shared network (private or public), then you should ensure this traffic is encrypted. Furthermore, a mechanism needs to be put in place for a client or peer to verify the authenticity of the server.

      In the next step, we will look at how to secure client-to-server as well as peer communication using TLS.

      Step 9 — Obtaining the Private IP Addresses of Managed Nodes

      To encrypt messages between member nodes, etcd uses Hypertext Transfer Protocol Secure, or HTTPS, which is a layer on top of the Transport Layer Security, or TLS, protocol. TLS uses a system of private keys, certificates, and trusted entities called Certificate Authorities (CAs) to authenticate with, and send encrypted messages to, each other.

      In this tutorial, each member node needs to generate a certificate to identify itself, and have this certificate signed by a CA. We will configure all member nodes to trust this CA, and thus also trust any certificates it signs. This allows member nodes to mutually authenticate with each other.

The certificate that a member node generates must allow other member nodes to identify it. All certificates include the Common Name (CN) of the entity they are associated with. This is often used as the identity of the entity. However, when verifying a certificate, client implementations may compare the information they have collected about the entity against what is given in the certificate. For example, when a client downloads a TLS certificate with the subject of CN=foo.bar.com, but the client is actually connecting to the server using an IP address (e.g., 167.71.129.110), then there’s a mismatch and the client may not trust the certificate. By specifying a subject alternative name (SAN) in the certificate, we inform the verifier that both names belong to the same entity.

      Because our etcd members are peering with each other using their private IP addresses, when we define our certificates, we’ll need to provide these private IP addresses as the subject alternative names.

      To find out the private IP address of a managed node, SSH into it:
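
      • ssh root@etcd1_public_ip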

      Then run the following command:

      • ip -f inet addr show eth1

      You’ll find output similar to the following lines:

      Output

3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
          inet 10.131.255.176/16 brd 10.131.255.255 scope global eth1
             valid_lft forever preferred_lft forever

      In our example output, 10.131.255.176 is the private IP address of the managed node, and the only information we are interested in. To filter out everything else apart from the private IP, we can pipe the output of the ip command to the sed utility, which is used to filter and transform text.

      • ip -f inet addr show eth1 | sed -En -e 's/.*inet ([0-9.]+).*/\1/p'

      Now, the only output is the private IP address itself:

      Output

      10.131.255.176

      Once you’re satisfied that the preceding command works, exit out of the managed node:
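
      • exit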

      To incorporate the preceding commands into our playbook, first open up the playbook.yaml file:

      • nano $HOME/playground/etcd-ansible/playbook.yaml

      Then, add a new play with a single task before our existing play:

      ~/playground/etcd-ansible/playbook.yaml

      ...
      - hosts: etcd
        tasks:
          - shell: ip -f inet addr show eth1 | sed -En -e 's/.*inet ([0-9.]+).*/\1/p'
            register: privateIP
      - hosts: etcd
        tasks:
      ...
      

      The task uses the shell module to run the ip and sed commands, which fetch the private IP address of the managed node. It then registers the return value of the shell command inside a variable named privateIP, which we will use later.

      In this step, we added a task to the playbook to obtain the private IP address of the managed nodes. In the next step, we are going to use this information to generate certificates for each member node, and have these certificates signed by a Certificate Authority (CA).

      Step 10 — Generating etcd Members’ Private Keys and CSRs

      In order for a member node to receive encrypted traffic, the sender must use the member node’s public key to encrypt the data, and the member node must use its private key to decrypt the ciphertext and retrieve the original data. The public key is packaged into a certificate and signed by a CA to ensure that it is genuine.

      Therefore, we will need to generate a private key and certificate signing request (CSR) for each etcd member node. To make it easier for us, we will generate all key pairs and sign all certificates locally, on the control node, and then copy the relevant files to the managed hosts.

      First, create a directory called artifacts/, where we’ll place the files (keys and certificates) generated during the process. Open the playbook.yaml file with an editor:

      • nano $HOME/playground/etcd-ansible/playbook.yaml

      In it, use the file module to create the artifacts/ directory:

      ~/playground/etcd-ansible/playbook.yaml

      ...
          - shell: ip -f inet addr show eth1 | sed -En -e 's/.*inet ([0-9.]+).*/\1/p'
            register: privateIP
      - hosts: localhost
        gather_facts: False
        become: False
        tasks:
          - name: "Create ./artifacts directory to house keys and certificates"
            file:
              path: ./artifacts
              state: directory
      - hosts: etcd
        tasks:
      ...
      

      Next, add another task to the end of the play to generate the private key:

      ~/playground/etcd-ansible/playbook.yaml

      ...
      - hosts: localhost
        gather_facts: False
        become: False
        tasks:
              ...
          - name: "Generate private key for each member"
            openssl_privatekey:
              path: ./artifacts/{{item}}.key
              type: RSA
              size: 4096
              state: present
              force: True
            with_items: "{{ groups['etcd'] }}"
      - hosts: etcd
        tasks:
      ...
      

      Creating private keys and CSRs can be done using the openssl_privatekey and openssl_csr modules, respectively.

      The force: True attribute ensures that the private key is regenerated each time, even if it exists already.

      Similarly, append the following new task to the same play to generate the CSRs for each member, using the openssl_csr module:

      ~/playground/etcd-ansible/playbook.yaml

      ...
      - hosts: localhost
        gather_facts: False
        become: False
        tasks:
          ...
          - name: "Generate private key for each member"
            openssl_privatekey:
              ...
            with_items: "{{ groups['etcd'] }}"
          - name: "Generate CSR for each member"
            openssl_csr:
              path: ./artifacts/{{item}}.csr
              privatekey_path: ./artifacts/{{item}}.key
              common_name: "{{item}}"
              key_usage:
                - digitalSignature
              extended_key_usage:
                - serverAuth
              subject_alt_name:
                - IP:{{ hostvars[item]['privateIP']['stdout']}}
                - IP:127.0.0.1
              force: True
            with_items: "{{ groups['etcd'] }}"
      

      We are specifying that this certificate can be involved in a digital signature mechanism for the purpose of server authentication. This certificate is associated with the hostname (e.g., etcd1), but the verifier should also treat each node’s private and local loopback IP addresses as alternative names. Note that we are using the privateIP variable that we registered in the previous play.

      Close and save the playbook.yaml file by pressing CTRL+X followed by Y. Then, run our playbook again:

      • ansible-playbook -i hosts playbook.yaml

      We will now find a new directory called artifacts within our project directory; use ls to list out its contents:
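
      • ls artifacts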

      You will find the private keys and CSRs for each of the etcd members:

      Output

      etcd1.csr etcd1.key etcd2.csr etcd2.key etcd3.csr etcd3.key

      In this step, we used several Ansible modules to generate private keys and public key certificates for each of the member nodes. In the next step, we will look at how to sign a certificate signing request (CSR).

      Step 11 — Generating CA Certificates

      Within an etcd cluster, member nodes encrypt messages using the receiver’s public key. To ensure the public key is genuine, the receiver packages the public key into a certificate signing request (CSR) and has a trusted entity (i.e., the CA) sign the CSR. Since we control all the member nodes and the CAs they trust, we don’t need an external CA and can instead act as our own. In this step, we will generate a private key and a self-signed certificate to function as the CA.

      First, open the playbook.yaml file with your editor:

      • nano $HOME/playground/etcd-ansible/playbook.yaml

      Then, similar to the previous step, append a task to the localhost play to generate a private key for the CA:

      ~/playground/etcd-ansible/playbook.yaml

      - hosts: localhost
        ...
        tasks:
          ...
        - name: "Generate CSR for each member"
          ...
          with_items: "{{ groups['etcd'] }}"
          - name: "Generate private key for CA"
            openssl_privatekey:
              path: ./artifacts/ca.key
              type: RSA
              size: 4096
              state: present
              force: True
      - hosts: etcd
        become: True
        tasks:
          - name: "Create directory for etcd binaries"
      ...
      

      Next, use the openssl_csr module to generate a new CSR. This is similar to the previous step, but in this CSR, we are adding the basic constraint and key usage extension to indicate that this certificate can be used as a CA certificate:

      ~/playground/etcd-ansible/playbook.yaml

      - hosts: localhost
        ...
        tasks:
          ...
          - name: "Generate private key for CA"
            openssl_privatekey:
              path: ./artifacts/ca.key
              type: RSA
              size: 4096
              state: present
              force: True
          - name: "Generate CSR for CA"
            openssl_csr:
              path: ./artifacts/ca.csr
              privatekey_path: ./artifacts/ca.key
              common_name: ca
              organization_name: "Etcd CA"
              basic_constraints:
                - CA:TRUE
                - pathlen:1
              basic_constraints_critical: True
              key_usage:
                - keyCertSign
                - digitalSignature
              force: True
      - hosts: etcd
        become: True
        tasks:
          - name: "Create directory for etcd binaries"
      ...
      

      Lastly, use the openssl_certificate module to self-sign the CSR:

      ~/playground/etcd-ansible/playbook.yaml

      - hosts: localhost
        ...
        tasks:
          ...
          - name: "Generate CSR for CA"
            openssl_csr:
              path: ./artifacts/ca.csr
              privatekey_path: ./artifacts/ca.key
              common_name: ca
              organization_name: "Etcd CA"
              basic_constraints:
                - CA:TRUE
                - pathlen:1
              basic_constraints_critical: True
              key_usage:
                - keyCertSign
                - digitalSignature
              force: True
          - name: "Generate self-signed CA certificate"
            openssl_certificate:
              path: ./artifacts/ca.crt
              privatekey_path: ./artifacts/ca.key
              csr_path: ./artifacts/ca.csr
              provider: selfsigned
              force: True
      - hosts: etcd
        become: True
        tasks:
          - name: "Create directory for etcd binaries"
      ...
      

      Close and save the playbook.yaml file by pressing CTRL+X followed by Y. Then, run our playbook again to apply the changes:

      • ansible-playbook -i hosts playbook.yaml

      You can also run ls to check the contents of the artifacts/ directory:
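
      • ls artifacts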

      You will now find the freshly generated CA certificate (ca.crt):

      Output

      ca.crt ca.csr ca.key etcd1.csr etcd1.key etcd2.csr etcd2.key etcd3.csr etcd3.key

      In this step, we generated a private key and a self-signed certificate for the CA. In the next step, we will use the CA certificate to sign each member’s CSR.

      Step 12 — Signing the etcd Members’ CSRs

      In this step, we are going to sign each member node’s CSR. This will be similar to how we used the openssl_certificate module to self-sign the CA certificate, but instead of using the selfsigned provider, we will use the ownca provider, which allows us to sign using our own CA certificate.

      Open up your playbook:

      • nano $HOME/playground/etcd-ansible/playbook.yaml

      Append the following task after the "Generate self-signed CA certificate" task:

      ~/playground/etcd-ansible/playbook.yaml

      - hosts: localhost
        ...
        tasks:
          ...
          - name: "Generate self-signed CA certificate"
            openssl_certificate:
              path: ./artifacts/ca.crt
              privatekey_path: ./artifacts/ca.key
              csr_path: ./artifacts/ca.csr
              provider: selfsigned
              force: True
          - name: "Generate an `etcd` member certificate signed with our own CA certificate"
            openssl_certificate:
              path: ./artifacts/{{item}}.crt
              csr_path: ./artifacts/{{item}}.csr
              ownca_path: ./artifacts/ca.crt
              ownca_privatekey_path: ./artifacts/ca.key
              provider: ownca
              force: True
            with_items: "{{ groups['etcd'] }}"
      - hosts: etcd
        become: True
        tasks:
          - name: "Create directory for etcd binaries"
      ...
      

      Close and save the playbook.yaml file by pressing CTRL+X followed by Y. Then, run the playbook again to apply the changes:

      • ansible-playbook -i hosts playbook.yaml

      Now, list out the contents of the artifacts/ directory:
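
      • ls artifacts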

      You will find the private key, CSR, and certificate for every etcd member and the CA:

      Output

      ca.crt ca.csr ca.key etcd1.crt etcd1.csr etcd1.key etcd2.crt etcd2.csr etcd2.key etcd3.crt etcd3.csr etcd3.key

      In this step, we signed each member node’s CSR using the CA’s key. In the next step, we are going to copy the relevant files into each managed node, so that etcd has access to the relevant keys and certificates to set up TLS connections.

      Step 13 — Copying Private Keys and Certificates

      Every node needs to have a copy of the CA’s self-signed certificate (ca.crt). Each etcd member node also needs to have its own private key and certificate. In this step, we are going to upload these files and place them in a new /etc/etcd/ssl/ directory.

      To start, open the playbook.yaml file with your editor:

      • nano $HOME/playground/etcd-ansible/playbook.yaml

      To make these changes in our Ansible playbook, first update the path property of the "Create directory for etcd configuration" task so that it also creates the /etc/etcd/ssl/ directory:

      ~/playground/etcd-ansible/playbook.yaml

      - hosts: etcd
        ...
        tasks:
          ...
            with_items:
              - absent
              - directory
          - name: "Create directory for etcd configuration"
            file:
              path: "{{ item }}"
              state: directory
              owner: root
              group: root
              mode: 0755
            with_items:
              - /etc/etcd
              - /etc/etcd/ssl
          - name: "Create configuration file for etcd"
            template:
      ...
      

      Then, following the modified task, add three more tasks to copy the files over:

      ~/playground/etcd-ansible/playbook.yaml

      - hosts: etcd
        ...
        tasks:
          ...
          - name: "Copy over the CA certificate"
            copy:
              src: ./artifacts/ca.crt
              remote_src: False
              dest: /etc/etcd/ssl/ca.crt
              owner: root
              group: root
              mode: 0644
          - name: "Copy over the `etcd` member certificate"
            copy:
              src: ./artifacts/{{inventory_hostname}}.crt
              remote_src: False
              dest: /etc/etcd/ssl/server.crt
              owner: root
              group: root
              mode: 0644
          - name: "Copy over the `etcd` member key"
            copy:
              src: ./artifacts/{{inventory_hostname}}.key
              remote_src: False
              dest: /etc/etcd/ssl/server.key
              owner: root
              group: root
              mode: 0600
          - name: "Create configuration file for etcd"
            template:
      ...
      

      Close and save the playbook.yaml file by pressing CTRL+X followed by Y.

      Run ansible-playbook again to make these changes:

      • ansible-playbook -i hosts playbook.yaml

      In this step, we have successfully uploaded the private keys and certificates to the managed nodes. Having copied the files over, we now need to update our etcd configuration file to make use of them.

      Step 14 — Enabling TLS on etcd

      In the last step of this tutorial, we are going to update some Ansible configurations to enable TLS in an etcd cluster.

      First, open up the templates/etcd.conf.yaml.j2 template file using your editor:

      • nano $HOME/playground/etcd-ansible/templates/etcd.conf.yaml.j2

      Once inside, change all URLs to use https as the protocol instead of http. Additionally, add a section at the end of the template to specify the location of the CA certificate, server certificate, and server key:

      ~/playground/etcd-ansible/templates/etcd.conf.yaml.j2

      data-dir: /var/lib/etcd/{{ inventory_hostname }}.etcd
      name: {{ inventory_hostname }}
      initial-advertise-peer-urls: https://{{ hostvars[inventory_hostname]['ansible_facts']['eth1']['ipv4']['address'] }}:2380
      listen-peer-urls: https://{{ hostvars[inventory_hostname]['ansible_facts']['eth1']['ipv4']['address'] }}:2380,https://127.0.0.1:2380
      advertise-client-urls: https://{{ hostvars[inventory_hostname]['ansible_facts']['eth1']['ipv4']['address'] }}:2379
      listen-client-urls: https://{{ hostvars[inventory_hostname]['ansible_facts']['eth1']['ipv4']['address'] }}:2379,https://127.0.0.1:2379
      initial-cluster-state: new
      initial-cluster: {% for host in groups['etcd'] %}{{ hostvars[host]['ansible_facts']['hostname'] }}=https://{{ hostvars[host]['ansible_facts']['eth1']['ipv4']['address'] }}:2380{% if not loop.last %},{% endif %}{% endfor %}
      
      client-transport-security:
        cert-file: /etc/etcd/ssl/server.crt
        key-file: /etc/etcd/ssl/server.key
        trusted-ca-file: /etc/etcd/ssl/ca.crt
      peer-transport-security:
        cert-file: /etc/etcd/ssl/server.crt
        key-file: /etc/etcd/ssl/server.key
        trusted-ca-file: /etc/etcd/ssl/ca.crt
      

      Close and save the templates/etcd.conf.yaml.j2 file.

      Next, run your Ansible playbook:

      • ansible-playbook -i hosts playbook.yaml

      Then, SSH into one of the managed nodes:
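
      • ssh root@etcd1_public_ip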

      Once inside, run the etcdctl endpoint health command to check whether the endpoints are using HTTPS and whether all members are healthy:

      • etcdctl --cacert /etc/etcd/ssl/ca.crt endpoint health --cluster

      Because our CA certificate is not, by default, a trusted root CA certificate installed in the /etc/ssl/certs/ directory, we need to pass it to etcdctl using the --cacert flag.

      This will give the following output:

      Output

      https://etcd3_private_ip:2379 is healthy: successfully committed proposal: took = 19.237262ms
      https://etcd1_private_ip:2379 is healthy: successfully committed proposal: took = 4.769088ms
      https://etcd2_private_ip:2379 is healthy: successfully committed proposal: took = 5.953599ms

      To confirm that the etcd cluster is actually working, we can, once again, create an entry on one member node, and retrieve it from another member node:

      • etcdctl --cacert /etc/etcd/ssl/ca.crt put foo "bar"

      Use a new terminal to SSH into a different node:
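
      • ssh root@etcd2_public_ip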

      Now retrieve the same entry using the key foo:

      • etcdctl --cacert /etc/etcd/ssl/ca.crt get foo

      This will return the entry, showing the output below:

      Output

      foo
      bar

      You can do the same on the third node to ensure all three members are operational.

      Conclusion

      You have now successfully provisioned a 3-node etcd cluster, secured it with TLS, and confirmed that it is working.

      etcd is a tool originally created by CoreOS. To understand etcd’s usage in relation to CoreOS, you can read How To Use Etcdctl and Etcd, CoreOS’s Distributed Key-Value Store. The article also guides you through setting up a dynamic discovery model, something which was discussed but not demonstrated in this tutorial.

      As mentioned at the beginning of this tutorial, etcd is an important part of the Kubernetes ecosystem. To learn more about Kubernetes and etcd’s role within it, you can read An Introduction to Kubernetes. If you are deploying etcd as part of a Kubernetes cluster, know that there are other tools available, such as kubespray and kubeadm. For more details on the latter, you can read How To Create a Kubernetes Cluster Using Kubeadm on Ubuntu 18.04.

      Finally, this tutorial made use of many tools, but could not dive into each in too much detail. Refer to each tool’s official documentation to explore it in more depth.




      How To Scale and Secure a Django Application with Docker, Nginx, and Let’s Encrypt


      Introduction

      In cloud-based environments, there are multiple ways to scale and secure a Django application. By scaling horizontally, and running several copies of your app, you can build a more fault-tolerant and highly-available system, while also increasing its throughput so that requests can be processed simultaneously. One way to horizontally scale a Django app is to provision additional app servers that run your Django application and its WSGI HTTP server (like Gunicorn or uWSGI). To route and distribute incoming requests across this set of app servers, you can use a load balancer and reverse proxy like Nginx. Nginx can also cache static content and terminate Transport Layer Security (TLS) connections, used to provide HTTPS and secure connections to your app.

      Running your Django application and Nginx proxy inside of Docker containers ensures that these components behave the same way regardless of the environment they are deployed into. In addition, containers provide many features that facilitate packaging and configuring your application.

      In this tutorial, you’ll horizontally scale a containerized Django and Gunicorn Polls application by provisioning two application servers that will each run a copy of a Django and Gunicorn app container.

      You’ll also enable HTTPS by provisioning and configuring a third proxy server that will run an Nginx reverse proxy container and a Certbot client container. Certbot will provision TLS certificates for Nginx from the Let’s Encrypt certificate authority. This will ensure that your site receives a high security rating from SSL Labs. This proxy server will receive all of your app’s external requests and sit in front of the two upstream Django application servers. Finally, you’ll harden this distributed system by restricting external access to only the proxy server.

      Prerequisites

      To follow this tutorial, you will need:

      • Three Ubuntu 18.04 servers:

        • Two servers will be application servers, used to run your Django and Gunicorn app.
        • One server will be a proxy server, used to run Nginx and Certbot.
        • All should have a non-root user with sudo privileges, and an active firewall. For guidance on how to set these up, please see this Initial Server Setup guide.
      • Docker installed on all three servers. For guidance on installing Docker, follow Steps 1 and 2 of How To Install and Use Docker on Ubuntu 18.04.

      • A registered domain name. This tutorial will use your_domain.com throughout. You can get one for free at Freenom, or use the domain registrar of your choice.

      • An A DNS record with your_domain.com pointing to your proxy server’s public IP address. You can follow this introduction to DigitalOcean DNS for details on how to add it to a DigitalOcean account, if that’s what you’re using.

      • An S3 object storage bucket such as a DigitalOcean Space to store your Django project’s static files and a set of Access Keys for this Space. To learn how to create a Space, consult the How to Create Spaces product documentation. To learn how to create Access Keys for Spaces, consult Sharing Access to Spaces with Access Keys. With minor changes, you can use any object storage service that the django-storages plugin supports.

      • A DigitalOcean Managed PostgreSQL cluster. To learn how to create a cluster, consult the DigitalOcean Managed Databases product documentation. With minor changes, you can use any database that Django supports.

        • A PostgreSQL database called polls (or another memorable name to input in your config files below) and user for your Django app. For guidance on creating these, follow Step 1 of How to Build a Django and Gunicorn Application with Docker. You can perform these steps from any of the three servers.

      Step 1 — Configuring the First Django Application Server

      To begin, we’ll clone the Django application repository onto the first app server. Then, we’ll configure and build the application Docker image, and test the application by running the Django container.

      Note: If you’re continuing from How to Build a Django and Gunicorn Application with Docker, you will have already completed Step 1 and can skip ahead to Step 2 to configure the second app server.

      Start by logging in to the first of the two Django application servers and using git to clone the polls-docker branch of the Django Tutorial Polls App GitHub repository. This repo contains code for the Django documentation’s sample Polls application. The polls-docker branch contains a Dockerized version of the Polls app. To learn how the Polls app was modified to work effectively in a containerized environment, please see How to Build a Django and Gunicorn Application with Docker.

      • git clone --single-branch --branch polls-docker https://github.com/do-community/django-polls.git

      Navigate into the django-polls directory:

      cd django-polls
      

      This directory contains the Django application Python code, a Dockerfile that Docker will use to build the container image, as well as an env file that contains a list of environment variables to be passed into the container’s running environment. Inspect the Dockerfile using cat:

      cat Dockerfile
      

      Output

      FROM python:3.7.4-alpine3.10
      ADD django-polls/requirements.txt /app/requirements.txt
      RUN set -ex \
          && apk add --no-cache --virtual .build-deps postgresql-dev build-base \
          && python -m venv /env \
          && /env/bin/pip install --upgrade pip \
          && /env/bin/pip install --no-cache-dir -r /app/requirements.txt \
          && runDeps="$(scanelf --needed --nobanner --recursive /env \
              | awk '{ gsub(/,/, "\nso:", $2); print "so:" $2 }' \
              | sort -u \
              | xargs -r apk info --installed \
              | sort -u)" \
          && apk add --virtual rundeps $runDeps \
          && apk del .build-deps
      ADD django-polls /app
      WORKDIR /app
      ENV VIRTUAL_ENV /env
      ENV PATH /env/bin:$PATH
      EXPOSE 8000
      CMD ["gunicorn", "--bind", ":8000", "--workers", "3", "mysite.wsgi"]

      This Dockerfile uses the official Python 3.7.4 Docker image as a base, and installs Django and Gunicorn’s Python package requirements, as defined in the django-polls/requirements.txt file. It then removes some unnecessary build files, copies the application code into the image, and sets the execution PATH. Finally, it declares that port 8000 will be used to accept incoming container connections, and runs gunicorn with 3 workers, listening on port 8000.

      To learn more about each of the steps in this Dockerfile, please see Step 6 of How to Build a Django and Gunicorn Application with Docker.

      Now, build the image using docker build:
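
      docker build -t polls .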

      We name the image polls using the -t flag and pass in the current directory as a build context, the set of files to reference when constructing the image.

      After Docker builds and tags the image, list available images using docker images:

      docker images
      

      You should see the polls image listed:

      Output

      REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
      polls               latest              80ec4f33aae1        2 weeks ago         197MB
      python              3.7.4-alpine3.10    f309434dea3a        8 months ago        98.7MB

      Before we run the Django container, we need to configure its running environment using the env file present in the current directory. This file will be passed into the docker run command used to run the container, and Docker will inject the configured environment variables into the container’s running environment.

      Open the env file with nano or your favorite editor:

      nano env
      

      We’ll be configuring the file like so, and you’ll need to add some additional values as outlined below.

      django-polls/env

      DJANGO_SECRET_KEY=
      DEBUG=True
      DJANGO_ALLOWED_HOSTS=
      DATABASE_ENGINE=postgresql_psycopg2
      DATABASE_NAME=polls
      DATABASE_USERNAME=
      DATABASE_PASSWORD=
      DATABASE_HOST=
      DATABASE_PORT=
      STATIC_ACCESS_KEY_ID=
      STATIC_SECRET_KEY=
      STATIC_BUCKET_NAME=
      STATIC_ENDPOINT_URL=
      DJANGO_LOGLEVEL=info
      

      Fill in the missing values for the following keys:

      • DJANGO_SECRET_KEY: Set this to a unique, unpredictable value, as detailed in the Django docs. One method of generating this key is provided in Adjusting the App Settings of the Scalable Django App tutorial.
      • DJANGO_ALLOWED_HOSTS: This variable secures the app and prevents HTTP Host header attacks. For testing purposes, set this to *, a wildcard that will match all hosts. In production you should set this to your_domain.com. To learn more about this Django setting, consult Core Settings from the Django docs.
      • DATABASE_USERNAME: Set this to the PostgreSQL database user created in the prerequisite steps.
      • DATABASE_PASSWORD: Set this to the PostgreSQL user password created in the prerequisite steps.
      • DATABASE_HOST: Set this to your database’s hostname.
      • DATABASE_PORT: Set this to your database’s port.
      • STATIC_ACCESS_KEY_ID: Set this to your S3 bucket or Space’s access key.
      • STATIC_SECRET_KEY: Set this to your S3 bucket or Space’s access key Secret.
      • STATIC_BUCKET_NAME: Set this to your S3 bucket or Space name.
      • STATIC_ENDPOINT_URL: Set this to the appropriate S3 bucket or Space endpoint URL, for example https://space-name.nyc3.digitaloceanspaces.com if your Space is located in the nyc3 region.

      Once you’ve finished editing, save and close the file.

      We’ll now use docker run to override the CMD set in the Dockerfile and create the database schema using the manage.py makemigrations and manage.py migrate commands:

      docker run --env-file env polls sh -c "python manage.py makemigrations && python manage.py migrate"
      

      We run the polls:latest container image, pass in the environment variable file we just modified, and override the Dockerfile command with sh -c "python manage.py makemigrations && python manage.py migrate", which will create the database schema defined by the app code. If you’re running this for the first time you should see:

      Output

      No changes detected
      Operations to perform:
        Apply all migrations: admin, auth, contenttypes, polls, sessions
      Running migrations:
        Applying contenttypes.0001_initial... OK
        Applying auth.0001_initial... OK
        Applying admin.0001_initial... OK
        Applying admin.0002_logentry_remove_auto_add... OK
        Applying admin.0003_logentry_add_action_flag_choices... OK
        Applying contenttypes.0002_remove_content_type_name... OK
        Applying auth.0002_alter_permission_name_max_length... OK
        Applying auth.0003_alter_user_email_max_length... OK
        Applying auth.0004_alter_user_username_opts... OK
        Applying auth.0005_alter_user_last_login_null... OK
        Applying auth.0006_require_contenttypes_0002... OK
        Applying auth.0007_alter_validators_add_error_messages... OK
        Applying auth.0008_alter_user_username_max_length... OK
        Applying auth.0009_alter_user_last_name_max_length... OK
        Applying auth.0010_alter_group_name_max_length... OK
        Applying auth.0011_update_proxy_permissions... OK
        Applying polls.0001_initial... OK
        Applying sessions.0001_initial... OK

      This indicates that the database schema has successfully been created.

      If you’re running migrate a subsequent time, Django will perform a no-op unless the database schema has changed.

      Next, we’ll run another instance of the app container and use an interactive shell inside of it to create an administrative user for the Django project.

      docker run -i -t --env-file env polls sh
      

      This will provide you with a shell prompt inside of the running container which you can use to create the Django user:

      python manage.py createsuperuser
      

      Enter a username, email address, and password for your user, and after creating the user, hit CTRL+D to quit the container and kill it.

      Finally, we’ll generate the static files for the app and upload them to the DigitalOcean Space using collectstatic. Note that this may take a bit of time to complete.

      docker run --env-file env polls sh -c "python manage.py collectstatic --noinput"
      

      After these files are generated and uploaded, you’ll receive the following output.

      Output

      121 static files copied.

      We can now run the app:

      docker run --env-file env -p 80:8000 polls
      

      Output

      [2019-10-17 21:23:36 +0000] [1] [INFO] Starting gunicorn 19.9.0
      [2019-10-17 21:23:36 +0000] [1] [INFO] Listening at: http://0.0.0.0:8000 (1)
      [2019-10-17 21:23:36 +0000] [1] [INFO] Using worker: sync
      [2019-10-17 21:23:36 +0000] [7] [INFO] Booting worker with pid: 7
      [2019-10-17 21:23:36 +0000] [8] [INFO] Booting worker with pid: 8
      [2019-10-17 21:23:36 +0000] [9] [INFO] Booting worker with pid: 9

      Here, we run the default command defined in the Dockerfile, gunicorn --bind :8000 --workers 3 mysite.wsgi:application, and expose container port 8000 so that port 80 on the Ubuntu server gets mapped to port 8000 of the polls container.

      You should now be able to navigate to the polls app using your web browser by typing http://APP_SERVER_1_IP in the URL bar. Since there is no route defined for the / path, you’ll likely receive a 404 Page Not Found error, which is expected.

      Warning: When using the UFW firewall with Docker, Docker bypasses any configured UFW firewall rules, as documented in this GitHub issue. This explains why you have access to port 80 of your server, even though you haven’t explicitly created a UFW access rule in any prerequisite step. In Step 5 we will address this security hole by patching the UFW configuration. If you are not using UFW and are using DigitalOcean’s Cloud Firewalls, you can safely ignore this warning.

      Navigate to http://APP_SERVER_1_IP/polls to see the Polls app interface:

      Polls Apps Interface

      To view the administrative interface, visit http://APP_SERVER_1_IP/admin. You should see the Polls app admin authentication window:

      Polls Admin Auth Page

      Enter the administrative username and password you created with the createsuperuser command.

      After authenticating, you can access the Polls app’s administrative interface:

      Polls Admin Main Interface

      Note that static assets for the admin and polls apps are being delivered directly from object storage. To confirm this, consult Testing Spaces Static File Delivery.

      When you are finished exploring, hit CTRL+C in the terminal window running the Docker container to kill the container.

      Now that you’ve confirmed that the app container runs as expected, you can run it in detached mode, which will run it in the background and allow you to log out of your SSH session:

      docker run -d --rm --name polls --env-file env -p 80:8000 polls
      

      The -d flag instructs Docker to run the container in detached mode, the --rm flag cleans up the container’s filesystem after the container exits, and we name the container polls.

      Log out of the first Django app server, and navigate to http://APP_SERVER_1_IP/polls to confirm that the container is running as expected.

      Now that your first Django app server is up and running, you can set up your second Django app server.

      Step 2 — Configuring the Second Django Application Server

      Since many of the commands to set up this server will be the same as those in the previous step, they will be presented here in abbreviated form. Please review Step 1 for more information on any particular command in this step.

      Begin by logging in to the second Django application server.

      Clone the polls-docker branch of the django-polls GitHub repository:

      • git clone --single-branch --branch polls-docker https://github.com/do-community/django-polls.git

      Navigate into the django-polls directory:

      cd django-polls
      

      Build the image using docker build:
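
      docker build -t polls .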

      Open the env file with nano or your favorite editor:

      nano env
      

      django-polls/env

      DJANGO_SECRET_KEY=
      DEBUG=True
      DJANGO_ALLOWED_HOSTS=
      DATABASE_ENGINE=postgresql_psycopg2
      DATABASE_NAME=polls
      DATABASE_USERNAME=
      DATABASE_PASSWORD=
      DATABASE_HOST=
      DATABASE_PORT=
      STATIC_ACCESS_KEY_ID=
      STATIC_SECRET_KEY=
      STATIC_BUCKET_NAME=
      STATIC_ENDPOINT_URL=
      DJANGO_LOGLEVEL=info
      

      Fill in the missing values as in Step 1. When you’ve finished editing, save and close the file.

      Finally, run the app container in detached mode:

      docker run -d --rm --name polls --env-file env -p 80:8000 polls
      

      Navigate to http://APP_SERVER_2_IP/polls to confirm that the container is running as expected. You can safely log out of the second app server without terminating your running container.

      With both Django app containers up and running, you can move on to configuring the Nginx reverse proxy container.

      Step 3 — Configuring the Nginx Docker Container

      Nginx is a versatile web server that offers a number of features including reverse proxying, load balancing, and caching. In this tutorial we’ve offloaded Django’s static assets to object storage, so we won’t use Nginx’s caching capabilities. However, we will use Nginx as a reverse proxy to our two backend Django app servers, and distribute incoming requests between them. In addition, Nginx will perform TLS termination and redirection using a TLS certificate provisioned by Certbot. This means that it will force clients to use HTTPS, redirecting incoming HTTP requests to port 443. It will then decrypt HTTPS requests and proxy them to the upstream Django servers.

      In this tutorial we’ve made the design decision to decouple the Nginx containers from the backend servers. Depending on your use case, you may choose to run the Nginx container on one of the Django app servers, proxying requests locally, as well as to the other Django server. Another possible architecture would be running two Nginx containers, one on each backend server, with a cloud load balancer in front. Each architecture presents different security and performance advantages, and you should load test your system to discover bottlenecks. The flexible architecture described in this tutorial allows you to scale both the backend Django app layer, as well as the Nginx proxying layer. Once the single Nginx container becomes a bottleneck, you can scale out to multiple Nginx proxies, and add a cloud load balancer or fast L4 load balancer like HAProxy.

      With both Django app servers up and running, we can begin setting up the Nginx proxy server. Log in to your proxy server and create a directory called conf:

      mkdir conf
      

      Create a configuration file called nginx.conf using nano or your favorite editor:

      nano conf/nginx.conf
      

      Paste in the following Nginx configuration:

      conf/nginx.conf

      
      upstream django {
          server APP_SERVER_1_IP;
          server APP_SERVER_2_IP;
      }
      
      server {
          listen 80 default_server;
          return 444;
      }
      
      server {
          listen 80;
          listen [::]:80;
          server_name your_domain.com;
          return 301 https://$server_name$request_uri;
      }
      
      server {
          listen 443 ssl http2;
          listen [::]:443 ssl http2;
          server_name your_domain.com;
      
          # SSL
          ssl_certificate /etc/letsencrypt/live/your_domain.com/fullchain.pem;
          ssl_certificate_key /etc/letsencrypt/live/your_domain.com/privkey.pem;
      
          ssl_session_cache shared:le_nginx_SSL:10m;
          ssl_session_timeout 1440m;
          ssl_session_tickets off;
      
          ssl_protocols TLSv1.2 TLSv1.3;
          ssl_prefer_server_ciphers off;
      
          ssl_ciphers "ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384";
      
          client_max_body_size 4G;
          keepalive_timeout 5;
      
          location / {
              proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
              proxy_set_header X-Forwarded-Proto $scheme;
              proxy_set_header Host $http_host;
              proxy_redirect off;
              proxy_pass http://django;
          }
      
          location ^~ /.well-known/acme-challenge/ {
              root /var/www/html;
          }
      
      }
      

      These upstream, server, and location blocks configure Nginx to redirect HTTP requests to HTTPS, and load balance them across the two Django app servers configured in Steps 1 and 2. To learn more about Nginx configuration file structure, please refer to this article on Understanding the Nginx Configuration File Structure and Configuration Contexts. Additionally, this article on Understanding Nginx Server and Location Block Selection Algorithms may be helpful.

      This configuration was assembled from sample configuration files provided by Gunicorn, Certbot, and Nginx and is meant as a minimal Nginx configuration to get this architecture up and running. Tuning this Nginx configuration goes beyond the scope of this article, but you can use a tool like NGINXConfig to generate performant and secure Nginx configuration files for your architecture.

      The upstream block defines the group of servers used to proxy requests to using the proxy_pass directive:

      conf/nginx.conf

      upstream django {
          server APP_SERVER_1_IP;
          server APP_SERVER_2_IP;
      }
      . . .
      

      In this block we name the upstream django and include the IP addresses of both Django app servers. If the app servers are running on DigitalOcean and have VPC Networking enabled, you should use their private IP addresses here. To learn how to enable VPC Networking on DigitalOcean, please see How to Enable VPC Networking on Existing Droplets.

      The first server block captures requests that do not match your domain and terminates the connection. For example, a direct HTTP request to your server’s IP address would be handled by this block:

      conf/nginx.conf

      . . .
      server {
          listen 80 default_server;
          return 444;
      }
      . . .
      

      The next server block redirects HTTP requests to your domain to HTTPS using an HTTP 301 redirect. These requests are then handled by the final server block:

      conf/nginx.conf

      . . .
      server {
          listen 80;
          listen [::]:80;
          server_name your_domain.com;
          return 301 https://$server_name$request_uri;
      }
      . . .
      

      These two directives define the paths to the TLS certificate and secret key. These will be provisioned using Certbot and mounted into the Nginx container in the next step.

      conf/nginx.conf

      . . .
      ssl_certificate /etc/letsencrypt/live/your_domain.com/fullchain.pem;
      ssl_certificate_key /etc/letsencrypt/live/your_domain.com/privkey.pem;
      . . .
      

      These parameters are SSL security defaults recommended by Certbot. To learn more about them, please see Module ngx_http_ssl_module from the Nginx docs. Mozilla’s Security/Server Side TLS is another helpful guide that you can use to tune your SSL configuration.

      conf/nginx.conf

      . . .
          ssl_session_cache shared:le_nginx_SSL:10m;
          ssl_session_timeout 1440m;
          ssl_session_tickets off;
      
          ssl_protocols TLSv1.2 TLSv1.3;
          ssl_prefer_server_ciphers off;
      
          ssl_ciphers "ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384";
      . . .
      

      These two directives from Gunicorn’s sample Nginx configuration set the maximum allowed size of the client request body and assign the timeout for keep-alive connections with the client. Nginx will close connections with the client after keepalive_timeout seconds.

      conf/nginx.conf

      . . .
      client_max_body_size 4G;
      keepalive_timeout 5;
      . . .
      

      The first location block instructs Nginx to proxy requests to the upstream django servers over HTTP. It additionally preserves client HTTP headers that capture the originating IP address, protocol used to connect, and target host:

      conf/nginx.conf

      . . .
      location / {
          proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
          proxy_set_header X-Forwarded-Proto $scheme;
          proxy_set_header Host $http_host;
          proxy_redirect off;
          proxy_pass http://django;
      }
      . . .
      

      To learn more about these directives, please see Deploying Gunicorn and Module ngx_http_proxy_module from the Nginx docs.

      The final location block captures requests to the /.well-known/acme-challenge/ path, used by Certbot for HTTP-01 challenges to verify your domain with Let’s Encrypt and provision or renew TLS certificates. For more information on the HTTP-01 challenge used by Certbot, please see Challenge Types from the Let’s Encrypt docs.

      conf/nginx.conf

      . . .
      location ^~ /.well-known/acme-challenge/ {
              root /var/www/html;
      }
      

      Once you’ve finished editing, save and close the file.

      You can now use this configuration file to run an Nginx Docker container. In this tutorial we’ll use the nginx:1.19.0 image, version 1.19.0 of the official Docker image maintained by Nginx.

      When we run the container for the first time, Nginx will throw an error and fail as we haven’t yet provisioned the certificates defined in the configuration file. However, we’ll still run the command to download the Nginx image locally and test that everything else is functioning correctly:

      docker run --rm --name nginx -p 80:80 -p 443:443 \
          -v ~/conf/nginx.conf:/etc/nginx/conf.d/nginx.conf:ro \
          -v /var/www/html:/var/www/html \
          nginx:1.19.0
      

      Here we name the container nginx and map the host ports 80 and 443 to the respective container ports. The -v flag mounts the config file into the Nginx container at /etc/nginx/conf.d/nginx.conf, which the Nginx image is preconfigured to load. It is mounted in ro or “read only” mode, so the container cannot modify the file. The web root directory /var/www/html is also mounted into the container. Finally, nginx:1.19.0 instructs Docker to pull and run the nginx:1.19.0 image from Docker Hub.

      Docker will pull and run the image, then Nginx will throw an error when it doesn’t find the configured TLS certificate and secret key. In the next step we’ll provision these using a Dockerized Certbot client and the Let’s Encrypt certificate authority.

      Step 4 — Configuring Certbot and Let’s Encrypt Certificate Renewal

      Certbot is a Let’s Encrypt client developed by the Electronic Frontier Foundation. It provisions free TLS certificates from the Let’s Encrypt certificate authority which allow browsers to verify the identity of your web servers. Given that we have Docker installed on our Nginx proxy server, we’ll use the Certbot Docker image to provision and renew the TLS certificates.

      Begin by ensuring that you have a DNS A record mapped to the proxy server’s public IP address. Then, on your proxy server, provision a staging version of the certificates using the certbot Docker image:

      docker run -it --rm -p 80:80 --name certbot \
          -v "/etc/letsencrypt:/etc/letsencrypt" \
          -v "/var/lib/letsencrypt:/var/lib/letsencrypt" \
          certbot/certbot certonly --standalone --staging -d your_domain.com
      

      This command runs the certbot Docker image in interactive mode, and forwards port 80 on the host to container port 80. It creates and mounts two host directories into the container: /etc/letsencrypt/ and /var/lib/letsencrypt/. certbot is run in standalone mode, without Nginx, and will use the Let’s Encrypt staging servers to perform domain validation.

      When prompted, enter your email address and agree to the Terms of Service. If domain validation was successful, you should see the following output:

      Output

      Obtaining a new certificate
      Performing the following challenges:
      http-01 challenge for your_domain.com
      Waiting for verification...
      Cleaning up challenges
      IMPORTANT NOTES:
       - Congratulations! Your certificate and chain have been saved at:
         /etc/letsencrypt/live/your_domain.com/fullchain.pem
         Your key file has been saved at:
         /etc/letsencrypt/live/your_domain.com/privkey.pem
         Your cert will expire on 2020-09-15. To obtain a new or tweaked version of this certificate in the future, simply run certbot again. To non-interactively renew *all* of your certificates, run "certbot renew"
       - Your account credentials have been saved in your Certbot configuration directory at /etc/letsencrypt. You should make a secure backup of this folder now. This configuration directory will also contain certificates and private keys obtained by Certbot so making regular backups of this folder is ideal.

      You can inspect the certificate using cat:

      sudo cat /etc/letsencrypt/live/your_domain.com/fullchain.pem
      

      With the TLS certificate provisioned, we can test the Nginx configuration assembled in the previous step:

      docker run --rm --name nginx -p 80:80 -p 443:443 \
          -v ~/conf/nginx.conf:/etc/nginx/conf.d/nginx.conf:ro \
          -v /etc/letsencrypt:/etc/letsencrypt \
          -v /var/lib/letsencrypt:/var/lib/letsencrypt \
          -v /var/www/html:/var/www/html \
          nginx:1.19.0
      

      This is the same command run in Step 3, with the addition of both recently created Let’s Encrypt directories.

      Once Nginx is up and running, navigate to http://your_domain.com. You may receive a warning in your browser that the certificate authority is invalid. This is expected as we’ve provisioned staging certificates and not production Let’s Encrypt certificates. Check the URL bar of your browser to confirm that your HTTP request was redirected to HTTPS.

      Hit CTRL+C in your terminal to quit Nginx, and run the certbot client again, this time omitting the --staging flag:

      docker run -it --rm -p 80:80 --name certbot \
          -v "/etc/letsencrypt:/etc/letsencrypt" \
          -v "/var/lib/letsencrypt:/var/lib/letsencrypt" \
          certbot/certbot certonly --standalone -d your_domain.com
      

      When prompted to either keep the existing certificate or renew and replace it, hit 2 to renew it and then ENTER to confirm your choice.

      With the production TLS certificate provisioned, run the Nginx server once again:

      docker run --rm --name nginx -p 80:80 -p 443:443 \
          -v ~/conf/nginx.conf:/etc/nginx/conf.d/nginx.conf:ro \
          -v /etc/letsencrypt:/etc/letsencrypt \
          -v /var/lib/letsencrypt:/var/lib/letsencrypt \
          -v /var/www/html:/var/www/html \
          nginx:1.19.0
      

      In your browser, navigate to http://your_domain.com. In the URL bar, confirm that the HTTP request has been redirected to HTTPS. Given that the Polls app has no default route configured, you should see a Django Page not found error. Navigate to https://your_domain.com/polls and you’ll see the standard Polls app interface:

      Polls Apps Interface

      At this point you’ve provisioned a production TLS certificate using the Certbot Docker client, and are reverse proxying and load balancing external requests to the two Django app servers.

      Let’s Encrypt certificates expire every 90 days. To ensure that your certificate remains valid, you should renew it regularly before its scheduled expiry. With Nginx running, you should use the Certbot client in webroot mode instead of standalone mode. This means that Certbot will perform validation by creating a file in the /var/www/html/.well-known/acme-challenge/ directory, and the Let’s Encrypt validation requests to this path will be captured by the location rule defined in the Nginx config in Step 3. Certbot will then rotate certificates, and you can reload Nginx so that it uses this newly provisioned certificate.

      There are multiple ways to automate this procedure and the automatic renewal of TLS certificates goes beyond the scope of this tutorial. For a similar process using the cron scheduling utility, please see Step 6 of How To Secure a Containerized Node.js Application with Nginx, Let’s Encrypt, and Docker Compose.

      In your terminal, hit CTRL+C to kill the Nginx container. Run it again in detached mode by appending the -d flag:

      docker run --rm --name nginx -d -p 80:80 -p 443:443 \
          -v ~/conf/nginx.conf:/etc/nginx/conf.d/nginx.conf:ro \
          -v /etc/letsencrypt:/etc/letsencrypt \
          -v /var/lib/letsencrypt:/var/lib/letsencrypt \
          -v /var/www/html:/var/www/html \
          nginx:1.19.0
      

      With Nginx running in the background, use the following command to perform a dry run of the certificate renewal procedure:

      docker run -it --rm --name certbot \
          -v "/etc/letsencrypt:/etc/letsencrypt" \
          -v "/var/lib/letsencrypt:/var/lib/letsencrypt" \
          -v "/var/www/html:/var/www/html" \
          certbot/certbot renew --webroot -w /var/www/html --dry-run
      

      We use the --webroot plugin, specify the web root path, and use the --dry-run flag to verify that everything is working correctly without actually performing the certificate renewal.

      If the renewal simulation succeeds, you should see the following output:

      Output

      Cert not due for renewal, but simulating renewal for dry run
      Plugins selected: Authenticator webroot, Installer None
      Renewing an existing certificate
      Performing the following challenges:
      http-01 challenge for your_domain.com
      Using the webroot path /var/www/html for all unmatched domains.
      Waiting for verification...
      Cleaning up challenges
      - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
      new certificate deployed without reload, fullchain is /etc/letsencrypt/live/your_domain.com/fullchain.pem
      - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
      - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
      ** DRY RUN: simulating 'certbot renew' close to cert expiry
      ** (The test certificates below have not been saved.)
      Congratulations, all renewals succeeded. The following certs have been renewed:
        /etc/letsencrypt/live/your_domain.com/fullchain.pem (success)
      ** DRY RUN: simulating 'certbot renew' close to cert expiry
      ** (The test certificates above have not been saved.)
      - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
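
      If the dry run succeeds, you can perform the actual renewal by running the same command without the --dry-run flag; note that Certbot will only replace certificates that are close to expiry:

      docker run -it --rm --name certbot \
          -v "/etc/letsencrypt:/etc/letsencrypt" \
          -v "/var/lib/letsencrypt:/var/lib/letsencrypt" \
          -v "/var/www/html:/var/www/html" \
          certbot/certbot renew --webroot -w /var/www/html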

      In a production setting, after renewing certificates, you should reload Nginx so that the changes take effect. To reload Nginx, run the following command:

      docker kill -s HUP nginx
      

      This command will send a HUP Unix signal to the Nginx process running inside of the nginx Docker container. Upon receiving this signal, Nginx will reload its configuration and pick up the renewed certificates.

      With HTTPS enabled and all the components of this architecture up and running, the final step is to lock down the setup by preventing external access to the two backend app servers; all HTTP requests should flow through the Nginx proxy.

      Step 5 — Preventing External Access to Django App Servers

      In the architecture described in this tutorial, SSL termination occurs at the Nginx proxy. This means that Nginx decrypts the SSL connection, and packets are proxied to the Django app servers unencrypted. For many use cases, this level of security is sufficient. For applications involving financial or health data, you may want to implement end-to-end encryption. You can do this by forwarding encrypted packets through the load balancer and decrypting on the app servers, or by re-encrypting at the proxy and once again decrypting on the Django app servers. These techniques go beyond the scope of this article, but to learn more please consult End-to-end encryption.

      The Nginx proxy acts as a gateway between external traffic and the internal network. Theoretically no external clients should have direct access to the internal app servers, and all requests should flow through the Nginx server. The note in Step 1 briefly describes an open issue with Docker where Docker bypasses ufw firewall settings by default and opens ports externally, which may be insecure. To address this security concern, it’s recommended to use cloud firewalls when working with Docker-enabled servers. To get more information on creating Cloud Firewalls with DigitalOcean, consult How to Create Firewalls. You can also manipulate iptables directly instead of using ufw. To learn more about using iptables with Docker, please see Docker and iptables.

      In this step we’ll modify UFW’s configuration to block external access to host ports opened by Docker. When running Django on the app servers, we passed the -p 80:8000 flag to docker, which forwards port 80 on the host to container port 8000. This also opened up port 80 to external clients, which you can verify by visiting http://your_app_server_1_IP. To prevent direct access, we’ll modify UFW’s configuration using the method described in the ufw-docker GitHub repository.

      Begin by logging in to the first Django app server. Then, open the /etc/ufw/after.rules file with superuser privileges, using nano or your favorite editor:

      sudo nano /etc/ufw/after.rules
      

      Enter your password when prompted, and hit ENTER to confirm.

      You should see the following ufw rules:

      /etc/ufw/after.rules

      #
      # rules.input-after
      #
      # Rules that should be run after the ufw command line added rules. Custom
      # rules should be added to one of these chains:
      #   ufw-after-input
      #   ufw-after-output
      #   ufw-after-forward
      #
      
      # Don't delete these required lines, otherwise there will be errors
      *filter
      :ufw-after-input - [0:0]
      :ufw-after-output - [0:0]
      :ufw-after-forward - [0:0]
      # End required lines
      
      # don't log noisy services by default
      -A ufw-after-input -p udp --dport 137 -j ufw-skip-to-policy-input
      -A ufw-after-input -p udp --dport 138 -j ufw-skip-to-policy-input
      -A ufw-after-input -p tcp --dport 139 -j ufw-skip-to-policy-input
      -A ufw-after-input -p tcp --dport 445 -j ufw-skip-to-policy-input
      -A ufw-after-input -p udp --dport 67 -j ufw-skip-to-policy-input
      -A ufw-after-input -p udp --dport 68 -j ufw-skip-to-policy-input
      
      # don't log noisy broadcast
      -A ufw-after-input -m addrtype --dst-type BROADCAST -j ufw-skip-to-policy-input
      
      # don't delete the 'COMMIT' line or these rules won't be processed
      COMMIT
      

      Scroll to the bottom, and paste in the following block of UFW configuration rules:

      /etc/ufw/after.rules

      . . .
      
      # BEGIN UFW AND DOCKER
      *filter
      :ufw-user-forward - [0:0]
      :DOCKER-USER - [0:0]
      -A DOCKER-USER -j RETURN -s 10.0.0.0/8
      -A DOCKER-USER -j RETURN -s 172.16.0.0/12
      -A DOCKER-USER -j RETURN -s 192.168.0.0/16
      
      -A DOCKER-USER -p udp -m udp --sport 53 --dport 1024:65535 -j RETURN
      
      -A DOCKER-USER -j ufw-user-forward
      
      -A DOCKER-USER -j DROP -p tcp -m tcp --tcp-flags FIN,SYN,RST,ACK SYN -d 192.168.0.0/16
      -A DOCKER-USER -j DROP -p tcp -m tcp --tcp-flags FIN,SYN,RST,ACK SYN -d 10.0.0.0/8
      -A DOCKER-USER -j DROP -p tcp -m tcp --tcp-flags FIN,SYN,RST,ACK SYN -d 172.16.0.0/12
      -A DOCKER-USER -j DROP -p udp -m udp --dport 0:32767 -d 192.168.0.0/16
      -A DOCKER-USER -j DROP -p udp -m udp --dport 0:32767 -d 10.0.0.0/8
      -A DOCKER-USER -j DROP -p udp -m udp --dport 0:32767 -d 172.16.0.0/12
      
      -A DOCKER-USER -j RETURN
      COMMIT
      # END UFW AND DOCKER
      

      These rules restrict public access to ports opened by Docker, and allow access from the 10.0.0.0/8, 172.16.0.0/12, and 192.168.0.0/16 private IP ranges. If you are using a DigitalOcean VPC, Droplets in your VPC network will have access to the open port over the private network interface, but external clients will not. For more information about VPC, please see the VPC official documentation. To learn more about the rules implemented in this snippet, please see How it works? from the ufw-docker README.

      When you’ve finished editing, save and close the file.

      Restart ufw so that it picks up the new configuration:

      sudo systemctl restart ufw
      
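
      Optionally, you can confirm that the new rules were loaded by listing the DOCKER-USER chain directly with iptables (the exact rule ordering may differ slightly on your system):

      # The output should include the RETURN rules for the private IP ranges
      # followed by the DROP rules added in after.rules:
      sudo iptables -L DOCKER-USER -n
      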

      Navigate to http://APP_SERVER_1_IP in your web browser to confirm that you can no longer access the app server over port 80.
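
      You can also confirm this from the command line. With the rules in place, a request to the app server’s public IP should fail to connect and time out (APP_SERVER_1_IP is the same placeholder used above):

      # Forwarded traffic from public sources is now dropped, so this should
      # time out; --max-time bounds how long curl waits for a response:
      curl --max-time 5 -I http://APP_SERVER_1_IP
      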

      Repeat this process on the second Django app server.

      Log out of the first app server or open another terminal window, and log in to the second Django app server. Then, open the /etc/ufw/after.rules file with superuser privileges, using nano or your favorite editor:

      sudo nano /etc/ufw/after.rules
      

      Enter your password when prompted, and hit ENTER to confirm.

      Scroll to the bottom, and paste in the following block of UFW configuration rules:

      /etc/ufw/after.rules

      . . .
      
      # BEGIN UFW AND DOCKER
      *filter
      :ufw-user-forward - [0:0]
      :DOCKER-USER - [0:0]
      -A DOCKER-USER -j RETURN -s 10.0.0.0/8
      -A DOCKER-USER -j RETURN -s 172.16.0.0/12
      -A DOCKER-USER -j RETURN -s 192.168.0.0/16
      
      -A DOCKER-USER -p udp -m udp --sport 53 --dport 1024:65535 -j RETURN
      
      -A DOCKER-USER -j ufw-user-forward
      
      -A DOCKER-USER -j DROP -p tcp -m tcp --tcp-flags FIN,SYN,RST,ACK SYN -d 192.168.0.0/16
      -A DOCKER-USER -j DROP -p tcp -m tcp --tcp-flags FIN,SYN,RST,ACK SYN -d 10.0.0.0/8
      -A DOCKER-USER -j DROP -p tcp -m tcp --tcp-flags FIN,SYN,RST,ACK SYN -d 172.16.0.0/12
      -A DOCKER-USER -j DROP -p udp -m udp --dport 0:32767 -d 192.168.0.0/16
      -A DOCKER-USER -j DROP -p udp -m udp --dport 0:32767 -d 10.0.0.0/8
      -A DOCKER-USER -j DROP -p udp -m udp --dport 0:32767 -d 172.16.0.0/12
      
      -A DOCKER-USER -j RETURN
      COMMIT
      # END UFW AND DOCKER
      

      When you’ve finished editing, save and close the file.

      Restart ufw so that it picks up the new configuration:

      sudo systemctl restart ufw
      

      Navigate to http://APP_SERVER_2_IP in your web browser to confirm that you can no longer access the app server over port 80.

      Finally, navigate to https://your_domain.com/polls to confirm that the Nginx proxy still has access to the upstream Django servers. You should see the default Polls app interface.

      Conclusion

      In this tutorial, you’ve set up a scalable Django Polls application using Docker containers. As your traffic grows and load on the system increases, you can scale each layer separately: the Nginx proxying layer, the Django backend app layer, and the PostgreSQL database layer.

      When building a distributed system, you will often face multiple design decisions, and several architectures may satisfy your use case. The architecture described in this tutorial is meant as a flexible blueprint for designing scalable apps with Django and Docker.

      You may wish to control the behavior of your containers when they encounter errors, or run containers automatically when your system boots. To do this, you can use a process manager like Systemd or implement restart policies. For more information about these, please see Start containers automatically from the Docker documentation.
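
      For example, the following is a minimal sketch of a restart policy (the image name and options here are illustrative, not the exact flags used earlier in this tutorial):

      # Run the Nginx proxy with a restart policy; 'unless-stopped' restarts the
      # container after failures and host reboots unless it was explicitly stopped:
      docker run -d --restart unless-stopped --name nginx -p 80:80 -p 443:443 nginx:1.19
      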

      When working at scale with multiple hosts running the same Docker image, it can be more efficient to automate steps using a configuration management tool like Ansible or Chef. To learn more about configuration management, please consult An Introduction to Configuration Management and Automating Server Setup with Ansible: A DigitalOcean Workshop Kit.

      Instead of building the same image on every host, you can also streamline deployment using an image registry like Docker Hub, which centrally builds, stores, and distributes Docker images to multiple servers. Along with an image registry, a continuous integration and deployment pipeline can help you build, test, and deploy images to your app servers. For more information on CI/CD, please consult An Introduction to CI/CD Best Practices.


