      How To Migrate a MySQL Database to PostgreSQL Using pgLoader


      Introduction

      PostgreSQL, also known as “Postgres,” is an open-source relational database management system (RDBMS). It has seen a drastic growth in popularity in recent years, with many developers and companies migrating their data to Postgres from other database solutions.

The prospect of migrating a database can be intimidating, especially when migrating from one database management system to another. pgLoader is an open-source database migration tool that aims to simplify the process of migrating to PostgreSQL. It supports migrations from several file types and RDBMSs — including MySQL and SQLite — to PostgreSQL.

      This tutorial provides instructions on how to install pgLoader and use it to migrate a remote MySQL database to PostgreSQL over an SSL connection. Near the end of the tutorial, we will also briefly touch on a few different migration scenarios where pgLoader may be useful.

      Prerequisites

      To complete this tutorial, you’ll need the following:

      • Access to two servers, each running Ubuntu 18.04. Both servers should have a firewall and a non-root user with sudo privileges configured. To set these up, you can follow our Initial Server Setup guide for Ubuntu 18.04.
      • MySQL installed on one of the servers. To set this up, follow Steps 1, 2, and 3 of our guide on How To Install MySQL on Ubuntu 18.04. Please note that in order to complete all the prerequisite tutorials linked here, you will need to configure your root MySQL user to authenticate with a password, as described in Step 3 of the MySQL installation guide.
      • PostgreSQL installed on the other server. To set this up, complete Step 1 of our guide How To Install and Use PostgreSQL on Ubuntu 18.04.
      • Your MySQL server should also be configured to accept encrypted connections. To set this up, complete every step of our tutorial on How To Configure SSL/TLS for MySQL on Ubuntu 18.04, including the optional Step 6. As you follow this guide, be sure to use your PostgreSQL server as the MySQL client machine, as you will need to be able to connect to your MySQL server from your Postgres machine in order to migrate the data with pgLoader.

      Please note that throughout this guide, the server on which you installed MySQL will be referred to as the “MySQL server” and any commands that should be run on this machine will be shown with a blue background, like this:

      Similarly, this guide will refer to the other server as the "PostgreSQL" or "Postgres" server and any commands that must be run on that machine will be shown with a red background:

      Please keep these in mind as you follow this tutorial so as to avoid any confusion.

      Step 1 — (Optional) Creating a Sample Database and Table in MySQL

      This step describes the process of creating a test database and populating it with dummy data. We encourage you to practice using pgLoader with this test case, but if you already have a database you want to migrate, you can move on to the next step.

Start by opening up the MySQL prompt on your MySQL server:

• mysql -u root -p

      After entering your root MySQL user's password, you will see the MySQL prompt.

      From there, create a new database by running the following command. You can name your database whatever you'd like, but in this guide we will name it source_db:

      • CREATE DATABASE source_db;

Then switch to this database with the USE command:

• USE source_db;

      Output

      Database changed

      Within this database, use the following command to create a sample table. Here, we will name this table sample_table but feel free to give it another name:

      • CREATE TABLE sample_table (
      • employee_id INT PRIMARY KEY,
      • first_name VARCHAR(50),
      • last_name VARCHAR(50),
      • start_date DATE,
      • salary VARCHAR(50)
      • );

      Then populate this table with some sample employee data using the following command:

      • INSERT INTO sample_table (employee_id, first_name, last_name, start_date, salary)
      • VALUES (1, 'Elizabeth', 'Cotten', '2007-11-11', '$105433.18'),
      • (2, 'Yanka', 'Dyagileva', '2017-10-30', '$107540.67'),
      • (3, 'Lee', 'Dorsey', '2013-06-04', '$118024.04'),
      • (4, 'Kasey', 'Chambers', '2010-08-18', '$116456.98'),
      • (5, 'Bram', 'Tchaikovsky', '2018-09-16', '$61989.50');

Following this, you can close the MySQL prompt:

• exit

      Now that you have a sample database loaded with dummy data, you can move on to the next step in which you will install pgLoader on your PostgreSQL server.

      Step 2 — Installing pgLoader

      pgLoader is a program that can load data into a PostgreSQL database from a variety of different sources. It uses PostgreSQL's COPY command to copy data from a source database or file — such as a comma-separated values (CSV) file — into a target PostgreSQL database.
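For context, COPY is the same bulk-loading command you can invoke yourself from the psql prompt. As a minimal illustration, assuming a hypothetical employees table and a CSV file at /tmp/employees.csv, a manual bulk load might look like this:

• COPY employees (employee_id, first_name, last_name)
• FROM '/tmp/employees.csv'
• WITH (FORMAT csv, HEADER true);

pgLoader automates this process, layering error handling, data transformation, and schema creation on top of it.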

      pgLoader is available from the default Ubuntu APT repositories and you can install it using the apt command. However, in this guide we will take advantage of pgLoader's useSSL option, a feature that allows for migrations from MySQL over an SSL connection. This feature is only available in the latest version of pgLoader which, as of this writing, can only be installed using the source code from its GitHub repository.

Before installing pgLoader, you will need to install its dependencies. If you haven't done so recently, update your Postgres server's package index:

• sudo apt update

      Then install the following packages:

      • sbcl: A Common Lisp compiler
      • unzip: A de-archiver for .zip files
      • libsqlite3-dev: A collection of development files for SQLite 3
      • gawk: Short for "GNU awk", a pattern scanning and processing language
      • curl: A command line tool for transferring data from a URL
      • make: A utility for managing package compilation
      • freetds-dev: A client library for MS SQL and Sybase databases
      • libzip-dev: A library for reading, creating, and modifying zip archives

      Use the following command to install these dependencies:

      • sudo apt install sbcl unzip libsqlite3-dev gawk curl make freetds-dev libzip-dev

      When prompted, confirm that you want to install these packages by pressing ENTER.

      Next, navigate to the pgLoader GitHub project's Releases page and find the latest release. For this guide, we will use the latest release at the time of this writing: version 3.6.1. Scroll down to its Assets menu and copy the link for the tar.gz file labeled Source code. Then paste the link into the following wget command. This will download the tarball to your server:

      • wget https://github.com/dimitri/pgloader/archive/v3.6.1.tar.gz

Extract the tarball:

• tar xvf v3.6.1.tar.gz

This will create a number of new directories and files on your server. Navigate into the new pgLoader parent directory:

• cd pgloader-3.6.1/

Then use the make utility to compile the pgloader binary:

• make pgloader

      This command will take some time to build the pgloader binary.

      Move the binary file into the /usr/local/bin directory, the location where Ubuntu looks for executable files:

      • sudo mv ./build/bin/pgloader /usr/local/bin/

You can test that pgLoader was installed correctly by checking its version, like so:

• pgloader --version

      Output

      pgloader version "3.6.1" compiled with SBCL 1.4.5.debian

      pgLoader is now installed, but before you can begin your migration you'll need to make some configuration changes to both your PostgreSQL and MySQL instances. We'll focus on the PostgreSQL server first.

      Step 3 — Creating a PostgreSQL Role and Database

      The pgloader command works by copying source data, either from a file or directly from a database, and inserting it into a PostgreSQL database. For this reason, you must either run pgLoader as a Linux user who has access to your Postgres database or you must specify a PostgreSQL role with the appropriate permissions in your load command.

      PostgreSQL manages database access through the use of roles. Depending on how the role is configured, it can be thought of as either a database user or a group of database users. In most RDBMSs, you create a user with the CREATE USER SQL command. Postgres, however, comes installed with a handy script called createuser. This script serves as a wrapper for the CREATE USER SQL command that you can run directly from the command line.
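For illustration only, the createuser invocation you'll run below is roughly equivalent to executing SQL like the following from the psql prompt (the role name and password here are placeholders):

• CREATE ROLE pgloader_pg WITH LOGIN SUPERUSER PASSWORD 'password';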

      Note: In PostgreSQL, you authenticate as a database user using the Identification Protocol, or ident, authentication method by default, rather than with a password. This involves PostgreSQL taking the client's Ubuntu username and using it as the allowed Postgres database username. This allows for greater security in many cases, but it can also cause issues in instances where you'd like an outside program to connect to one of your databases.

      pgLoader can load data into a Postgres database through a role that authenticates with the ident method as long as that role shares the same name as the Linux user profile issuing the pgloader command. However, to keep this process as clear as possible, this tutorial describes setting up a different PostgreSQL role that authenticates with a password rather than with the ident method.

      Run the following command on your Postgres server to create a new role. Note the -P flag, which tells createuser to prompt you to enter a password for the new role:

      • sudo -u postgres createuser --interactive -P

      You may first be prompted for your sudo password. The script will then prompt you to enter a name for the new role. In this guide, we'll call this role pgloader_pg:

      Output

      Enter name of role to add: pgloader_pg

      Following that, createuser will prompt you to enter and confirm a password for this role. Be sure to take note of this password, as you'll need it to perform the migration in Step 5:

      Output

Enter password for new role:
Enter it again:

      Lastly, the script will ask you if the new role should be classified as a superuser. In PostgreSQL, connecting to the database with a superuser role allows you to circumvent all of the database's permissions checks, except for the right to log in. Because of this, the superuser privilege should not be used lightly, and the PostgreSQL documentation recommends that you do most of your database work as a non-superuser role. However, because pgLoader needs broad privileges to access and load data into tables, you can safely grant this new role superuser privileges. Do so by typing y and then pressing ENTER:

      Output

      . . . Shall the new role be a superuser? (y/n) y

      PostgreSQL comes with another useful script that allows you to create a database from the command line. Since pgLoader also needs a target database into which it can load the source data, run the following command to create one. We'll name this database new_db but feel free to modify that if you like:

      • sudo -u postgres createdb new_db

      If there aren't any errors, this command will complete without any output.

      Now that you have a dedicated PostgreSQL user and an empty database into which you can load your MySQL data, there are just a few more changes you'll need to make before performing a migration. You'll need to create a dedicated MySQL user with access to your source database and add your client-side certificates to Ubuntu's trusted certificate store.

      Step 4 — Creating a Dedicated User in MySQL and Managing Certificates

      Protecting data from snoopers is one of the most important parts of any database administrator's job. Migrating data from one machine to another opens up an opportunity for malicious actors to sniff the packets traveling over the network connection if it isn't encrypted. In this step, you will create a dedicated MySQL user which pgLoader will use to perform the migration over an SSL connection.

Begin by opening up your MySQL prompt:

• mysql -u root -p

      From the MySQL prompt, use the following CREATE USER command to create a new MySQL user. We will name this user pgloader_my. Because this user will only access MySQL from your PostgreSQL server, be sure to replace your_postgres_server_ip with the public IP address of your PostgreSQL server. Additionally, replace password with a secure password or passphrase:

      • CREATE USER 'pgloader_my'@'your_postgres_server_ip' IDENTIFIED BY 'password' REQUIRE SSL;

      Note the REQUIRE SSL clause at the end of this command. This will restrict the pgloader_my user to only access the database through a secure SSL connection.

Next, grant the pgloader_my user access to the database you want to migrate and all of its tables. Here, we'll specify the database we created in the optional Step 1, but if you have your own database you'd like to migrate, use its name in place of source_db:

• GRANT ALL ON source_db.* TO 'pgloader_my'@'your_postgres_server_ip';

Then run the FLUSH PRIVILEGES command to reload the grant tables, enabling the privilege changes:

• FLUSH PRIVILEGES;

After this, you can close the MySQL prompt:

• exit

      Now go back to your Postgres server terminal and attempt to log in to the MySQL server as the new pgloader_my user. If you followed the prerequisite guide on configuring SSL/TLS for MySQL then you will already have mysql-client installed on your PostgreSQL server and you should be able to connect with the following command:

      • mysql -u pgloader_my -p -h your_mysql_server_ip

      If the command is successful, you will see the MySQL prompt:

After confirming that your pgloader_my user can successfully connect, go ahead and close the prompt:

• exit

      At this point, you have a dedicated MySQL user that can access the source database from your Postgres machine. However, if you were to try to migrate your MySQL database using SSL the attempt would fail.

      The reason for this is that pgLoader isn't able to read MySQL's configuration files, and thus doesn't know where to look for the CA certificate or client certificate that you copied to your PostgreSQL server in the prerequisite SSL/TLS configuration guide. Rather than ignoring SSL requirements, though, pgLoader requires the use of trusted certificates in cases where SSL is needed to connect to MySQL. Accordingly, you can resolve this issue by adding the ca.pem and client-cert.pem files to Ubuntu's trusted certificate store.

      To do this, copy over the ca.pem and client-cert.pem files to the /usr/local/share/ca-certificates/ directory. Note that you must also rename these files so they have the .crt file extension. If you don't rename them, your system will not be able to recognize that you've added these new certificates:

      • sudo cp ~/client-ssl/ca.pem /usr/local/share/ca-certificates/ca.pem.crt
      • sudo cp ~/client-ssl/client-cert.pem /usr/local/share/ca-certificates/client-cert.pem.crt

      Following this, run the update-ca-certificates command. This program looks for certificates within /usr/local/share/ca-certificates, adds any new ones to the /etc/ssl/certs/ directory, and generates a list of trusted SSL certificates — ca-certificates.crt — based on the contents of the /etc/ssl/certs/ directory:

      • sudo update-ca-certificates

      Output

Updating certificates in /etc/ssl/certs...
2 added, 0 removed; done.
Running hooks in /etc/ca-certificates/update.d...
done.

      With that, you're all set to migrate your MySQL database to PostgreSQL.

      Step 5 — Migrating the Data

      Now that you've configured remote access from your PostgreSQL server to your MySQL server, you're ready to begin the migration.

      Note: It's important to back up your database before taking any action that could impact the integrity of your data. However, this isn't necessary when performing a migration with pgLoader, since it doesn't delete or transform data; it only copies it.

      That said, if you're feeling cautious and would like to back up your data before migrating it, you can do so with the mysqldump utility. See the official MySQL documentation for details.
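For example, a minimal backup of the sample database from Step 1 could be written to a file in your home directory like this (substitute your own database name as needed):

• mysqldump -u root -p source_db > ~/source_db_backup.sql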

      pgLoader allows users to migrate an entire database with a single command. For a migration from a MySQL database to a PostgreSQL database on a separate server, the command would have the following syntax:

• pgloader mysql://mysql_username:password@mysql_server_ip/source_database_name?option_1=value&option_n=value postgresql://postgresql_role_name:password@postgresql_server_ip/target_database_name?option_1=value&option_n=value

      This includes the pgloader command and two connection strings, the first for the source database and the second for the target database. Both of these connection strings begin by declaring what type of DBMS the connection string points to, followed by the username and password that have access to the database (separated by a colon), the host address of the server where the database is installed, the name of the database pgLoader should target, and various options that affect pgLoader's behavior.

      Using the parameters defined earlier in this tutorial, you can migrate your MySQL database using a command with the following structure. Be sure to replace any highlighted values to align with your own setup:

      • pgloader mysql://pgloader_my:mysql_password@mysql_server_ip/source_db?useSSL=true postgresql://pgloader_pg:postgresql_password@localhost/new_db

      Note that this command includes the useSSL option in the MySQL connection string. By setting this option to true, pgLoader will connect to MySQL over SSL. This is necessary, as you've configured your MySQL server to only accept secure connections.

      If this command is successful, you will see an output table describing how the migration went:

      Output

             table name     errors       rows      bytes      total time
-----------------------  ---------  ---------  ---------  --------------
        fetch meta data          0          2                     0.111s
         Create Schemas          0          0                     0.001s
       Create SQL Types          0          0                     0.005s
          Create tables          0          2                     0.017s
         Set Table OIDs          0          1                     0.010s
-----------------------  ---------  ---------  ---------  --------------
 source_db.sample_table          0          5     0.2 kB          0.048s
-----------------------  ---------  ---------  ---------  --------------
COPY Threads Completion          0          4                     0.052s
 Index Build Completion          0          1                     0.011s
         Create Indexes          0          1                     0.006s
        Reset Sequences          0          0                     0.014s
           Primary Keys          0          1                     0.001s
    Create Foreign Keys          0          0                     0.000s
        Create Triggers          0          0                     0.000s
       Install Comments          0          0                     0.000s
-----------------------  ---------  ---------  ---------  --------------
      Total import time          ✓          5     0.2 kB          0.084s

To check that the data was migrated correctly, open up the PostgreSQL prompt:

• sudo -u postgres psql

From there, connect to the database into which you loaded the data:

• \c new_db

      Then run the following query to test whether the migrated data is stored in your PostgreSQL database:

      • SELECT * FROM source_db.sample_table;

      Note: Notice the FROM clause in this query specifying the sample_table held within the source_db schema:

      • . . . FROM source_db.sample_table;

      This is called a qualified name. You could go further and specify the fully qualified name by including the database's name as well as those of the schema and table:

      • . . . FROM new_db.source_db.sample_table;

When you run queries in a PostgreSQL database, you don't need to be this specific if the table is held within the default public schema. The reason you must do so here is that when pgLoader loads data into Postgres, it creates and targets a new schema named after the original database — in this case, source_db. This is pgLoader's default behavior for MySQL to PostgreSQL migrations. However, you can use a load file to instruct pgLoader to change the table's schema to public once it's done loading data. See the next step for an example of how to do this.
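As an aside, if you would rather move the migrated tables into the default schema manually instead of using a load file, you can rename the schema from the psql prompt. This is a minimal sketch that assumes the default public schema in new_db is still empty, since it must be dropped before the rename can succeed:

• DROP SCHEMA public;
• ALTER SCHEMA source_db RENAME TO public;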

      If the data was indeed loaded correctly, you will see the following table in the query's output:

      Output

 employee_id | first_name |  last_name  | start_date |   salary
-------------+------------+-------------+------------+------------
           1 | Elizabeth  | Cotten      | 2007-11-11 | $105433.18
           2 | Yanka      | Dyagileva   | 2017-10-30 | $107540.67
           3 | Lee        | Dorsey      | 2013-06-04 | $118024.04
           4 | Kasey      | Chambers    | 2010-08-18 | $116456.98
           5 | Bram       | Tchaikovsky | 2018-09-16 | $61989.50
(5 rows)

To close the Postgres prompt, run the following command:

• \q

      Now that we've gone over how to migrate a MySQL database over a network and load it into a PostgreSQL database, we will go over a few other common migration scenarios in which pgLoader can be useful.

      Step 6 — Exploring Other Migration Options

      pgLoader is a highly flexible tool that can be useful in a wide variety of situations. Here, we'll take a quick look at a few other ways you can use pgLoader to migrate a MySQL database to PostgreSQL.

      Migrating with a pgLoader Load File

      In the context of pgLoader, a load file, or command file, is a file that tells pgLoader how to perform a migration. This file can include commands and options that affect pgLoader's behavior, giving you much finer control over how your data is loaded into PostgreSQL and allowing you to perform complex migrations.

      pgLoader's documentation provides comprehensive instructions on how to use and extend these files to support a number of migration types, so here we will work through a comparatively rudimentary example. We will perform the same migration we ran in Step 5, but will also include an ALTER SCHEMA command to change the new_db database's schema from source_db to public.

To begin, create a new load file on the Postgres server using your preferred text editor:

• nano pgload_test.load

      Then add the following content, making sure to update the highlighted values to align with your own configuration:

      pgload_test.load

      LOAD DATABASE
           FROM      mysql://pgloader_my:mysql_password@mysql_server_ip/source_db?useSSL=true
           INTO pgsql://pgloader_pg:postgresql_password@localhost/new_db
      
       WITH include drop, create tables
      
      ALTER SCHEMA 'source_db' RENAME TO 'public'
      ;
      

Here is what each of these clauses does:

      • LOAD DATABASE: This line instructs pgLoader to load data from a separate database, rather than a file or data archive.
      • FROM: This clause specifies the source database. In this case, it points to the connection string for the MySQL database we created in Step 1.
• INTO: Likewise, this line specifies the PostgreSQL database into which pgLoader should load the data.
      • WITH: This clause allows you to define specific behaviors for pgLoader. You can find the full list of WITH options that are compatible with MySQL migrations here. In this example we only include two options:
        • include drop: When this option is used, pgLoader will drop any tables in the target PostgreSQL database that also appear in the source MySQL database. If you use this option when migrating data to an existing PostgreSQL database, you should back up the entire database to avoid losing any data.
        • create tables: This option tells pgLoader to create new tables in the target PostgreSQL database based on the metadata held in the MySQL database. If the opposite option, create no tables, is used, then the target tables must already exist in the target Postgres database prior to the migration.
      • ALTER SCHEMA: Following the WITH clause, you can add specific SQL commands like this to instruct pgLoader to perform additional actions. Here, we instruct pgLoader to change the new Postgres database's schema from source_db to public, but only after it has created the schema. Note that you can also nest such commands within other clauses — such as BEFORE LOAD DO — to instruct pgLoader to execute those commands at specific points in the migration process.
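
As a hedged illustration of that last point, the following fragment shows how a BEFORE LOAD DO block might look in a load file. The SQL it wraps runs against the target database before any data is copied; the staging schema here is purely illustrative:

LOAD DATABASE
     FROM      mysql://pgloader_my:mysql_password@mysql_server_ip/source_db?useSSL=true
     INTO pgsql://pgloader_pg:postgresql_password@localhost/new_db

 WITH include drop, create tables

BEFORE LOAD DO
     $$ CREATE SCHEMA IF NOT EXISTS staging; $$
;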

      This is a demonstrative example of what you can include in a load file to modify pgLoader's behavior. The complete list of clauses that one can add to a load file and what they do can be found in the official pgLoader documentation.

      Save and close the load file after you've finished adding this content. To use it, include the name of the file as an argument to the pgloader command:

      • pgloader pgload_test.load

To test that the migration was successful, open up the Postgres prompt:

• sudo -u postgres psql

Then connect to the database:

• \c new_db

      And run the following query:

      • SELECT * FROM sample_table;

      Output

 employee_id | first_name |  last_name  | start_date |   salary
-------------+------------+-------------+------------+------------
           1 | Elizabeth  | Cotten      | 2007-11-11 | $105433.18
           2 | Yanka      | Dyagileva   | 2017-10-30 | $107540.67
           3 | Lee        | Dorsey      | 2013-06-04 | $118024.04
           4 | Kasey      | Chambers    | 2010-08-18 | $116456.98
           5 | Bram       | Tchaikovsky | 2018-09-16 | $61989.50
(5 rows)

      This output confirms that pgLoader migrated the data successfully, and also that the ALTER SCHEMA command we added to the load file worked as expected, since we didn't need to specify the source_db schema in the query to view the data.

      Note that if you plan to use a load file to migrate data held on one database to another located on a separate machine, you will still need to adjust any relevant networking and firewall rules in order for the migration to be successful.

      Migrating a MySQL Database to PostgreSQL Locally

You can use pgLoader to migrate a MySQL database to a PostgreSQL database housed on the same machine. All you need to do is run the migration command from a Linux user profile with access to the root MySQL user:

      • pgloader mysql://root@localhost/source_db pgsql://sammy:postgresql_password@localhost/target_db

      Performing a local migration like this means you don't have to make any changes to MySQL's default networking configuration or your system's firewall rules.

      Migrating from a CSV file

      You can also load a PostgreSQL database with data from a CSV file.

      Assuming you have a CSV file of data named load.csv, the command to load it into a Postgres database might look like this:

      • pgloader load.csv pgsql://sammy:password@localhost/target_db

Because the CSV format is not fully standardized, there's a chance that you will run into issues when loading data directly from a CSV file in this manner. Fortunately, you can correct for irregularities by passing various command line options to pgloader or by specifying them in a load file. See the pgLoader documentation on the subject for more details.
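To give a sense of what this looks like, here is a hedged sketch of a CSV load file loosely modeled on the examples in pgLoader's documentation. The table name, column names, and option values are illustrative and would need to match your actual data:

csv_test.load

LOAD CSV
     FROM 'load.csv' (id, name, amount)
     INTO pgsql://sammy:password@localhost/target_db?csv_table (id, name, amount)
     WITH skip header = 1,
          fields optionally enclosed by '"',
          fields terminated by ','
;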

      Migrating to a Managed PostgreSQL Database

      It's also possible to perform a migration from a self-managed database to a managed PostgreSQL database. To illustrate how this kind of migration could look, we will use the MySQL server and a DigitalOcean Managed PostgreSQL Database. We'll also use the sample database we created in Step 1, but if you skipped that step and have your own database you'd like to migrate, you can point to that one instead.

      Note: For instructions on how to set up a DigitalOcean Managed Database, please refer to our Managed Database Quickstart guide.

For this migration, we won't need pgLoader's useSSL option, since it only applies to remote MySQL connections and this migration will read from a local MySQL database. However, we will use the sslmode=require option when we load and connect to the DigitalOcean Managed PostgreSQL database, which will ensure your data stays protected.

Because we're not using the useSSL option this time around, you can use apt to install pgLoader along with the postgresql-client package, which will allow you to access the Managed PostgreSQL Database from your MySQL server:

      • sudo apt install pgloader postgresql-client

      Following that, you can run the pgloader command to migrate the database. To do this, you'll need the connection string for the Managed Database.

      For DigitalOcean Managed Databases, you can copy the connection string from the Cloud Control Panel. First, click Databases in the left-hand sidebar menu and select the database to which you want to migrate the data. Then scroll down to the Connection Details section. Click on the drop down menu and select Connection string. Then, click the Copy button to copy the string to your clipboard and paste it into the following migration command, replacing the example PostgreSQL connection string shown here. This will migrate your MySQL database into the defaultdb PostgreSQL database as the doadmin PostgreSQL role:

      • pgloader mysql://root:password@localhost/source_db postgres://doadmin:password@db_host/defaultdb?sslmode=require

Following this, you can use the same connection string as an argument to psql to connect to the managed PostgreSQL database and confirm that the migration was successful:

      • psql postgres://doadmin:password@db_host/defaultdb?sslmode=require

      Then, run the following query to check that pgLoader correctly migrated the data:

      • SELECT * FROM source_db.sample_table;

      Output

 employee_id | first_name |  last_name  | start_date |   salary
-------------+------------+-------------+------------+------------
           1 | Elizabeth  | Cotten      | 2007-11-11 | $105433.18
           2 | Yanka      | Dyagileva   | 2017-10-30 | $107540.67
           3 | Lee        | Dorsey      | 2013-06-04 | $118024.04
           4 | Kasey      | Chambers    | 2010-08-18 | $116456.98
           5 | Bram       | Tchaikovsky | 2018-09-16 | $61989.50
(5 rows)

      This confirms that pgLoader successfully migrated your MySQL database to your managed PostgreSQL instance.

      Conclusion

      pgLoader is a flexible tool that can perform a database migration in a single command. With a few configuration tweaks, it can migrate an entire database from one physical machine to another using a secure SSL/TLS connection. Our hope is that by following this tutorial, you will have gained a clearer understanding of pgLoader's capabilities and potential use cases.

      After migrating your data over to PostgreSQL, you may find the following tutorials to be of interest:




      How To Configure SSL/TLS for MySQL on Ubuntu 18.04


      Introduction

      MySQL is the most popular open-source relational database management system in the world. While modern package managers have reduced some of the friction to getting MySQL up and running, there is still some further configuration that should be performed after you install it. One of the most important aspects to spend some extra time on is security.

      By default, MySQL is configured to only accept local connections, or connections that originate from the same machine where MySQL is installed. If you need to access your MySQL database from a remote location, it’s important that you do so securely. In this guide, we will demonstrate how to configure MySQL on Ubuntu 18.04 to accept remote connections with SSL/TLS encryption.

      Prerequisites

      To complete this guide, you will need:

      • Two Ubuntu 18.04 servers. We will use one of these servers as the MySQL server while we’ll use the other as the client machine. Create a non-root user with sudo privileges and enable a firewall with ufw on each of these servers. Follow our Ubuntu 18.04 initial server setup guide to get both servers into the appropriate initial state.
      • On one of the machines, install and configure the MySQL server. Follow Steps 1 through 3 of our MySQL installation guide for Ubuntu 18.04 to do this. As you follow this guide, be sure to configure your root MySQL user to authenticate with a password, as described in Step 3 of the guide, as this is necessary to connect to MySQL using TCP rather than the local Unix socket.

      Please note that throughout this guide, the server on which you installed MySQL will be referred to as the MySQL server and any commands that should be run on this machine will be shown with a blue background, like this:

      Similarly, this guide will refer to the other server as the MySQL client and any commands that must be run on that machine will be shown with a red background:

      Please keep these in mind as you follow along with this tutorial so as to avoid any confusion.

      Step 1 — Checking MySQL's Current SSL/TLS Status

      Before you make any configuration changes, you can check the current SSL/TLS status on the MySQL server instance.

      Use the following command to begin a MySQL session as the root MySQL user. This command includes the -p option, which instructs mysql to prompt you for a password in order to log in. It also includes the -h option which is used to specify the host to connect to. In this case it points it to 127.0.0.1, the IPv4 loopback interface also known as localhost. This will force the client to connect with TCP instead of using the local socket file. MySQL attempts to make connections through a Unix socket file by default. This is generally faster and more secure, since these connections can only be made locally and don't have to go through all the checks and routing operations that TCP connections must perform. Connecting with TCP, however, allows us to check the SSL status of the connection:

      • mysql -u root -p -h 127.0.0.1

      You will be prompted for the MySQL root password that you chose when you installed and configured MySQL. After entering it you'll be dropped into an interactive MySQL session.

Show the state of the SSL/TLS variables by issuing the following command:

      • SHOW VARIABLES LIKE '%ssl%';

      Output

+---------------+----------+
| Variable_name | Value    |
+---------------+----------+
| have_openssl  | DISABLED |
| have_ssl      | DISABLED |
| ssl_ca        |          |
| ssl_capath    |          |
| ssl_cert      |          |
| ssl_cipher    |          |
| ssl_crl       |          |
| ssl_crlpath   |          |
| ssl_key       |          |
+---------------+----------+
9 rows in set (0.01 sec)

      The have_openssl and have_ssl variables are both marked as DISABLED. This means that SSL functionality has been compiled into the server, but that it is not yet enabled.

Check the status of your current connection to confirm this:

• status

      Output

--------------
mysql  Ver 14.14 Distrib 5.7.26, for Linux (x86_64) using  EditLine wrapper

Connection id:          9
Current database:
Current user:           root@localhost
SSL:                    Not in use
Current pager:          stdout
Using outfile:          ''
Using delimiter:        ;
Server version:         5.7.26-0ubuntu0.18.04.1 (Ubuntu)
Protocol version:       10
Connection:             127.0.0.1 via TCP/IP
Server characterset:    latin1
Db     characterset:    latin1
Client characterset:    utf8
Conn.  characterset:    utf8
TCP port:               3306
Uptime:                 40 min 11 sec

Threads: 1  Questions: 33  Slow queries: 0  Opens: 113  Flush tables: 1  Open tables: 106  Queries per second avg: 0.013
--------------

      As the above output indicates, SSL is not currently in use for this connection, even though you're connected over TCP.

Close the current MySQL session when you are finished:

• exit

      Now that you've confirmed your MySQL server isn't using SSL, you can move on to the next step where you will begin the process of enabling SSL by generating some certificates and keys. These will allow your server and client to communicate with one another securely.

      Step 2 — Generating SSL/TLS Certificates and Keys

      To enable SSL connections to MySQL, you first need to generate the appropriate certificate and key files. MySQL versions 5.7 and above provide a utility called mysql_ssl_rsa_setup that helps simplify this process. The version of MySQL you installed by following the prerequisite MySQL tutorial includes this utility, so we will use it here to generate the necessary files.

      The MySQL process must be able to read the generated files, so use the --uid option to declare mysql as the system user that should own the generated files:

      • sudo mysql_ssl_rsa_setup --uid=mysql

      This will produce output that looks similar to the following:

      Output

Generating a 2048 bit RSA private key
.+++
..........+++
writing new private key to 'ca-key.pem'
-----
Generating a 2048 bit RSA private key
........................................+++
............+++
writing new private key to 'server-key.pem'
-----
Generating a 2048 bit RSA private key
.................................+++
............................................................+++
writing new private key to 'client-key.pem'
-----

      These new files will be stored in MySQL's data directory, located by default at /var/lib/mysql. Check the generated files by typing:

      • sudo find /var/lib/mysql -name '*.pem' -ls

      Output

   258930      4 -rw-r--r--   1 mysql    mysql     1107 May  3 16:43 /var/lib/mysql/client-cert.pem
   258919      4 -rw-r--r--   1 mysql    mysql      451 May  3 16:43 /var/lib/mysql/public_key.pem
   258925      4 -rw-------   1 mysql    mysql     1675 May  3 16:43 /var/lib/mysql/server-key.pem
   258927      4 -rw-r--r--   1 mysql    mysql     1107 May  3 16:43 /var/lib/mysql/server-cert.pem
   258922      4 -rw-------   1 mysql    mysql     1675 May  3 16:43 /var/lib/mysql/ca-key.pem
   258928      4 -rw-------   1 mysql    mysql     1675 May  3 16:43 /var/lib/mysql/client-key.pem
   258924      4 -rw-r--r--   1 mysql    mysql     1107 May  3 16:43 /var/lib/mysql/ca.pem
   258918      4 -rw-------   1 mysql    mysql     1679 May  3 16:43 /var/lib/mysql/private_key.pem

      These files are the key and certificate pairs for the certificate authority (starting with "ca"), the MySQL server process (starting with "server"), and for MySQL clients (starting with "client"). Additionally, the private_key.pem and public_key.pem files are used by MySQL to securely transfer passwords when not using SSL.
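If you'd like to inspect any of these certificates, for example to check the subject or expiration date, you can do so with openssl:

• sudo openssl x509 -in /var/lib/mysql/ca.pem -noout -subject -dates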

      Now that you have the necessary certificate and key files, continue on to enable the use of SSL on your MySQL instance.

      Step 3 — Enabling SSL Connections on the MySQL Server

      Modern versions of MySQL look for the appropriate certificate files within the MySQL data directory whenever the server starts. Because of this, you won't need to modify MySQL’s configuration to enable SSL.

      Instead, enable SSL by restarting the MySQL service:

      • sudo systemctl restart mysql

      After restarting, open up a new MySQL session using the same command as before. The MySQL client will automatically attempt to connect using SSL if it is supported by the server:

      • mysql -u root -p -h 127.0.0.1

      Let's take another look at the same information we requested last time. Check the values of the SSL-related variables:

      • SHOW VARIABLES LIKE '%ssl%';

      Output

+---------------+-----------------+
| Variable_name | Value           |
+---------------+-----------------+
| have_openssl  | YES             |
| have_ssl      | YES             |
| ssl_ca        | ca.pem          |
| ssl_capath    |                 |
| ssl_cert      | server-cert.pem |
| ssl_cipher    |                 |
| ssl_crl       |                 |
| ssl_crlpath   |                 |
| ssl_key       | server-key.pem  |
+---------------+-----------------+
9 rows in set (0.00 sec)

      The have_openssl and have_ssl variables now read YES instead of DISABLED. Furthermore, the ssl_ca, ssl_cert, and ssl_key variables have been populated with the names of the respective files that we just generated.

Next, check the connection details again:

• status

      Output

--------------
. . .
SSL:                    Cipher in use is DHE-RSA-AES256-SHA
. . .
Connection:             127.0.0.1 via TCP/IP
. . .
--------------

      This time, the specific SSL cipher is displayed, indicating that SSL is being used to secure the connection.

Exit back out to the shell:

• exit

      Your server is now capable of using encryption, but some additional configuration is required to allow remote access and mandate the use of secure connections.

      Step 4 — Configuring Secure Connections for Remote Clients

Now that you've enabled SSL on the MySQL server, you can begin configuring secure remote access. To do this, you'll configure your MySQL server to require that any remote connections be made over SSL, bind MySQL to listen on a public interface, and adjust your system's firewall rules to allow external connections.

      Currently, the MySQL server is configured to accept SSL connections from clients. However, it will still allow unencrypted connections if requested by the client. We can change this by turning on the require_secure_transport option. This requires all connections to be made either with SSL or with a local Unix socket. Since Unix sockets are only accessible from within the server itself, the only connection option available to remote users will be with SSL.

      To enable this setting, open the MySQL configuration file in your preferred text editor. Here, we'll use nano:

      • sudo nano /etc/mysql/my.cnf

      Inside there will be two !includedir directives which are used to source additional configuration files. You must add your own configuration beneath these lines so that it overrides any conflicting settings found in these additional configuration files.

      Start by creating a [mysqld] section to target the MySQL server process. Under that section header, set require_secure_transport to ON, which will force MySQL to only allow secure connections:

      /etc/mysql/my.cnf

      . . .
      
      !includedir /etc/mysql/conf.d/
      !includedir /etc/mysql/mysql.conf.d/
      
      [mysqld]
      # Require clients to connect either using SSL
      # or through a local socket file
      require_secure_transport = ON
      

By default, MySQL is configured to only listen for connections that originate from 127.0.0.1, the loopback IP address that represents localhost. In other words, MySQL only accepts connections from the machine on which the MySQL server is installed.

      In order to allow MySQL to listen for external connections, you must configure it to listen for connections on an external IP address. To do this, you can add the bind-address setting and point it to 0.0.0.0, a wildcard IP address that represents all IP addresses. Essentially, this will force MySQL to listen for connections on every interface:

      /etc/mysql/my.cnf

      . . .
      
      !includedir /etc/mysql/conf.d/
      !includedir /etc/mysql/mysql.conf.d/
      
      [mysqld]
      # Require clients to connect either using SSL
      # or through a local socket file
      require_secure_transport = ON
      bind-address = 0.0.0.0
      

      Note: You could alternatively set bind-address to your MySQL server's public IP address. However, you would need to remember to update your my.cnf file if you ever migrate your database to another machine.

      After adding these lines, save and close the file. If you used nano to edit the file, you can do so by pressing CTRL+X, Y, then ENTER.

      Next, restart MySQL to apply the new settings:

      • sudo systemctl restart mysql

Verify that MySQL is listening on 0.0.0.0 instead of 127.0.0.1 by typing:

• sudo netstat -plunt

      The output of this command will look like this:

      Output

Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:3306            0.0.0.0:*               LISTEN      13317/mysqld
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      1293/sshd
tcp6       0      0 :::22                   :::*                    LISTEN      1293/sshd

      The 0.0.0.0 highlighted in the above output indicates that MySQL is listening for connections on all available interfaces.
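Note: If netstat isn't available on your system, the ss utility reports the same information:

• sudo ss -plunt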

Next, allow MySQL connections through your server's firewall. Add an exception to your ufw rules by typing:

• sudo ufw allow mysql

      Output

Rule added
Rule added (v6)

      With that, remote connection attempts are now able to reach your MySQL server. However, you don't currently have any users configured that can connect from a remote machine. We'll create and configure a MySQL user that can connect from your client machine in the next step.

      Step 5 — Creating a Dedicated MySQL User

      At this point, your MySQL server will reject any attempt to connect from a remote client machine. This is because the existing MySQL users are all only configured to connect locally from the MySQL server. To resolve this, you will create a dedicated user that will only be able to connect from your client machine.

To create such a user, log back into MySQL as the root user:

• mysql -u root -p

      From the prompt, create a new remote user with the CREATE USER command. You can name this user whatever you'd like, but in this guide we name it mysql_user. Be sure to specify your client machine's IP address in the host portion of the user specification to restrict connections to that machine and to replace password with a secure password of your choosing. Also, for some redundancy in case the require_secure_transport option is turned off in the future, specify that this user requires SSL by including the REQUIRE SSL clause, as shown here:

      • CREATE USER 'mysql_user'@'your_mysql_client_IP' IDENTIFIED BY 'password' REQUIRE SSL;

Next, grant the new user permissions on whichever databases or tables they should have access to. To demonstrate, create an example database:

• CREATE DATABASE example;

      Then give your new user access to this database and all of its tables:

      • GRANT ALL ON example.* TO 'mysql_user'@'your_mysql_client_IP';

Next, flush the privileges to apply those settings immediately:

• FLUSH PRIVILEGES;

Then exit back out to the shell when you are done:

• exit

      Your MySQL server is now set up to allow connections from your remote user. To test that you can connect to MySQL successfully, you will need to install the mysql-client package on the MySQL client.

Log in to your client machine with ssh:

      • ssh sammy@your_mysql_client_ip

Then update the client machine's package index:

• sudo apt update

      And install mysql-client with the following command:

      • sudo apt install mysql-client

      When prompted, confirm the installation by pressing ENTER.

Once APT finishes installing the package, run the following command to test whether you can connect to the server successfully. This command includes the -u option to specify mysql_user as the user and the -h option to specify the MySQL server's IP address:

      • mysql -u mysql_user -p -h your_mysql_server_IP

After submitting the password, you will be logged in to the remote server. Use the \s command to check the server's status and confirm that your connection is secure:

      Output

--------------
. . .
SSL:                    Cipher in use is DHE-RSA-AES256-SHA
. . .
Connection:             your_mysql_server_IP via TCP/IP
. . .
--------------

Exit back out to the shell:

• exit

      You've confirmed that you're able to connect to MySQL over SSL. However, you've not yet confirmed that the MySQL server is rejecting insecure connections. To test this, try connecting once more, but this time append --ssl-mode=disabled to the login command. This will instruct mysql-client to attempt an unencrypted connection:

      • mysql -u mysql_user -p -h mysql_server_IP --ssl-mode=disabled

      After entering your password when prompted, your connection will be refused:

      Output

      ERROR 1045 (28000): Access denied for user 'mysql_user'@'mysql_server_IP' (using password: YES)

      This shows that SSL connections are permitted while unencrypted connections are refused.

      At this point, your MySQL server has been configured to accept secure remote connections. You can stop here if this satisfies your security requirements, but there are some additional pieces that you can put into place to enhance security and trust between your two servers.

      Step 6 — (Optional) Configuring Validation for MySQL Connections

      Currently, your MySQL server is configured with an SSL certificate signed by a locally generated certificate authority (CA). The server's certificate and key pair are enough to provide encryption for incoming connections.

      However, you aren't yet fully leveraging the trust relationship that a certificate authority can provide. By distributing the CA certificate to clients — as well as the client certificate and key — both parties can provide proof that their certificates were signed by a mutually trusted certificate authority. This can help prevent spoofed connections from malicious servers.

      In order to implement this extra, optional safeguard, we will transfer the appropriate SSL files to the client machine, create a client configuration file, and alter the remote MySQL user to require a trusted certificate.

Note: The process for transferring the CA certificate, client certificate, and client key to the MySQL client outlined in the following paragraphs involves displaying each file's contents with cat, copying those contents to your clipboard, and pasting them into a new file on the client machine. While it is possible to copy these files directly with a program like scp or sftp, this also requires you to set up SSH keys for both servers so as to allow them to communicate over SSH.

      Our goal here is to keep the number of potential avenues for connecting to your MySQL server down to a minimum. While this process is slightly more laborious than directly transferring the files, it is equally secure and doesn't require you to open an SSH connection between the two machines.
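For reference, if you already have SSH keys set up between the machines and prefer a direct transfer, a hedged sketch of that alternative, run from the MySQL server, might look like the following. Because the files are owned by the mysql user, they must first be copied somewhere your own user can read them:

• sudo cp /var/lib/mysql/ca.pem /var/lib/mysql/client-cert.pem /var/lib/mysql/client-key.pem ~/
• sudo chown sammy: ~/ca.pem ~/client-cert.pem ~/client-key.pem
• scp ~/ca.pem ~/client-cert.pem ~/client-key.pem sammy@your_mysql_client_ip:~/client-ssl/

The rest of this step follows the copy-and-paste approach.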

Begin by making a directory on the MySQL client in the home directory of your non-root user. Call this directory client-ssl:

• mkdir ~/client-ssl

Because the certificate key is sensitive, lock down access to this directory so that only the current user can access it:

• chmod 700 ~/client-ssl

      On the MySQL server, display the contents of the CA certificate by typing:

      • sudo cat /var/lib/mysql/ca.pem

      Output

-----BEGIN CERTIFICATE-----
. . .
-----END CERTIFICATE-----

      Copy the entire output, including the BEGIN CERTIFICATE and END CERTIFICATE lines, to your clipboard.

On the MySQL client, create a file with the same name inside the new directory:

• nano ~/client-ssl/ca.pem

      Inside, paste the copied certificate contents from your clipboard. Save and close the file when you are finished.

      Next, display the client certificate on the MySQL server:

      • sudo cat /var/lib/mysql/client-cert.pem

      Output

-----BEGIN CERTIFICATE-----
. . .
-----END CERTIFICATE-----

      Copy the file contents to your clipboard. Again, remember to include the first and last line.

      Open a file with the same name on the MySQL client within the client-ssl directory:

      • nano ~/client-ssl/client-cert.pem

      Paste the contents from your clipboard. Save and close the file.

      Finally, display the contents of the client key file on the MySQL server:

      • sudo cat /var/lib/mysql/client-key.pem

      Output

-----BEGIN RSA PRIVATE KEY-----
. . .
-----END RSA PRIVATE KEY-----

      Copy the displayed contents, including the first and last line, to your clipboard.

      On the MySQL client, open a file with the same name in the client-ssl directory:

      • nano ~/client-ssl/client-key.pem

      Paste the contents from your clipboard. Save and close the file.

      The client machine now has all of the credentials required to access the MySQL server. However, the MySQL server is still not set up to require trusted certificates for client connections.

To change this, log in to the MySQL root account again on the MySQL server:

• mysql -u root -p

      From here, change the security requirements for your remote user. Instead of the REQUIRE SSL clause, apply the REQUIRE X509 clause. This implies all of the security provided by the REQUIRE SSL clause, but additionally requires the connecting client to present a certificate signed by a certificate authority that the MySQL server trusts.

      To adjust the user requirements, use the ALTER USER command:

• ALTER USER 'mysql_user'@'your_mysql_client_IP' REQUIRE X509;

Then flush the changes to ensure that they are applied immediately:

• FLUSH PRIVILEGES;

Exit back out to the shell when you are finished:

• exit

      Following that, check whether you can validate both parties when you connect.

      On the MySQL client, first try to connect without providing the client certificates:

      • mysql -u mysql_user -p -h mysql_server_IP

      Output

      ERROR 1045 (28000): Access denied for user 'mysql_user'@'mysql_client_IP' (using password: YES)

      As expected, the server rejects the connection when no client certificate is presented.

      Now, connect while using the --ssl-ca, --ssl-cert, and --ssl-key options to point to the relevant files within the ~/client-ssl directory:

      • mysql -u mysql_user -p -h mysql_server_IP --ssl-ca=~/client-ssl/ca.pem --ssl-cert=~/client-ssl/client-cert.pem --ssl-key=~/client-ssl/client-key.pem

      You've provided the client with the appropriate certificates and keys, so this attempt will be successful:

Log back out to regain access to your shell session:

• exit

      Now that you've confirmed access to the server, let's implement a small usability improvement in order to avoid having to specify the certificate files each time you connect.

Inside your home directory on the MySQL client machine, create a hidden configuration file called ~/.my.cnf:

• nano ~/.my.cnf

      At the top of the file, create a section called [client]. Underneath, add the ssl-ca, ssl-cert, and ssl-key options and point them to the respective files you copied over from the server. It will look like this:

      ~/.my.cnf

      [client]
      ssl-ca = ~/client-ssl/ca.pem
      ssl-cert = ~/client-ssl/client-cert.pem
      ssl-key = ~/client-ssl/client-key.pem
      

      The ssl-ca option tells the client to verify that the certificate presented by the MySQL server is signed by the certificate authority you pointed to. This allows the client to trust that it is connecting to a trusted MySQL server. Likewise, the ssl-cert and ssl-key options point to the files needed to prove to the MySQL server that it too has a certificate that has been signed by the same certificate authority. You'll need this if you want the MySQL server to verify that the client was trusted by the CA as well.

      Save and close the file when you are finished.
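Because ssl-key points at a private key, it's also sensible to restrict this configuration file so that only your user can read it:

• chmod 600 ~/.my.cnf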

      Now, you can connect to the MySQL server without adding the --ssl-ca, --ssl-cert, and --ssl-key options on the command line:

• mysql -u mysql_user -p -h your_mysql_server_IP

      Your client and server will now each be presenting certificates when negotiating the connection. Each party is configured to verify the remote certificate against the CA certificate it has locally.
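To confirm that the connection is still encrypted, you can check the status output once more from the MySQL prompt and verify that the SSL line reports a cipher, as it did before:

• status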

      Conclusion

      Your MySQL server is now configured to require secure connections from remote clients. Additionally, if you followed the steps to validate connections using the certificate authority, some level of trust is established by both sides that the remote party is legitimate.




      How To Install WordPress With Docker Compose


      Introduction

      WordPress is a free and open-source Content Management System (CMS) built on a MySQL database with PHP processing. Thanks to its extensible plugin architecture and templating system, and the fact that most of its administration can be done through the web interface, WordPress is a popular choice when creating different types of websites, from blogs to product pages to eCommerce sites.

      Running WordPress typically involves installing a LAMP (Linux, Apache, MySQL, and PHP) or LEMP (Linux, Nginx, MySQL, and PHP) stack, which can be time-consuming. However, by using tools like Docker and Docker Compose, you can simplify the process of setting up your preferred stack and installing WordPress. Instead of installing individual components by hand, you can use images, which standardize things like libraries, configuration files, and environment variables, and run these images in containers, isolated processes that run on a shared operating system. Additionally, by using Compose, you can coordinate multiple containers — for example, an application and database — to communicate with one another.

      In this tutorial, you will build a multi-container WordPress installation. Your containers will include a MySQL database, an Nginx web server, and WordPress itself. You will also secure your installation by obtaining TLS/SSL certificates with Let’s Encrypt for the domain you want associated with your site. Finally, you will set up a cron job to renew your certificates so that your domain remains secure.

      Prerequisites

      To follow this tutorial, you will need:

      • A server running Ubuntu 18.04, along with a non-root user with sudo privileges and an active firewall. For guidance on how to set these up, please see this Initial Server Setup guide.
      • Docker installed on your server, following Steps 1 and 2 of How To Install and Use Docker on Ubuntu 18.04.
      • Docker Compose installed on your server, following Step 1 of How To Install Docker Compose on Ubuntu 18.04.
      • A registered domain name. This tutorial will use example.com throughout. You can get one for free at Freenom, or use the domain registrar of your choice.
      • Both of the following DNS records set up for your server. You can follow this introduction to DigitalOcean DNS for details on how to add them to a DigitalOcean account, if that’s what you’re using:

        • An A record with example.com pointing to your server’s public IP address.
        • An A record with www.example.com pointing to your server’s public IP address.

      Step 1 — Defining the Web Server Configuration

      Before running any containers, our first step will be to define the configuration for our Nginx web server. Our configuration file will include some WordPress-specific location blocks, along with a location block to direct Let’s Encrypt verification requests to the Certbot client for automated certificate renewals.

      First, create a project directory for your WordPress setup called wordpress and navigate to it:

      • mkdir wordpress && cd wordpress

      Next, make a directory for the configuration file:
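
      • mkdir nginx-conf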

      Open the file with nano or your favorite editor:

      • nano nginx-conf/nginx.conf

      In this file, we will add a server block with directives for our server name and document root, and location blocks to handle the Certbot client's certificate requests, PHP processing, and requests for static assets.

      Paste the following code into the file. Be sure to replace example.com with your own domain name:

      ~/wordpress/nginx-conf/nginx.conf

      server {
              listen 80;
              listen [::]:80;
      
              server_name example.com www.example.com;
      
              index index.php index.html index.htm;
      
              root /var/www/html;
      
              location ~ /.well-known/acme-challenge {
                      allow all;
                      root /var/www/html;
              }
      
              location / {
                      try_files $uri $uri/ /index.php$is_args$args;
              }
      
              location ~ \.php$ {
                      try_files $uri =404;
                      fastcgi_split_path_info ^(.+\.php)(/.+)$;
                      fastcgi_pass wordpress:9000;
                      fastcgi_index index.php;
                      include fastcgi_params;
                      fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                      fastcgi_param PATH_INFO $fastcgi_path_info;
              }
      
              location ~ /\.ht {
                      deny all;
              }
      
              location = /favicon.ico { 
                      log_not_found off; access_log off; 
              }
              location = /robots.txt { 
                      log_not_found off; access_log off; allow all; 
              }
              location ~* \.(css|gif|ico|jpeg|jpg|js|png)$ {
                      expires max;
                      log_not_found off;
              }
      }
      

      Our server block includes the following information:

      Directives:

      • listen: This tells Nginx to listen on port 80, which will allow us to use Certbot's webroot plugin for our certificate requests. Note that we are not including port 443 yet — we will update our configuration to include SSL once we have successfully obtained our certificates.
      • server_name: This defines your server name and the server block that should be used for requests to your server. Be sure to replace example.com in this line with your own domain name.
      • index: The index directive defines the files that will be used as indexes when processing requests to your server. We've modified the default order of priority here, moving index.php in front of index.html so that Nginx prioritizes files called index.php when possible.
      • root: Our root directive names the root directory for requests to our server. This directory, /var/www/html, is created as a mount point at build time by instructions in our WordPress Dockerfile. These Dockerfile instructions also ensure that the files from the WordPress release are mounted to this volume.

      Location Blocks:

      • location ~ /.well-known/acme-challenge: This location block will handle requests to the .well-known directory, where Certbot will place a temporary file to validate that the DNS for our domain resolves to our server. With this configuration in place, we will be able to use Certbot's webroot plugin to obtain certificates for our domain.
      • location /: In this location block, we'll use a try_files directive to check for files that match individual URI requests. Instead of returning a 404 Not Found status as a default, however, we'll pass control to WordPress's index.php file with the request arguments.
      • location ~ \.php$: This location block will handle PHP processing and proxy these requests to our wordpress container. Because our WordPress Docker image will be based on the php:fpm image, we will also include configuration options that are specific to the FastCGI protocol in this block. Nginx requires an independent PHP processor for PHP requests: in our case, these requests will be handled by the php-fpm processor that's included with the php:fpm image.
        Additionally, this location block includes FastCGI-specific directives, variables, and options that will proxy requests to the WordPress application running in our wordpress container, set the preferred index for the parsed request URI, and parse URI requests.
      • location ~ /\.ht: This block will handle .htaccess files since Nginx won't serve them. The deny all directive ensures that .htaccess files will never be served to users.
      • location = /favicon.ico, location = /robots.txt: These blocks ensure that requests to /favicon.ico and /robots.txt will not be logged.
      • location ~* \.(css|gif|ico|jpeg|jpg|js|png)$: This block turns off logging for static asset requests and ensures that these assets are highly cacheable, as they are typically expensive to serve.

      For more information about FastCGI proxying, see Understanding and Implementing FastCGI Proxying in Nginx. For information about server and location blocks, see Understanding Nginx Server and Location Block Selection Algorithms.

      Save and close the file when you are finished editing. If you used nano, do so by pressing CTRL+X, Y, then ENTER.

      With your Nginx configuration in place, you can move on to creating environment variables to pass to your application and database containers at runtime.

      Step 2 — Defining Environment Variables

      Your database and WordPress application containers will need access to certain environment variables at runtime in order for your application data to persist and be accessible to your application. These variables include both sensitive and non-sensitive information: sensitive values for your MySQL root password and application database user and password, and non-sensitive information for your application database name and host.

      Rather than setting all of these values in our Docker Compose file — the main file that contains information about how our containers will run — we can set the sensitive values in an .env file and restrict its circulation. This will prevent these values from copying over to our project repositories and being exposed publicly.

      In your main project directory, ~/wordpress, open a file called .env:
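
      • nano .env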

      The confidential values that we will set in this file include a password for our MySQL root user, and a username and password that WordPress will use to access the database.

      Add the following variable names and values to the file. Remember to supply your own values here for each variable:

      ~/wordpress/.env

      MYSQL_ROOT_PASSWORD=your_root_password
      MYSQL_USER=your_wordpress_database_user
      MYSQL_PASSWORD=your_wordpress_database_password
      

      We have included a password for the root administrative account, as well as our preferred username and password for our application database.

      Save and close the file when you are finished editing.
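
      Because this file holds credentials in plain text, you may also want to restrict its permissions so that only your user can read and write it:

      • chmod 600 .env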

      Because your .env file contains sensitive information, you will want to ensure that it is included in your project's .gitignore and .dockerignore files, which tell Git and Docker what files not to copy to your Git repositories and Docker images, respectively.

      If you plan to work with Git for version control, initialize your current working directory as a repository with git init:
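
      • git init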

      Then open a .gitignore file:
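
      • nano .gitignore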

      Add .env to the file:

      ~/wordpress/.gitignore

      .env
      

      Save and close the file when you are finished editing.

      Likewise, it's a good precaution to add .env to a .dockerignore file, so that it won't end up in your images when you are using this directory as your build context.

      Open the file:
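
      • nano .dockerignore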

      Add .env to the file:

      ~/wordpress/.dockerignore

      .env
      

      Below this, you can optionally add files and directories associated with your application's development:

      ~/wordpress/.dockerignore

      .env
      .git
      docker-compose.yml
      .dockerignore
      

      Save and close the file when you are finished.

      With your sensitive information in place, you can now move on to defining your services in a docker-compose.yml file.

      Step 3 — Defining Services with Docker Compose

      Your docker-compose.yml file will contain the service definitions for your setup. A service in Compose is a running container, and service definitions specify information about how each container will run.

      Using Compose, you can define different services in order to run multi-container applications, since Compose allows you to link these services together with shared networks and volumes. This will be helpful for our current setup since we will create different containers for our database, WordPress application, and web server. We will also create a container to run the Certbot client in order to obtain certificates for our webserver.

      To begin, open the docker-compose.yml file:
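
      • nano docker-compose.yml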

      Add the following code to define your Compose file version and db database service:

      ~/wordpress/docker-compose.yml

      version: '3'
      
      services:
        db:
          image: mysql:8.0
          container_name: db
          restart: unless-stopped
          env_file: .env
          environment:
            - MYSQL_DATABASE=wordpress
          volumes: 
            - dbdata:/var/lib/mysql
          command: '--default-authentication-plugin=mysql_native_password'
          networks:
            - app-network
      

      The db service definition contains the following options:

      • image: This tells Compose what image to pull to create the container. We are pinning the mysql:8.0 image here to avoid future conflicts as the mysql:latest image continues to be updated. For more information about version pinning and avoiding dependency conflicts, see the Docker documentation on Dockerfile best practices.
      • container_name: This specifies a name for the container.
      • restart: This defines the container restart policy. The default is no, but we have set the container to restart unless it is stopped manually.
      • env_file: This option tells Compose that we would like to add environment variables from a file called .env, located in our build context. In this case, the build context is our current directory.
      • environment: This option allows you to add additional environment variables, beyond those defined in your .env file. We will set the MYSQL_DATABASE variable equal to wordpress to provide a name for our application database. Because this is non-sensitive information, we can include it directly in the docker-compose.yml file.
      • volumes: Here, we're mounting a named volume called dbdata to the /var/lib/mysql directory on the container. This is the standard data directory for MySQL on most distributions.
      • command: This option specifies a command to override the default CMD instruction for the image. In our case, we will add an option to the Docker image's standard mysqld command, which starts the MySQL server on the container. This option, --default-authentication-plugin=mysql_native_password, sets the --default-authentication-plugin system variable to mysql_native_password, specifying which authentication mechanism should govern new authentication requests to the server. Since PHP and therefore our WordPress image won't support MySQL's newer authentication default, we must make this adjustment in order to authenticate our application database user.
      • networks: This specifies that our application service will join the app-network network, which we will define at the bottom of the file.

      Next, below your db service definition, add the definition for your wordpress application service:

      ~/wordpress/docker-compose.yml

      ...
        wordpress:
          depends_on: 
            - db
          image: wordpress:5.1.1-fpm-alpine
          container_name: wordpress
          restart: unless-stopped
          env_file: .env
          environment:
            - WORDPRESS_DB_HOST=db:3306
            - WORDPRESS_DB_USER=$MYSQL_USER
            - WORDPRESS_DB_PASSWORD=$MYSQL_PASSWORD
            - WORDPRESS_DB_NAME=wordpress
          volumes:
            - wordpress:/var/www/html
          networks:
            - app-network
      

      In this service definition, we are naming our container and defining a restart policy, as we did with the db service. We're also adding some options specific to this container:

      • depends_on: This option ensures that our containers will start in order of dependency, with the wordpress container starting after the db container. Our WordPress application relies on the existence of our application database and user, so expressing this order of dependency will enable our application to start properly.
      • image: For this setup, we are using the 5.1.1-fpm-alpine WordPress image. As discussed in Step 1, using this image ensures that our application will have the php-fpm processor that Nginx requires to handle PHP processing. This is also an alpine image, derived from the Alpine Linux project, which will help keep our overall image size down. For more information about the benefits and drawbacks of using alpine images and whether or not this makes sense for your application, see the full discussion under the Image Variants section of the Docker Hub WordPress image page.
      • env_file: Again, we specify that we want to pull values from our .env file, since this is where we defined our application database user and password.
      • environment: Here, we're using the values we defined in our .env file, but we're assigning them to the variable names that the WordPress image expects: WORDPRESS_DB_USER and WORDPRESS_DB_PASSWORD. We're also defining a WORDPRESS_DB_HOST, which will be the MySQL server running on the db container that's accessible on MySQL's default port, 3306. Our WORDPRESS_DB_NAME will be the same value we specified in the MySQL service definition for our MYSQL_DATABASE: wordpress.
      • volumes: We are mounting a named volume called wordpress to the /var/www/html mountpoint created by the WordPress image. Using a named volume in this way will allow us to share our application code with other containers.
      • networks: We're also adding the wordpress container to the app-network network.

      Next, below the wordpress application service definition, add the following definition for your webserver Nginx service:

      ~/wordpress/docker-compose.yml

      ...
        webserver:
          depends_on:
            - wordpress
          image: nginx:1.15.12-alpine
          container_name: webserver
          restart: unless-stopped
          ports:
            - "80:80"
          volumes:
            - wordpress:/var/www/html
            - ./nginx-conf:/etc/nginx/conf.d
            - certbot-etc:/etc/letsencrypt
          networks:
            - app-network
      

      Again, we're naming our container and making it dependent on the wordpress container in order of starting. We're also using an alpine image — the 1.15.12-alpine Nginx image.

      This service definition also includes the following options:

      • ports: This exposes port 80 to enable the configuration options we defined in our nginx.conf file in Step 1.
      • volumes: Here, we are defining a combination of named volumes and bind mounts:
        • wordpress:/var/www/html: This will mount our WordPress application code to the /var/www/html directory, the directory we set as the root in our Nginx server block.
        • ./nginx-conf:/etc/nginx/conf.d: This will bind mount the Nginx configuration directory on the host to the relevant directory on the container, ensuring that any changes we make to files on the host will be reflected in the container.
        • certbot-etc:/etc/letsencrypt: This will mount the relevant Let's Encrypt certificates and keys for our domain to the appropriate directory on the container.

      And again, we've added this container to the app-network network.

      Finally, below your webserver definition, add your last service definition for the certbot service. Be sure to replace the email address and domain names listed here with your own information:

      ~/wordpress/docker-compose.yml

        certbot:
          depends_on:
            - webserver
          image: certbot/certbot
          container_name: certbot
          volumes:
            - certbot-etc:/etc/letsencrypt
            - wordpress:/var/www/html
          command: certonly --webroot --webroot-path=/var/www/html --email sammy@example.com --agree-tos --no-eff-email --staging -d example.com -d www.example.com
      

      This definition tells Compose to pull the certbot/certbot image from Docker Hub. It also uses named volumes to share resources with the Nginx container, including the domain certificates and key in certbot-etc and the application code in wordpress.

      Again, we've used depends_on to specify that the certbot container should be started once the webserver service is running.

      We've also included a command option that specifies a subcommand to run with the container's default certbot command. The certonly subcommand will obtain a certificate with the following options:

      • --webroot: This tells Certbot to use the webroot plugin to place files in the webroot folder for authentication. This plugin depends on the HTTP-01 validation method, which uses an HTTP request to prove that Certbot can access resources from a server that responds to a given domain name.
      • --webroot-path: This specifies the path of the webroot directory.
      • --email: Your preferred email for registration and recovery.
      • --agree-tos: This specifies that you agree to ACME's Subscriber Agreement.
      • --no-eff-email: This tells Certbot that you do not wish to share your email with the Electronic Frontier Foundation (EFF). Feel free to omit this if you would prefer.
      • --staging: This tells Certbot that you would like to use Let's Encrypt's staging environment to obtain test certificates. Using this option allows you to test your configuration options and avoid possible domain request limits. For more information about these limits, please see Let's Encrypt's rate limits documentation.
      • -d: This allows you to specify domain names you would like to apply to your request. In this case, we've included example.com and www.example.com. Be sure to replace these with your own domain.

      Below the certbot service definition, add your network and volume definitions:

      ~/wordpress/docker-compose.yml

      ...
      volumes:
        certbot-etc:
        wordpress:
        dbdata:
      
      networks:
        app-network:
          driver: bridge  
      

      Our top-level volumes key defines the volumes certbot-etc, wordpress, and dbdata. When Docker creates volumes, the contents of the volume are stored in a directory on the host filesystem, /var/lib/docker/volumes/, that's managed by Docker. The contents of each volume then get mounted from this directory to any container that uses the volume. In this way, it's possible to share code and data between containers.

      The user-defined bridge network app-network enables communication between our containers since they are on the same Docker daemon host. This streamlines traffic and communication within the application, as it opens all ports between containers on the same bridge network without exposing any ports to the outside world. Thus, our db, wordpress, and webserver containers can communicate with each other, and we only need to expose port 80 for front-end access to the application.

      The finished docker-compose.yml file will look like this:

      ~/wordpress/docker-compose.yml

      version: '3'
      
      services:
        db:
          image: mysql:8.0
          container_name: db
          restart: unless-stopped
          env_file: .env
          environment:
            - MYSQL_DATABASE=wordpress
          volumes: 
            - dbdata:/var/lib/mysql
          command: '--default-authentication-plugin=mysql_native_password'
          networks:
            - app-network
      
        wordpress:
          depends_on: 
            - db
          image: wordpress:5.1.1-fpm-alpine
          container_name: wordpress
          restart: unless-stopped
          env_file: .env
          environment:
            - WORDPRESS_DB_HOST=db:3306
            - WORDPRESS_DB_USER=$MYSQL_USER
            - WORDPRESS_DB_PASSWORD=$MYSQL_PASSWORD
            - WORDPRESS_DB_NAME=wordpress
          volumes:
            - wordpress:/var/www/html
          networks:
            - app-network
      
        webserver:
          depends_on:
            - wordpress
          image: nginx:1.15.12-alpine
          container_name: webserver
          restart: unless-stopped
          ports:
            - "80:80"
          volumes:
            - wordpress:/var/www/html
            - ./nginx-conf:/etc/nginx/conf.d
            - certbot-etc:/etc/letsencrypt
          networks:
            - app-network
      
        certbot:
          depends_on:
            - webserver
          image: certbot/certbot
          container_name: certbot
          volumes:
            - certbot-etc:/etc/letsencrypt
            - wordpress:/var/www/html
          command: certonly --webroot --webroot-path=/var/www/html --email sammy@example.com --agree-tos --no-eff-email --staging -d example.com -d www.example.com
      
      volumes:
        certbot-etc:
        wordpress:
        dbdata:
      
      networks:
        app-network:
          driver: bridge  
      

      Save and close the file when you are finished editing.
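
      Before moving on, you can optionally have Compose validate the file and print the fully resolved configuration, including the values it substitutes from your .env file:

      • docker-compose config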

      With your service definitions in place, you are ready to start the containers and test your certificate requests.

      Step 4 — Obtaining SSL Certificates and Credentials

      We can start our containers with the docker-compose up command, which will create and run our containers in the order we have specified. If our domain requests are successful, we will see the correct exit status in our output and the right certificates mounted in the /etc/letsencrypt/live folder on the webserver container.

      Create the containers with docker-compose up and the -d flag, which will run the db, wordpress, and webserver containers in the background:
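
      • docker-compose up -d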

      You will see output confirming that your services have been created:

      Output

      Creating db ... done
      Creating wordpress ... done
      Creating webserver ... done
      Creating certbot ... done

      Using docker-compose ps, check the status of your services:
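
      • docker-compose ps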

      If everything was successful, your db, wordpress, and webserver services will be Up and the certbot container will have exited with a 0 status message:

      Output

        Name                  Command               State          Ports
      -------------------------------------------------------------------------
      certbot     certbot certonly --webroot ...   Exit 0
      db          docker-entrypoint.sh --def ...   Up       3306/tcp, 33060/tcp
      webserver   nginx -g daemon off;             Up       0.0.0.0:80->80/tcp
      wordpress   docker-entrypoint.sh php-fpm     Up       9000/tcp

      If you see anything other than Up in the State column for the db, wordpress, or webserver services, or an exit status other than 0 for the certbot container, be sure to check the service logs with the docker-compose logs command:

      • docker-compose logs service_name

      You can now check that your certificates have been mounted to the webserver container with docker-compose exec:

      • docker-compose exec webserver ls -la /etc/letsencrypt/live

      If your certificate requests were successful, you will see output like this:

      Output

      total 16
      drwx------ 3 root root 4096 May 10 15:45 .
      drwxr-xr-x 9 root root 4096 May 10 15:45 ..
      -rw-r--r-- 1 root root  740 May 10 15:45 README
      drwxr-xr-x 2 root root 4096 May 10 15:45 example.com

      Now that you know your request will be successful, you can edit the certbot service definition to remove the --staging flag.

      Open docker-compose.yml:
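
      • nano docker-compose.yml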

      Find the section of the file with the certbot service definition, and replace the --staging flag in the command option with the --force-renewal flag, which will tell Certbot that you want to request a new certificate with the same domains as an existing certificate. The certbot service definition will now look like this:

      ~/wordpress/docker-compose.yml

      ...
        certbot:
          depends_on:
            - webserver
          image: certbot/certbot
          container_name: certbot
          volumes:
            - certbot-etc:/etc/letsencrypt
            - wordpress:/var/www/html
          command: certonly --webroot --webroot-path=/var/www/html --email sammy@example.com --agree-tos --no-eff-email --force-renewal -d example.com -d www.example.com
      ...
      

      You can now run docker-compose up to recreate the certbot container. We will include the --force-recreate option, which tells Compose to recreate the container with the updated command, along with the --no-deps option to tell Compose that it can skip starting the webserver service, since it is already running:

      • docker-compose up --force-recreate --no-deps certbot

      You will see output indicating that your certificate request was successful:

      Output

      Recreating certbot ... done
      Attaching to certbot
      certbot      | Saving debug log to /var/log/letsencrypt/letsencrypt.log
      certbot      | Plugins selected: Authenticator webroot, Installer None
      certbot      | Renewing an existing certificate
      certbot      | Performing the following challenges:
      certbot      | http-01 challenge for example.com
      certbot      | http-01 challenge for www.example.com
      certbot      | Using the webroot path /var/www/html for all unmatched domains.
      certbot      | Waiting for verification...
      certbot      | Cleaning up challenges
      certbot      | IMPORTANT NOTES:
      certbot      |  - Congratulations! Your certificate and chain have been saved at:
      certbot      |    /etc/letsencrypt/live/example.com/fullchain.pem
      certbot      |    Your key file has been saved at:
      certbot      |    /etc/letsencrypt/live/example.com/privkey.pem
      certbot      |    Your cert will expire on 2019-08-08. To obtain a new or tweaked
      certbot      |    version of this certificate in the future, simply run certbot
      certbot      |    again. To non-interactively renew *all* of your certificates, run
      certbot      |    "certbot renew"
      certbot      |  - Your account credentials have been saved in your Certbot
      certbot      |    configuration directory at /etc/letsencrypt. You should make a
      certbot      |    secure backup of this folder now. This configuration directory will
      certbot      |    also contain certificates and private keys obtained by Certbot so
      certbot      |    making regular backups of this folder is ideal.
      certbot      |  - If you like Certbot, please consider supporting our work by:
      certbot      |
      certbot      |    Donating to ISRG / Let's Encrypt:   https://letsencrypt.org/donate
      certbot      |    Donating to EFF:                    https://eff.org/donate-le
      certbot      |
      certbot exited with code 0

      With your certificates in place, you can move on to modifying your Nginx configuration to include SSL.

      Step 5 — Modifying the Web Server Configuration and Service Definition

      Enabling SSL in our Nginx configuration will involve adding an HTTP redirect to HTTPS, specifying our SSL certificate and key locations, and adding security parameters and headers.

      Since you are going to recreate the webserver service to include these additions, you can stop it now:

      • docker-compose stop webserver

      Before we modify the configuration file itself, let's first get the recommended Nginx security parameters from Certbot using curl:

      • curl -sSLo nginx-conf/options-ssl-nginx.conf https://raw.githubusercontent.com/certbot/certbot/master/certbot-nginx/certbot_nginx/options-ssl-nginx.conf

      This command will save these parameters in a file called options-ssl-nginx.conf, located in the nginx-conf directory.

      Next, remove the Nginx configuration file you created earlier:
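
      • rm nginx-conf/nginx.conf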

      Open another version of the file:

      • nano nginx-conf/nginx.conf

      Add the following code to the file to redirect HTTP to HTTPS and to add SSL credentials, protocols, and security headers. Remember to replace example.com with your own domain:

      ~/wordpress/nginx-conf/nginx.conf

      server {
              listen 80;
              listen [::]:80;
      
              server_name example.com www.example.com;
      
              location ~ /.well-known/acme-challenge {
                      allow all;
                      root /var/www/html;
              }
      
              location / {
                      rewrite ^ https://$host$request_uri? permanent;
              }
      }
      
      server {
              listen 443 ssl http2;
              listen [::]:443 ssl http2;
              server_name example.com www.example.com;
      
              index index.php index.html index.htm;
      
              root /var/www/html;
      
              server_tokens off;
      
              ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
              ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
      
              include /etc/nginx/conf.d/options-ssl-nginx.conf;
      
              add_header X-Frame-Options "SAMEORIGIN" always;
              add_header X-XSS-Protection "1; mode=block" always;
              add_header X-Content-Type-Options "nosniff" always;
              add_header Referrer-Policy "no-referrer-when-downgrade" always;
              add_header Content-Security-Policy "default-src * data: 'unsafe-eval' 'unsafe-inline'" always;
              # add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload" always;
              # enable strict transport security only if you understand the implications
      
              location / {
                      try_files $uri $uri/ /index.php$is_args$args;
              }
      
              location ~ \.php$ {
                      try_files $uri =404;
                      fastcgi_split_path_info ^(.+\.php)(/.+)$;
                      fastcgi_pass wordpress:9000;
                      fastcgi_index index.php;
                      include fastcgi_params;
                      fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                      fastcgi_param PATH_INFO $fastcgi_path_info;
              }
      
              location ~ /\.ht {
                      deny all;
              }
      
              location = /favicon.ico { 
                      log_not_found off; access_log off; 
              }
              location = /robots.txt { 
                      log_not_found off; access_log off; allow all; 
              }
              location ~* \.(css|gif|ico|jpeg|jpg|js|png)$ {
                      expires max;
                      log_not_found off;
              }
      }
      

      The HTTP server block specifies the webroot for Certbot renewal requests to the .well-known/acme-challenge directory. It also includes a rewrite directive that permanently redirects HTTP requests to their HTTPS equivalents.

      The HTTPS server block enables ssl and http2. To read more about how HTTP/2 iterates on HTTP protocols and the benefits it can have for website performance, please see the introduction to How To Set Up Nginx with HTTP/2 Support on Ubuntu 18.04.

      This block also includes our SSL certificate and key locations, along with the recommended Certbot security parameters that we saved to nginx-conf/options-ssl-nginx.conf.

      Additionally, we've included some security headers that will enable us to get A ratings on things like the SSL Labs and Security Headers server test sites. These headers include X-Frame-Options, X-Content-Type-Options, Referrer-Policy, Content-Security-Policy, and X-XSS-Protection. The HTTP Strict Transport Security (HSTS) header is commented out; enable this only if you understand the implications and have assessed its "preload" functionality.

      Our root and index directives are also located in this block, as are the rest of the WordPress-specific location blocks discussed in Step 1.

      Once you have finished editing, save and close the file.

      Before recreating the webserver service, you will need to add a 443 port mapping to your webserver service definition.

      Open your docker-compose.yml file:
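
      • nano docker-compose.yml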

      In the webserver service definition, add the following port mapping:

      ~/wordpress/docker-compose.yml

      ...
        webserver:
          depends_on:
            - wordpress
          image: nginx:1.15.12-alpine
          container_name: webserver
          restart: unless-stopped
          ports:
            - "80:80"
            - "443:443"
          volumes:
            - wordpress:/var/www/html
            - ./nginx-conf:/etc/nginx/conf.d
            - certbot-etc:/etc/letsencrypt
          networks:
            - app-network
      

      The docker-compose.yml file will look like this when finished:

      ~/wordpress/docker-compose.yml

      version: '3'
      
      services:
        db:
          image: mysql:8.0
          container_name: db
          restart: unless-stopped
          env_file: .env
          environment:
            - MYSQL_DATABASE=wordpress
          volumes: 
            - dbdata:/var/lib/mysql
          command: '--default-authentication-plugin=mysql_native_password'
          networks:
            - app-network
      
        wordpress:
          depends_on: 
            - db
          image: wordpress:5.1.1-fpm-alpine
          container_name: wordpress
          restart: unless-stopped
          env_file: .env
          environment:
            - WORDPRESS_DB_HOST=db:3306
            - WORDPRESS_DB_USER=$MYSQL_USER
            - WORDPRESS_DB_PASSWORD=$MYSQL_PASSWORD
            - WORDPRESS_DB_NAME=wordpress
          volumes:
            - wordpress:/var/www/html
          networks:
            - app-network
      
        webserver:
          depends_on:
            - wordpress
          image: nginx:1.15.12-alpine
          container_name: webserver
          restart: unless-stopped
          ports:
            - "80:80"
            - "443:443"
          volumes:
            - wordpress:/var/www/html
            - ./nginx-conf:/etc/nginx/conf.d
            - certbot-etc:/etc/letsencrypt
          networks:
            - app-network
      
        certbot:
          depends_on:
            - webserver
          image: certbot/certbot
          container_name: certbot
          volumes:
            - certbot-etc:/etc/letsencrypt
            - wordpress:/var/www/html
          command: certonly --webroot --webroot-path=/var/www/html --email sammy@example.com --agree-tos --no-eff-email --force-renewal -d example.com -d www.example.com
      
      volumes:
        certbot-etc:
        wordpress:
        dbdata:
      
      networks:
        app-network:
          driver: bridge  
      

      Save and close the file when you are finished editing.

      Recreate the webserver service:

      • docker-compose up -d --force-recreate --no-deps webserver

      Check your services with docker-compose ps:
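
      • docker-compose ps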

      You should see output indicating that your db, wordpress, and webserver services are running:

      Output

        Name                  Command               State                     Ports
      ----------------------------------------------------------------------------------------------
      certbot     certbot certonly --webroot ...   Exit 0
      db          docker-entrypoint.sh --def ...   Up       3306/tcp, 33060/tcp
      webserver   nginx -g daemon off;             Up       0.0.0.0:443->443/tcp, 0.0.0.0:80->80/tcp
      wordpress   docker-entrypoint.sh php-fpm     Up       9000/tcp
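
      You can also confirm that HTTP requests are now redirected to HTTPS with curl. Remember to replace example.com with your own domain; you should see a 301 Moved Permanently response with a Location header pointing at the https:// version of the URL:

      • curl -I http://example.com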

      With your containers running, you can now complete your WordPress installation through the web interface.

      Step 6 — Completing the Installation Through the Web Interface

      With our containers running, we can finish the installation through the WordPress web interface.

      In your web browser, navigate to your server's domain. Remember to substitute example.com here with your own domain name:

      https://example.com
      

      Select the language you would like to use:

      WordPress Language Selector

      After clicking Continue, you will land on the main setup page, where you will need to pick a name for your site and a username. It's a good idea to choose a memorable username here (rather than "admin") and a strong password. You can use the password that WordPress generates automatically or create your own.

      Finally, you will need to enter your email address and decide whether or not you want to discourage search engines from indexing your site:

      WordPress Main Setup Page

      Clicking on Install WordPress at the bottom of the page will take you to a login prompt:

      WordPress Login Screen

      Once logged in, you will have access to the WordPress administration dashboard:

      WordPress Main Admin Dashboard

      With your WordPress installation complete, you can now take steps to ensure that your SSL certificates will renew automatically.

      Step 7 — Renewing Certificates

      Let's Encrypt certificates are valid for 90 days, so you will want to set up an automated renewal process to ensure that they do not lapse. One way to do this is to create a job with the cron scheduling utility. In this case, we will create a cron job to periodically run a script that will renew our certificates and reload our Nginx configuration.

      First, open a script called ssl_renew.sh:
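
      • nano ssl_renew.sh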

      Add the following code to the script to renew your certificates and reload your web server configuration. Remember to replace the example username here with your own non-root username:

      ~/wordpress/ssl_renew.sh

      #!/bin/bash
      
      COMPOSE="/usr/local/bin/docker-compose --no-ansi"
      
      cd /home/sammy/wordpress/
      $COMPOSE run certbot renew --dry-run && $COMPOSE kill -s SIGHUP webserver
      

      This script first assigns the docker-compose binary to a variable called COMPOSE, and specifies the --no-ansi option, which will run docker-compose commands without ANSI control characters. It then changes to the ~/wordpress project directory and runs the following docker-compose commands:

      • docker-compose run: This will start a certbot container and override the command provided in our certbot service definition. Instead of using the certonly subcommand, we're using the renew subcommand here, which will renew certificates that are close to expiring. We've included the --dry-run option here to test our script.
      • docker-compose kill: This will send a SIGHUP signal to the webserver container to reload the Nginx configuration. For more information on using this process to reload your Nginx configuration, please see this Docker blog post on deploying the official Nginx image with Docker.

      Close the file when you are finished editing. Make it executable:
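
      • chmod +x ssl_renew.sh

      If you like, you can run the script once by hand to confirm that the dry run completes successfully before handing it off to cron:

      • ./ssl_renew.sh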

      Next, open your root crontab file to run the renewal script at a specified interval:
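
      • sudo crontab -e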

      If this is your first time editing this file, you will be asked to choose an editor:

      Output

      no crontab for root - using an empty one

      Select an editor.  To change later, run 'select-editor'.
        1. /bin/nano        <---- easiest
        2. /usr/bin/vim.basic
        3. /usr/bin/vim.tiny
        4. /bin/ed

      Choose 1-4 [1]:
      ...

      At the bottom of the file, add the following line:

      crontab

      ...
      */5 * * * * /home/sammy/wordpress/ssl_renew.sh >> /var/log/cron.log 2>&1
      

      This will set the job interval to every five minutes, so you can test whether or not your renewal request has worked as intended. We have also created a log file, cron.log, to record relevant output from the job.

      After five minutes, check cron.log to see whether or not the renewal request has succeeded:

      • tail -f /var/log/cron.log

      You should see output confirming a successful renewal:

      Output

      - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
      ** DRY RUN: simulating 'certbot renew' close to cert expiry
      **          (The test certificates below have not been saved.)

      Congratulations, all renewals succeeded. The following certs have been renewed:
        /etc/letsencrypt/live/example.com/fullchain.pem (success)
      ** DRY RUN: simulating 'certbot renew' close to cert expiry
      **          (The test certificates above have not been saved.)
      - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

      You can now modify the crontab file to set a daily interval. To run the script every day at noon, for example, you would modify the last line of the file to look like this:

      crontab

      ...
      0 12 * * * /home/sammy/wordpress/ssl_renew.sh >> /var/log/cron.log 2>&1
      

      You will also want to remove the --dry-run option from your ssl_renew.sh script:

      ~/wordpress/ssl_renew.sh

      #!/bin/bash
      
      COMPOSE="/usr/local/bin/docker-compose --no-ansi"
      
      cd /home/sammy/wordpress/
      $COMPOSE run certbot renew && $COMPOSE kill -s SIGHUP webserver
      

      Your cron job will ensure that your Let's Encrypt certificates don't lapse by renewing them when they are eligible. You can also set up log rotation with the Logrotate utility to rotate and compress your log files.

      Conclusion

      In this tutorial, you used Docker Compose to create a WordPress installation with an Nginx web server. As part of this workflow, you obtained TLS/SSL certificates for the domain you want associated with your WordPress site. Additionally, you created a cron job to renew these certificates when necessary.

      As additional steps to improve site performance and redundancy, you can explore delivering and backing up your WordPress assets.

      If you are interested in a containerized workflow at larger scale, you can also explore running WordPress with Kubernetes.


