
      How To Audit a PostgreSQL Database with InSpec on Ubuntu 18.04


      The author selected the Free and Open Source Fund to receive a donation as part of the Write for DOnations program.

      Introduction

      InSpec is an open-source, automated testing framework for testing and auditing your system to ensure compliance with integration, security, and other policy requirements. Developers can test the actual state of their infrastructure and applications against a target state using InSpec code.

      To specify the policy requirements you’re testing for, InSpec includes audit controls. Traditionally, developers enforce policy requirements manually, often right before deploying changes to production. With InSpec, however, developers can continuously evaluate compliance at every stage of product development, which helps solve issues earlier in the development process. The InSpec DSL (Domain Specific Language), built on RSpec, a testing DSL written in Ruby, specifies the syntax used to write the audit controls.

      InSpec also includes a collection of resources to assist in configuring specific parts of your system and to simplify writing audit controls. You can also write your own custom resources when you need to define a specific solution that isn’t otherwise available. Universal matchers allow you to compare resource values to expectations across all InSpec tests.

      In this tutorial, you’ll install InSpec on a server running Ubuntu 18.04. You will start by writing a test that verifies the operating system family of the server, then you’ll create a PostgreSQL audit profile from the ground up. This audit profile starts by checking that you have PostgreSQL installed on the server and that its services are running. Then you’ll add tests to check that the PostgreSQL service is running with the correct port, address, protocol, and user. Next you’ll test specific PostgreSQL configuration parameters, and finally, you’ll audit client authentication configuration.

      Prerequisites

      Before following this tutorial, you will need the following:

      • One Ubuntu 18.04 server, set up by following the Initial Server Setup with Ubuntu 18.04 tutorial, including a non-root user with sudo privileges.
      • PostgreSQL 10 installed and running on the server.

      Step 1 — Preparing the Environment

      In this step, you’ll download and unpack the latest stable version of InSpec into your home directory. InSpec provides installable binaries on their downloads page.

      Navigate to your home directory:

      • cd ~

      Now download the binary with curl:

      • curl -LO https://packages.chef.io/files/stable/inspec/3.7.11/ubuntu/18.04/inspec_3.7.11-1_amd64.deb

      Next, use the sha256sum command to generate a checksum of the downloaded file. This is to verify the integrity and authenticity of the downloaded file.

      • sha256sum inspec_3.7.11-1_amd64.deb

      Checksums for each binary are listed on the InSpec downloads page, so visit the downloads page to compare with your output from this command.

      Output

      e665948f9c0441e8648b08f8d3c8d34a86f9e994609877a7e4853c012dbc7523 inspec_3.7.11-1_amd64.deb

      If the checksums are different, delete the downloaded file and repeat the download process.

      Next, you'll install the downloaded binary using dpkg, the package manager included by default on all Debian-based systems such as Ubuntu. The -i flag prompts the dpkg command to install the package file.

      • sudo dpkg -i inspec_3.7.11-1_amd64.deb

      If there are no errors, you've installed InSpec successfully. To verify the installation, enter the following command:

      • inspec version

      You'll receive output showing the version of InSpec you just installed:

      Output

      3.7.11

      If you don't see a version number displayed, repeat Step 1.

      After this, you can delete inspec_3.7.11-1_amd64.deb, since you no longer need it now that you've installed the package:

      • rm inspec_3.7.11-1_amd64.deb

      You've successfully installed InSpec on your server. In the next step, you will write a test to verify the operating system family of your server.

      Step 2 — Completing Your First InSpec Test

      In this step, you'll complete your first InSpec test, which will be testing that your operating system family is debian.

      You will use the os resource, which is a built-in InSpec audit resource to test the platform on which the system is running. You'll also use the eq matcher. The eq matcher is a universal matcher that tests for the exact equality of two values.

      An InSpec test consists of a describe block, which contains one or more it and its statements, each of which validates one of the resource's features. Each statement describes an expectation of a specific condition of the system as an assertion. Two keywords that you can include to make an assertion are should and should_not, which assert that the condition should be true and false respectively.
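
      For instance, here is a generic sketch of that shape, using the os resource purely for illustration (the values are arbitrary):

      describe os.family do
        it {should eq 'debian'}         # asserts the condition holds
        it {should_not eq 'windows'}    # asserts the condition does not hold
      end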

      Create a file called os_family.rb to hold your test and open it with your text editor:

      • nano os_family.rb

      Add the following to your file:

      os_family.rb

      describe os.family do
        it {should eq 'debian'}
      end
      

      This test ensures that the operating system family of the target system is debian. Other possible values are windows, unix, bsd, and so on. You can find a complete list in the os resource documentation. Save and exit the file.

      Next, run your test with the following command:

      • inspec exec os_family.rb

      The test will pass, and you'll receive output resembling the following:

      Output

      Profile: tests from os_family.rb (tests from os_family.rb)
      Version: (not specified)
      Target:  local://

        debian
           ✔  should eq "debian"

      Test Summary: 1 successful, 0 failures, 0 skipped

      In your output, the Profile contains the name of the profile that just executed. Since this test is not included in a profile, InSpec generates a default profile name from the test's file name: tests from os_family.rb. (You'll work with InSpec profiles in the next section, where you will start building your PostgreSQL InSpec profile.) Here InSpec presents the Version as not specified, because you can only specify versions in profiles.

      The Target field specifies the target system that the test is executed on, which can be local or a remote system via ssh. In this case, you've executed your test on the local system so the target shows local://.

      Usefully, the output also displays the executed test with a checkmark symbol (✔) to the left indicating a successful test. The output will show a cross symbol (✘) if the test fails.

      Finally, the test summary gives overall details about how many tests were successful, failed, and skipped. In this instance, you had a single successful test.

      Now you'll see what the output looks like for a failed test. Open os_family.rb:

      • nano os_family.rb

      In the test you created earlier in this step, you'll now change the expected value of the operating system family from debian to windows. Your file contents after this will be the following:

      os_family.rb

      describe os.family do
        it {should eq 'windows'}
      end
      

      Save and exit the file.

      Next, run the updated test with the following command:

      • inspec exec os_family.rb

      You will get output similar to the following:

      Output

      Profile: tests from os_family.rb (tests from os_family.rb)
      Version: (not specified)
      Target:  local://

        debian
           ✘  should eq "windows"
              expected: "windows"
                   got: "debian"
              (compared using ==)

      Test Summary: 0 successful, 1 failure, 0 skipped

      As expected, the test failed. The output indicates that your expected (windows) and actual (debian) values do not match for the os.family property. The (compared using ==) output indicates that the eq matcher performed a string comparison between the two values to come up with this result.

      In this step, you've written a successful test that verifies the operating system family of the server. You've also created a failed test in order to see what the InSpec output for a failed test looks like. In the next step, you will start building the audit profile to test your PostgreSQL installation.

      Step 3 — Auditing Your PostgreSQL Installation

      Now, you will audit your PostgreSQL installation. You'll start by checking that you have PostgreSQL installed and its service is running correctly. Then you'll audit the PostgreSQL system port and process. For your PostgreSQL audit, you will create various InSpec controls, all within an InSpec profile named PostgreSQL.

      An InSpec control is a high-level grouping of related tests. Within a control, you can have multiple describe blocks, as well as metadata to describe your tests such as impact level, title, description, and tags. InSpec profiles organize controls to support dependency management and code reuse, which both help manage test complexity. They are also useful for packaging and sharing tests with the public via the Chef Supermarket. You can use profiles to define custom resources that you would implement as regular Ruby classes.

      To create an InSpec profile, you will use the init command. Enter this command to create the PostgreSQL profile:

      • inspec init profile PostgreSQL

      This creates the profile in a new directory with the same name as your profile, in this case PostgreSQL. Now, move into the new directory:

      • cd PostgreSQL

      The directory structure will look like this:

      PostgreSQL/
      ├── controls
      │   └── example.rb
      ├── inspec.yml
      ├── libraries
      └── README.md
      

      The controls/example.rb file contains a sample control that tests to see if the /tmp folder exists on the target system. This is present only as a sample and you will replace it with your own test.

      Your first test will be to ensure that you have the package postgresql-10 installed on your system and that you have the postgresql service installed, enabled, and running.

      Rename the controls/example.rb file to controls/postgresql.rb:

      • mv controls/example.rb controls/postgresql.rb

      Next, open the file with your text editor:

      • nano controls/postgresql.rb

      Replace the content of the file with the following:

      controls/postgresql.rb

      control '1-audit_installation' do
        impact 1.0
        title 'Audit PostgreSQL Installation'
        desc 'Postgres should be installed and running'
      
        describe package('postgresql-10') do
          it {should be_installed}
          its('version') {should cmp >= '10'}
        end
      
        describe service('postgresql@10-main') do
          it {should be_enabled}
          it {should be_installed}
          it {should be_running}
        end
      end
      

      In the preceding code block, you begin by defining the control with its name and metadata.

      In the first describe block, you use the package resource and pass in the PostgreSQL package name postgresql-10 as a resource argument. The package resource provides the matcher be_installed to test that the named package is installed on the system. It returns true if you have the package installed, and false otherwise. Next, you use the its statement to validate that the version of the installed PostgreSQL package is at least 10. You use cmp instead of eq because package version strings usually contain other attributes apart from the numerical version. eq returns true only if there is an exact match, while cmp is less restrictive.
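
      As a quick sketch of the difference, suppose the installed version string were the hypothetical 10.8-0ubuntu0.18.04.1:

      its('version') {should eq '10'}      # fails: '10.8-0ubuntu0.18.04.1' != '10'
      its('version') {should cmp >= '10'}  # passes: cmp understands version strings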

      In the second describe block, you use the service resource and pass in the PostgreSQL 10 service name postgresql@10-main as a resource argument. The service resource provides the matchers be_enabled, be_installed, and be_running, which return true if the named service is enabled, installed, and running on the target system, respectively.

      Save and exit your file.

      Next, you will run your profile. Make sure you're in the ~/PostgreSQL directory before running the following command:

      • inspec exec .

      Since you completed the PostgreSQL prerequisite tutorial, your test will pass. Your output will look similar to the following:

      Output

      Profile: InSpec Profile (PostgreSQL)
      Version: 0.1.0
      Target:  local://

        ✔  1-audit_installation: Audit PostgreSQL Installation
           ✔  System Package postgresql-10 should be installed
           ✔  System Package postgresql-10 version should cmp >= "10"
           ✔  Service postgresql@10-main should be enabled
           ✔  Service postgresql@10-main should be installed
           ✔  Service postgresql@10-main should be running

      Profile Summary: 1 successful control, 0 control failures, 0 controls skipped
      Test Summary: 5 successful, 0 failures, 0 skipped

      The output indicates that your control was successful. A control is successful if, and only if, all the tests in it are successful. The output also confirms that all your tests were successful.

      Now that you've verified that the correct version of PostgreSQL is installed and the service is fine, you will create a new control that ensures that PostgreSQL is listening on the correct port, address, and protocol.

      For this test, you will also use attributes. An InSpec attribute parameterizes a profile, enabling easy reuse in different environments or target systems. You'll define a port attribute.

      Open the inspec.yml file in your text editor:

      • nano inspec.yml

      You'll append the port attribute to the end of the file. Add the following at the end of your file:

      inspec.yml

      ...
      attributes:
        - name: port
          type: string
          default: '5432'
      

      In the preceding code block, you added the port attribute and set it to a default value of 5432 because that is the port PostgreSQL listens on by default.

      Save and exit the file. Then run inspec check to verify the profile is still valid, since you just edited inspec.yml:

      • inspec check .

      If there are no errors, you can proceed. Otherwise, open the inspec.yml file and ensure that the attribute is present at the end of the file.
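
      Note that you don't have to edit the profile to audit an instance listening on a non-default port: InSpec can read attribute values from a YAML file at run time. A minimal sketch, assuming a hypothetical file named custom_attrs.yml:

      custom_attrs.yml

      port: '5433'
      

      You would then run the profile with inspec exec . --attrs custom_attrs.yml.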

      Now you'll create the control that checks that the PostgreSQL process is running and configured with the correct user. Open controls/postgresql.rb in your text editor:

      • nano controls/postgresql.rb

      Append the following control to the end of your current tests file controls/postgresql.rb:

      controls/postgresql.rb

      ...
      PORT = attribute('port')
      
      control '2-audit_address_port' do
        impact 1.0
        title 'Audit Process and Port'
        desc 'Postgres port should be listening and the process should be running'
      
        describe port(PORT) do
          it {should be_listening}
          its('addresses') {should include '127.0.0.1'}
          its('protocols') {should cmp 'tcp'}
        end
      
        describe processes('postgres') do
          it {should exist}
          its('users') {should include 'postgres'}
        end
      
        describe user('postgres') do
          it {should exist}
        end
      end
      

      Here you begin by declaring a PORT variable to hold the value of the port profile attribute. Then you declare the control and its metadata.

      In the first describe block, you include the port resource to test basic port properties. The port resource provides the matchers be_listening, addresses, and protocols. You use the be_listening matcher to test that the named port is listening on the target system. It returns true if the port 5432 is listening and returns false otherwise. The addresses matcher tests if the specified address is associated with the port. In this case, PostgreSQL will be listening on the local address, 127.0.0.1.
      The protocols matcher tests the Internet protocol the port is listening for, which can be icmp, tcp/tcp6, or udp/udp6. PostgreSQL will be listening for tcp connections.

      In the second describe block, you include the processes resource. You use the processes resource to test properties for programs that are running on the system. First, you verify that the postgres process exists on the system, then you use the users matcher to test that the postgres user owns the postgres process.

      In the third describe block, you have the user resource. You include the user resource to test user properties for a user such as whether the user exists or not, the group the user belongs to, and so on. Using this resource, you test that the postgres user exists on the system. Save and exit controls/postgresql.rb.

      Next, run your profile with the following command:

      • inspec exec .

      The tests will pass, and your output will resemble the following:

      Output

      Profile: InSpec Profile (PostgreSQL)
      Version: 0.1.0
      Target:  local://

        ✔  1-audit_installation: Audit PostgreSQL Installation
           ✔  System Package postgresql-10 should be installed
           ✔  System Package postgresql-10 version should cmp >= "10"
           ✔  Service postgresql@10-main should be enabled
           ✔  Service postgresql@10-main should be installed
           ✔  Service postgresql@10-main should be running
        ✔  2-audit_address_port: Audit Process and Port
           ✔  Port 5432 should be listening
           ✔  Port 5432 addresses should include "127.0.0.1"
           ✔  Port 5432 protocols should cmp == "tcp"
           ✔  Processes postgres should exist
           ✔  Processes postgres users should include "postgres"
           ✔  User postgres should exist

      Profile Summary: 2 successful controls, 0 control failures, 0 controls skipped
      Test Summary: 11 successful, 0 failures, 0 skipped

      The output indicates that both of your controls and all of your tests were successful.

      In this section, you have created your first InSpec profile and control and used them to organize your tests. You've used several InSpec resources to ensure that you have the correct version of PostgreSQL installed, the PostgreSQL service enabled and running correctly, and that the PostgreSQL user exists on the system. With this setup, you're ready to audit your configuration.

      Step 4 — Auditing Your PostgreSQL Configuration

      In this step, you'll audit some PostgreSQL configuration values, which will give you a foundation for working with these configuration files, allowing you to audit any PostgreSQL configuration parameters as desired.

      Now that you have tests auditing the PostgreSQL installation, you'll audit your PostgreSQL configuration itself. PostgreSQL has several configuration parameters that you can use to tune it as desired, and these are stored in the configuration file located by default at /etc/postgresql/10/main/postgresql.conf. You could have different requirements regarding PostgreSQL configuration for your various deployments, such as logging, password encryption, SSL, and replication strategies, all of which you specify in the configuration file.

      You will be using the postgres_conf resource that tests for specific, named configuration options against expected values in the contents of the PostgreSQL configuration file.

      This test will assume some non-default PostgreSQL configuration values that you'll set manually.

      Open the PostgreSQL configuration file in your favorite text editor:

      • sudo nano /etc/postgresql/10/main/postgresql.conf

      Set the following configuration values. If the option already exists in the file but is commented out, uncomment it by removing the #, and set the value as provided:

      /etc/postgresql/10/main/postgresql.conf

      password_encryption = scram-sha-256
      logging_collector = on
      log_connections = on
      log_disconnections = on
      log_duration = on
      

      The configuration values you have set:

      • Ensure that saved passwords are always encrypted with the scram-sha-256 algorithm.
      • Enable the logging collector, which is a background process that captures log messages from the standard error (stderr) and redirects them to a log file.
      • Enable logging of connection attempts to the PostgreSQL server as well as successful connections.
      • Enable logging of session terminations.
      • Enable logging of the duration of every completed statement.

      Save and exit the configuration file. Then restart the PostgreSQL service:

      • sudo service postgresql@10-main restart

      You'll test for only a few configuration options, but you can test any PostgreSQL configuration option with the postgres_conf resource.

      You will pass in your PostgreSQL configuration directory, which is at /etc/postgresql/10/main, using a new profile attribute, postgres_conf_dir. This configuration directory is not the same across all operating systems and platforms, so by passing it in as a profile attribute, you'll be making this profile easier to reuse in different environments.

      Open your inspec.yml file:

      • nano inspec.yml

      Add this new attribute to the attributes section of inspec.yml:

      inspec.yml

      ...
        - name: postgres_conf_dir
          type: string
          default: '/etc/postgresql/10/main'
      

      Save and exit your file. Then run the following command to verify the InSpec profile is still valid, because you just edited inspec.yml:

      • inspec check .

      If there are no errors, you can proceed. Otherwise, open the inspec.yml file and ensure that the above lines are present at the end of the file.

      Now you will create the control that audits the configuration values you are enforcing. Append the following control to the end of the tests file controls/postgresql.rb:

      controls/postgresql.rb

      ...
      POSTGRES_CONF_DIR = attribute('postgres_conf_dir')
      POSTGRES_CONF_PATH = File.join(POSTGRES_CONF_DIR, 'postgresql.conf')
      
      control '3-postgresql' do
        impact 1.0
        title 'Audit PostgreSQL Configuration'
        desc 'Audits specific configuration options'
      
        describe postgres_conf(POSTGRES_CONF_PATH) do
          its('port') {should eq PORT}
          its('password_encryption') {should eq 'scram-sha-256'}
          its('ssl') {should eq 'on'}
          its('logging_collector') {should eq 'on'}
          its('log_connections') {should eq 'on'}
          its('log_disconnections') {should eq 'on'}
          its('log_duration') {should eq 'on'}
        end
      end
      

      Here you define two variables:

      • POSTGRES_CONF_DIR holds the postgres_conf_dir attribute as defined in the profile configuration.
      • POSTGRES_CONF_PATH holds the absolute path of the configuration file by concatenating the configuration file name with the configuration directory using File.join.

      Next, you define the control with its name and metadata. Then you use the postgres_conf resource together with the eq matcher to ensure your required values for the configuration options are correct. Save and exit controls/postgresql.rb.

      Next, you will run the test with the following command:

      • inspec exec .

      The tests will pass, and your output will resemble the following:

      Output

      Profile: InSpec Profile (PostgreSQL)
      Version: 0.1.0
      Target:  local://

        ✔  1-audit_installation: Audit PostgreSQL Installation
           ✔  System Package postgresql-10 should be installed
           ✔  System Package postgresql-10 version should cmp >= "10"
           ✔  Service postgresql@10-main should be enabled
           ✔  Service postgresql@10-main should be installed
           ✔  Service postgresql@10-main should be running
        ✔  2-audit_address_port: Audit Process and Port
           ✔  Port 5432 should be listening
           ✔  Port 5432 addresses should include "127.0.0.1"
           ✔  Port 5432 protocols should cmp == "tcp"
           ✔  Processes postgres should exist
           ✔  Processes postgres users should include "postgres"
           ✔  User postgres should exist
        ✔  3-postgresql: Audit PostgreSQL Configuration
           ✔  PostgreSQL Configuration port should eq "5432"
           ✔  PostgreSQL Configuration password_encryption should eq "scram-sha-256"
           ✔  PostgreSQL Configuration ssl should eq "on"
           ✔  PostgreSQL Configuration logging_collector should eq "on"
           ✔  PostgreSQL Configuration log_connections should eq "on"
           ✔  PostgreSQL Configuration log_disconnections should eq "on"
           ✔  PostgreSQL Configuration log_duration should eq "on"

      Profile Summary: 3 successful controls, 0 control failures, 0 controls skipped
      Test Summary: 18 successful, 0 failures, 0 skipped

      The output indicates that your three controls and all your tests were successful without any skipped tests or controls.

      In this step, you've added a new InSpec control that tests specific PostgreSQL configuration values from the configuration file using the postgres_conf resource. You audited a few values in this section, but you can use it to test any configuration option from the configuration file.

      Step 5 — Auditing PostgreSQL Client Authentication

      Now that you've written some tests for your PostgreSQL configuration, you'll write some tests for client authentication. This is important for installations that need to ensure specific authentication methods for different kinds of users; for example, to ensure clients connecting to PostgreSQL locally always need to authenticate with a password, or to reject connections from a specific IP address or IP address range, and so on.

      An important configuration for PostgreSQL installations where security is a concern is to only allow encrypted password authentications. PostgreSQL 10 supports two password encryption methods for client authentication: md5 and scram-sha-256. This test will require password encryption for all clients so this means that the METHOD field for all clients in the client configuration file must be set to either md5 or scram-sha-256. For these tests, you will use scram-sha-256 since it is more secure than md5.

      By default, local clients use the peer authentication method, as set in the pg_hba.conf file. For the test, you need to change these entries to scram-sha-256. Open the /etc/postgresql/10/main/pg_hba.conf file:

      • sudo nano /etc/postgresql/10/main/pg_hba.conf

      The top of the file contains comments. Scroll down and look for uncommented lines where the authentication type is local, and change the authentication method from peer to scram-sha-256. For example, change:

      /etc/postgresql/10/main/pg_hba.conf

      ...
      local   all             postgres                                peer
      ...
      

      to:

      /etc/postgresql/10/main/pg_hba.conf

      ...
      local   all             postgres                                scram-sha-256
      ...
      

      At the end, your pg_hba.conf configuration will resemble the following:

      /etc/postgresql/10/main/pg_hba.conf

      ...
      local   all             postgres                                scram-sha-256
      
      # TYPE  DATABASE        USER            ADDRESS                 METHOD
      
      # "local" is for Unix domain socket connections only
      local   all             all                                     scram-sha-256
      # IPv4 local connections:
      host    all             all             127.0.0.1/32            scram-sha-256
      # IPv6 local connections:
      host    all             all             ::1/128                 scram-sha-256
      # Allow replication connections from localhost, by a user with the
      # replication privilege.
      local   replication     all                                     scram-sha-256
      host    replication     all             127.0.0.1/32            scram-sha-256
      host    replication     all             ::1/128                 scram-sha-256
      ...
      

      Save and exit the configuration file. Then restart the PostgreSQL service:

      • sudo service postgresql@10-main restart

      For this test, you'll use the postgres_hba_conf resource. This resource is used to test the client authentication data defined in the pg_hba.conf file. You'll pass in the path of your pg_hba.conf file as a parameter to this resource.

      Your control will consist of two describe blocks that check the auth_method fields for both local and host clients respectively to ensure that they are both equal to scram-sha-256. Open controls/postgresql.rb in your text editor:

      • nano controls/postgresql.rb

      Append the following control to the end of the test file controls/postgresql.rb:

      controls/postgresql.rb

      POSTGRES_HBA_CONF_FILE = File.join(POSTGRES_CONF_DIR, 'pg_hba.conf')
      
      control '4-postgres_hba' do
        impact 1.0
        title 'Require SCRAM-SHA-256 for ALL users, peers in pg_hba.conf'
        desc 'Require SCRAM-SHA-256 for ALL users, peers in pg_hba.conf. Do not allow untrusted authentication methods.'
      
        describe postgres_hba_conf(POSTGRES_HBA_CONF_FILE).where { type == 'local' } do
          its('auth_method') { should all eq 'scram-sha-256' }
        end
      
        describe postgres_hba_conf(POSTGRES_HBA_CONF_FILE).where { type == 'host' } do
          its('auth_method') { should all eq 'scram-sha-256' }
        end
      end
      

      In this code block, you define a new variable POSTGRES_HBA_CONF_FILE to store the absolute location of your pg_hba.conf file. File.join is a Ruby method to concatenate two file path segments with /. You use it here to join the POSTGRES_CONF_DIR variable, declared in the previous section, with the PostgreSQL configuration file pg_hba.conf. This will produce an absolute file path of the pg_hba.conf file and store it in the POSTGRES_HBA_CONF_FILE variable.
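
      As a brief illustration of what File.join produces here:

      File.join('/etc/postgresql/10/main', 'pg_hba.conf')
      # => "/etc/postgresql/10/main/pg_hba.conf"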

      After that, you declare and configure the control and its metadata. The first describe block checks that all configuration entries where the client type is local also have scram-sha-256 as their authentication methods. The second describe block does the same for cases where the client type is host. Save and exit controls/postgresql.rb.

      You'll execute this control as the postgres user, because read access to the PostgreSQL HBA configuration is granted only to its owner and group, which is the postgres user. Execute the profile by running:

      • sudo -u postgres inspec exec .

      Your output will resemble the following:

      Output

      Profile: InSpec Profile (PostgreSQL)
      Version: 0.1.0
      Target:  local://

        ✔  1-audit_installation: Audit PostgreSQL Installation
           ✔  System Package postgresql-10 should be installed
           ✔  System Package postgresql-10 version should cmp >= "10"
           ✔  Service postgresql@10-main should be enabled
           ✔  Service postgresql@10-main should be installed
           ✔  Service postgresql@10-main should be running
        ✔  2-audit_address_port: Audit Process and Port
           ✔  Port 5432 should be listening
           ✔  Port 5432 addresses should include "127.0.0.1"
           ✔  Port 5432 protocols should cmp == "tcp"
           ✔  Processes postgres should exist
           ✔  Processes postgres users should include "postgres"
           ✔  User postgres should exist
        ✔  3-postgresql: Audit PostgreSQL Configuration
           ✔  PostgreSQL Configuration port should eq "5432"
           ✔  PostgreSQL Configuration password_encryption should eq "scram-sha-256"
           ✔  PostgreSQL Configuration ssl should eq "on"
           ✔  PostgreSQL Configuration logging_collector should eq "on"
           ✔  PostgreSQL Configuration log_connections should eq "on"
           ✔  PostgreSQL Configuration log_disconnections should eq "on"
           ✔  PostgreSQL Configuration log_duration should eq "on"
        ✔  4-postgres_hba: Require SCRAM-SHA-256 for ALL users, peers in pg_hba.conf
           ✔  Postgres Hba Config /etc/postgresql/10/main/pg_hba.conf with type == "local" auth_method should all eq "scram-sha-256"
           ✔  Postgres Hba Config /etc/postgresql/10/main/pg_hba.conf with type == "host" auth_method should all eq "scram-sha-256"

      Profile Summary: 4 successful controls, 0 control failures, 0 controls skipped
      Test Summary: 20 successful, 0 failures, 0 skipped

      This output indicates that the new control you added, together with all of the previous controls, was successful. It also indicates that all the tests in your profile were successful.

      In this step, you have added a control to your profile that successfully audited your PostgreSQL client authentication configuration to ensure that all clients are authenticated via scram-sha-256 using the postgres_hba_conf resource.

      Conclusion

      You've set up InSpec and successfully audited a PostgreSQL 10 installation. In the process, you've used a selection of InSpec tools such as the InSpec DSL, matchers, resources, profiles, attributes, and the CLI. From here, you can incorporate other resources that InSpec provides in the Resources section of its documentation. InSpec also provides a mechanism for defining custom resources for your specific needs, which you write as regular Ruby classes.

      You can also explore the Compliance Profiles section of the Chef Supermarket, which contains publicly shared InSpec profiles that you can execute directly or extend in your own profiles. You can likewise share your own profiles with the general public via the Chef Supermarket.

      You can go further by exploring other tools in the Chef universe such as Chef and Habitat. InSpec is integrated with Habitat and this provides the ability to ship your compliance controls together with your Habitat-packaged applications and continuously run them. You can explore official and community InSpec tutorials on the tutorials page. For more advanced InSpec references, check the official InSpec documentation.




      Understanding Database Sharding


      Introduction

      Any application or website that sees significant growth will eventually need to scale in order to accommodate increases in traffic. For data-driven applications and websites, it’s critical that scaling is done in a way that ensures the security and integrity of their data. It can be difficult to predict how popular a website or application will become or how long it will maintain that popularity, which is why some organizations choose a database architecture that allows them to scale their databases dynamically.

      In this conceptual article, we will discuss one such database architecture: sharded databases. Sharding has been receiving lots of attention in recent years, but many don’t have a clear understanding of what it is or the scenarios in which it might make sense to shard a database. We will go over what sharding is, some of its main benefits and drawbacks, and also a few common sharding methods.

      What is Sharding?

      Sharding is a database architecture pattern related to horizontal partitioning — the practice of separating one table’s rows into multiple different tables, known as partitions. Each partition has the same schema and columns, but entirely different rows. Likewise, the data held in each partition is unique and independent of the data held in other partitions.

      It can be helpful to think of horizontal partitioning in terms of how it relates to vertical partitioning. In a vertically-partitioned table, entire columns are separated out and put into new, distinct tables. The data held within one vertical partition is independent from the data in all the others, and each holds both distinct rows and columns. The following diagram illustrates how a table could be partitioned both horizontally and vertically:

      Example tables showing horizontal and vertical partitioning

      Sharding involves breaking up one’s data into two or more smaller chunks, called logical shards. The logical shards are then distributed across separate database nodes, referred to as physical shards, which can hold multiple logical shards. Despite this, the data held within all the shards collectively represent an entire logical dataset.

      Database shards exemplify a shared-nothing architecture. This means that the shards are autonomous; they don’t share any of the same data or computing resources. In some cases, though, it may make sense to replicate certain tables into each shard to serve as reference tables. For example, let’s say there’s a database for an application that depends on fixed conversion rates for weight measurements. By replicating a table containing the necessary conversion rate data into each shard, it would help to ensure that all of the data required for queries is held in every shard.

      Oftentimes, sharding is implemented at the application level, meaning that the application includes code that defines which shard to transmit reads and writes to. However, some database management systems have sharding capabilities built in, allowing you to implement sharding directly at the database level.

      Given this general overview of sharding, let’s go over some of the positives and negatives associated with this database architecture.

      Benefits of Sharding

      The main appeal of sharding a database is that it can help to facilitate horizontal scaling, also known as scaling out. Horizontal scaling is the practice of adding more machines to an existing stack in order to spread out the load and allow for more traffic and faster processing. This is often contrasted with vertical scaling, otherwise known as scaling up, which involves upgrading the hardware of an existing server, usually by adding more RAM or CPU.

      It’s relatively simple to have a relational database running on a single machine and scale it up as necessary by upgrading its computing resources. Ultimately, though, any non-distributed database will be limited in terms of storage and compute power, so having the freedom to scale horizontally makes your setup far more flexible.

      Another reason why some might choose a sharded database architecture is to speed up query response times. When you submit a query on a database that hasn’t been sharded, it may have to search every row in the table you’re querying before it can find the result set you’re looking for. For an application with a large, monolithic database, queries can become prohibitively slow. By sharding one table into multiple, though, queries have to go over fewer rows and their result sets are returned much more quickly.

      Sharding can also help to make an application more reliable by mitigating the impact of outages. If your application or website relies on an unsharded database, an outage has the potential to make the entire application unavailable. With a sharded database, though, an outage is likely to affect only a single shard. Even though this might make some parts of the application or website unavailable to some users, the overall impact would still be less than if the entire database crashed.

      Drawbacks of Sharding

      While sharding a database can make scaling easier and improve performance, it can also impose certain limitations. Here, we’ll discuss some of these and why they might be reasons to avoid sharding altogether.

      The first difficulty that people encounter with sharding is the sheer complexity of properly implementing a sharded database architecture. If done incorrectly, there’s a significant risk that the sharding process can lead to lost data or corrupted tables. Even when done correctly, though, sharding is likely to have a major impact on your team’s workflows. Rather than accessing and managing one’s data from a single entry point, users must manage data across multiple shard locations, which could potentially be disruptive to some teams.

      One problem that users sometimes encounter after having sharded a database is that the shards eventually become unbalanced. By way of example, let’s say you have a database with two separate shards, one for customers whose last names begin with letters A through M and another for those whose names begin with the letters N through Z. However, your application serves an inordinate amount of people whose last names start with the letter G. Accordingly, the A-M shard gradually accrues more data than the N-Z one, causing the application to slow down and stall out for a significant portion of your users. The A-M shard has become what is known as a database hotspot. In this case, any benefits of sharding the database are canceled out by the slowdowns and crashes. The database would likely need to be repaired and resharded to allow for a more even data distribution.

      Another major drawback is that once a database has been sharded, it can be very difficult to return it to its unsharded architecture. Any backups of the database made before it was sharded won’t include data written since the partitioning. Consequently, rebuilding the original unsharded architecture would require merging the new partitioned data with the old backups or, alternatively, transforming the partitioned DB back into a single DB, both of which would be costly and time-consuming endeavors.

      A final disadvantage to consider is that sharding isn’t natively supported by every database engine. For instance, PostgreSQL does not include automatic sharding as a feature, although it is possible to manually shard a PostgreSQL database. There are a number of Postgres forks that do include automatic sharding, but these often trail behind the latest PostgreSQL release and lack certain other features. Some specialized database technologies — like MySQL Cluster or certain database-as-a-service products like MongoDB Atlas — do include auto-sharding as a feature, but vanilla versions of these database management systems do not. Because of this, sharding often requires a “roll your own” approach. This means that documentation for sharding or tips for troubleshooting problems are often difficult to find.

      These are, of course, only some general issues to consider before sharding. There may be many more potential drawbacks to sharding a database depending on its use case.

      Now that we’ve covered a few of sharding’s drawbacks and benefits, we will go over a few different architectures for sharded databases.

      Sharding Architectures

      Once you’ve decided to shard your database, the next thing you need to figure out is how you’ll go about doing so. When running queries or distributing incoming data to sharded tables or databases, it’s crucial that it goes to the correct shard. Otherwise, it could result in lost data or painfully slow queries. In this section, we’ll go over a few common sharding architectures, each of which uses a slightly different process to distribute data across shards.

      Key Based Sharding

      Key based sharding, also known as hash based sharding, involves using a value taken from newly written data — such as a customer’s ID number, a client application’s IP address, a ZIP code, etc. — and plugging it into a hash function to determine which shard the data should go to. A hash function is a function that takes as input a piece of data (for example, a customer email) and outputs a discrete value, known as a hash value. In the case of sharding, the hash value is a shard ID used to determine which shard the incoming data will be stored on. Altogether, the process looks like this:

      Key based sharding example diagram

      To ensure that entries are placed in the correct shards and in a consistent manner, the values entered into the hash function should all come from the same column. This column is known as a shard key. In simple terms, shard keys are similar to primary keys in that both are columns which are used to establish a unique identifier for individual rows. Broadly speaking, a shard key should be static, meaning it shouldn’t contain values that might change over time. Otherwise, it would increase the amount of work that goes into update operations, and could slow down performance.
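
      A minimal Ruby sketch of this routing logic, assuming four physical shards and a customer ID as the shard key (the names and shard count here are hypothetical):

      require 'zlib'

      NUM_SHARDS = 4  # assumed number of physical shards

      # Hash the shard key to a stable shard ID in the range 0..3.
      def shard_for(customer_id)
        Zlib.crc32(customer_id.to_s) % NUM_SHARDS
      end

      shard_for(42)  # the same key always maps to the same shard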

      While key based sharding is a fairly common sharding architecture, it can make things tricky when trying to dynamically add or remove additional servers to a database. As you add servers, each one will need a corresponding hash value and many of your existing entries, if not all of them, will need to be remapped to their new, correct hash value and then migrated to the appropriate server. As you begin rebalancing the data, neither the new nor the old hashing functions will be valid. Consequently, your server won’t be able to write any new data during the migration and your application could be subject to downtime.

      The main appeal of this strategy is that it can be used to evenly distribute data so as to prevent hotspots. Also, because it distributes data algorithmically, there’s no need to maintain a map of where all the data is located, as is necessary with other strategies like range or directory based sharding.

      Range Based Sharding

      Range based sharding involves sharding data based on ranges of a given value. To illustrate, let’s say you have a database that stores information about all the products within a retailer’s catalog. You could create a few different shards and divvy up each product’s information based on which price range it falls into, like this:

      Range based sharding example diagram

      The main benefit of range based sharding is that it’s relatively simple to implement. Every shard holds a different set of data, but they all share an identical schema with one another and with the original database. The application code just reads which range the data falls into and writes it to the corresponding shard.
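
      As a sketch, the routing logic can be as simple as matching the price against each range (the ranges below are assumptions echoing the diagram):

      # Route a product row to a shard based on its price (hypothetical ranges).
      def shard_for(price)
        case price
        when 0...10  then :shard_a
        when 10...50 then :shard_b
        else              :shard_c
        end
      end

      shard_for(24.99)  # => :shard_b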

      On the other hand, range based sharding doesn’t protect data from being unevenly distributed, leading to the aforementioned database hotspots. Looking at the example diagram, even if each shard holds an equal amount of data, the odds are that specific products will receive more attention than others. Their respective shards will, in turn, receive a disproportionate number of reads.

      Directory Based Sharding

      To implement directory based sharding, one must create and maintain a lookup table that uses a shard key to keep track of which shard holds which data. In a nutshell, a lookup table is a table that holds a static set of information about where specific data can be found. The following diagram shows a simplistic example of directory based sharding:

      Directory based sharding example diagram

      Here, the Delivery Zone column is defined as a shard key. Data from the shard key is written to the lookup table along with whatever shard each respective row should be written to. This is similar to range based sharding, but instead of determining which range the shard key’s data falls into, each key is tied to its own specific shard. Directory based sharding is a good choice over range based sharding in cases where the shard key has a low cardinality and it doesn’t make sense for a shard to store a range of keys. Note that it’s also distinct from key based sharding in that it doesn’t process the shard key through a hash function; it just checks the key against a lookup table to see where the data needs to be written.
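
      A minimal sketch of that lookup, with delivery zones as the shard key (the zone names and shard assignments are hypothetical):

      # Static lookup table mapping each shard key value to its shard.
      SHARD_LOOKUP = {
        'North' => :shard_a,
        'South' => :shard_b,
        'East'  => :shard_c,
        'West'  => :shard_c   # several keys may share one shard
      }.freeze

      def shard_for(delivery_zone)
        SHARD_LOOKUP.fetch(delivery_zone)  # raises KeyError for an unmapped zone
      end

      shard_for('South')  # => :shard_b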

      The main appeal of directory based sharding is its flexibility. Range based sharding architectures limit you to specifying ranges of values, while key based ones limit you to using a fixed hash function which, as mentioned previously, can be exceedingly difficult to change later on. Directory based sharding, on the other hand, allows you to use whatever system or algorithm you want to assign data entries to shards, and it’s relatively easy to dynamically add shards using this approach.

      While directory based sharding is the most flexible of the sharding methods discussed here, the need to connect to the lookup table before every query or write can have a detrimental impact on an application’s performance. Furthermore, the lookup table can become a single point of failure: if it becomes corrupted or otherwise fails, it can impact one’s ability to write new data or access their existing data.

      Should I Shard?

      Whether or not one should implement a sharded database architecture is almost always a matter of debate. Some see sharding as an inevitable outcome for databases that reach a certain size, while others see it as a headache that should be avoided unless it’s absolutely necessary, due to the operational complexity that sharding adds.

      Because of this added complexity, sharding is usually only performed when dealing with very large amounts of data. Here are some common scenarios where it may be beneficial to shard a database:

      • The amount of application data grows to exceed the storage capacity of a single database node.
      • The volume of writes or reads to the database surpasses what a single node or its read replicas can handle, resulting in slowed response times or timeouts.
      • The network bandwidth required by the application outpaces the bandwidth available to a single database node and any read replicas, resulting in slowed response times or timeouts.

      Before sharding, you should exhaust all other options for optimizing your database. Some optimizations you might want to consider include:

      • Setting up a remote database. If you’re working with a monolithic application in which all of its components reside on the same server, you can improve your database’s performance by moving it over to its own machine. This doesn’t add as much complexity as sharding since the database’s tables remain intact. However, it still allows you to vertically scale your database apart from the rest of your infrastructure.
      • Implementing caching. If your application’s read performance is what’s causing you trouble, caching is one strategy that can help to improve it. Caching involves temporarily storing data that has already been requested in memory, allowing you to access it much more quickly later on.
      • Creating one or more read replicas. Another strategy that can help to improve read performance, this involves copying the data from one database server (the primary server) over to one or more secondary servers. Following this, every new write goes to the primary before being copied over to the secondaries, while reads are made exclusively to the secondary servers. Distributing reads and writes like this keeps any one machine from taking on too much of the load, helping to prevent slowdowns and crashes. Note that creating read replicas involves more computing resources and thus costs more money, which could be a significant constraint for some.
      • Upgrading to a larger server. In most cases, scaling up one’s database server to a machine with more resources requires less effort than sharding. As with creating read replicas, an upgraded server with more resources will likely cost more money. Accordingly, you should only go through with resizing if it truly ends up being your best option.

      Bear in mind that if your application or website grows past a certain point, none of these strategies will be enough to improve performance on their own. In such cases, sharding may indeed be the best option for you.

      Conclusion

      Sharding can be a great solution for those looking to scale their database horizontally. However, it also adds a great deal of complexity and creates more potential failure points for your application. Sharding may be necessary for some, but the time and resources needed to create and maintain a sharded architecture could outweigh the benefits for others.

      By reading this conceptual article, you should have a clearer understanding of the pros and cons of sharding. Moving forward, you can use this insight to make a more informed decision about whether or not a sharded database architecture is right for your application.




      How To Set Up a Remote Database to Optimize Site Performance with MySQL on Ubuntu 18.04


      Introduction

      As your application or website grows, there may come a point where you’ve outgrown your current server setup. If you are hosting your web server and database backend on the same machine, it may be a good idea to separate these two functions so that each can operate on its own hardware and share the load of responding to your visitors’ requests.

      In this guide, we’ll go over how to configure a remote MySQL database server that your web application can connect to. We will use WordPress as an example in order to have something to work with, but the technique is widely applicable to any application backed by MySQL.

      Prerequisites

      Before beginning this tutorial, you will need:

      • Two Ubuntu 18.04 servers. Each should have a non-root user with sudo privileges and a UFW firewall enabled, as described in our Initial Server Setup with Ubuntu 18.04 tutorial. One of these servers will host your MySQL backend, and throughout this guide we will refer to it as the database server. The other will connect to your database server remotely and act as your web server; likewise, we will refer to it as the web server over the course of this guide.
      • Nginx and PHP installed on your web server. Our tutorial How To Install Linux, Nginx, MySQL, PHP (LEMP stack) in Ubuntu 18.04 will guide you through the process, but note that you should skip Step 2 of this tutorial, which focuses on installing MySQL, as you will install MySQL on your database server.
      • MySQL installed on your database server. Follow “How To Install MySQL on Ubuntu 18.04” to set this up.
      • Optionally (but strongly recommended), TLS/SSL certificates from Let’s Encrypt installed on your web server. You’ll need to purchase a domain name and have DNS records set up for your server, but the certificates themselves are free. Our guide How To Secure Nginx with Let’s Encrypt on Ubuntu 18.04 will show you how to obtain these certificates.

      Step 1 — Configuring MySQL to Listen for Remote Connections

      Having one’s data stored on a separate server is a good way to expand gracefully after hitting the performance ceiling of a one-machine configuration. It also provides the basic structure necessary to load balance and expand your infrastructure even more at a later time. After installing MySQL by following the prerequisite tutorial, you’ll need to change some configuration values to allow connections from other computers.

      Most of the MySQL server’s configuration changes can be made in the mysqld.cnf file, which is stored in the /etc/mysql/mysql.conf.d/ directory by default. Open up this file with root privileges in your preferred editor. Here, we’ll use nano:

      • sudo nano /etc/mysql/mysql.conf.d/mysqld.cnf

      This file is divided into sections denoted by labels in square brackets ([ and ]). Find the section labeled mysqld:

      /etc/mysql/mysql.conf.d/mysqld.cnf

      . . .
      [mysqld]
      . . .
      

      Within this section, look for a parameter called bind-address. This tells the database software which network address to listen for connections on.

      By default, this is set to 127.0.0.1, meaning that MySQL is configured to only look for local connections. You need to change this to reference an external IP address where your server can be reached.

      If both of your servers are in a datacenter with private networking capabilities, use your database server’s private network IP. Otherwise, you can use its public IP address:

      /etc/mysql/mysql.conf.d/mysqld.cnf

      [mysqld]
      . . .
      bind-address = db_server_ip
      

      Because you’ll connect to your database over the internet, it’s recommended that you require encrypted connections to keep your data secure. If you don’t encrypt your MySQL connection, anybody on the network could sniff sensitive information between your web and database servers. To encrypt MySQL connections, add the following line after the bind-address line you just updated:

      /etc/mysql/mysql.conf.d/mysqld.cnf

      [mysqld]
      . . .
      require_secure_transport = on
      . . .
      

      Save and close the file when you are finished. If you’re using nano, do this by pressing CTRL+X, Y, and then ENTER.

      For SSL connections to work, you will need to create some keys and certificates. MySQL comes with a command that will automatically set these up. Run the following command, which creates the necessary files. It also makes them readable by the MySQL server by specifying the UID of the mysql user:

      • sudo mysql_ssl_rsa_setup --uid=mysql

      To force MySQL to update its configuration and read the new SSL information, restart the database:

      • sudo systemctl restart mysql

      To confirm that the server is now listening on the external interface, run the following netstat command:

      • sudo netstat -plunt | grep mysqld

      Output

      tcp 0 0 db_server_ip:3306 0.0.0.0:* LISTEN 27328/mysqld

      netstat prints statistics about your server’s networking system. This output shows us that a process called mysqld is attached to the db_server_ip at port 3306, the standard MySQL port, confirming that the server is listening on the appropriate interface.

      Next, open up that port on the firewall to allow traffic through:

      • sudo ufw allow mysql

      Those are all the configuration changes you need to make to MySQL. Next, we will go over how to set up a database and some user profiles, one of which you will use to access the server remotely.

      Step 2 — Setting Up a WordPress Database and Remote Credentials

      Even though MySQL itself is now listening on an external IP address, there are currently no remote-enabled users or databases configured. Let's create a database for WordPress, and a pair of users that can access it.

      Begin by connecting to MySQL as the root MySQL user:
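      • sudo mysql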

      Note: If you have password authentication enabled, as described in Step 3 of the prerequisite MySQL tutorial, you will instead need to use the following command to access the MySQL shell:
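      • mysql -u root -p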

      After running this command, you will be asked for your MySQL root password and, after entering it, you'll be given a new mysql> prompt.

      From the MySQL prompt, create a database that WordPress will use. It may be helpful to give this database a recognizable name so that you can easily identify it later on. Here, we will name it wordpress:

      • CREATE DATABASE wordpress;

      Now that you've created your database, you next need to create a pair of users. We will create a local-only user as well as a remote user tied to the web server’s IP address.

First, create your local user, wordpressuser, using localhost in the declaration so that this account will only match local connection attempts. Be sure to replace password in the following command with a strong password of your choosing:

      • CREATE USER 'wordpressuser'@'localhost' IDENTIFIED BY 'password';

      Then grant this account full access to the wordpress database:

      • GRANT ALL PRIVILEGES ON wordpress.* TO 'wordpressuser'@'localhost';

      This user can now do any operation on the database for WordPress, but this account cannot be used remotely, as it only matches connections from the local machine. With this in mind, create a companion account that will match connections exclusively from your web server. For this, you'll need your web server's IP address.

Please note that you must use an IP address on the same network that you configured in your mysqld.cnf file. This means that if you specified a private networking IP in the mysqld.cnf file, you'll need to include the private IP of your web server in the following two commands. If you configured MySQL to use the public internet, you should match that with the web server's public IP address. As before, be sure to replace password with a strong password:

• CREATE USER 'wordpressuser'@'web_server_ip' IDENTIFIED BY 'password';

      After creating your remote account, give it the same privileges as your local user:

      • GRANT ALL PRIVILEGES ON wordpress.* TO 'wordpressuser'@'web_server_ip';

      Lastly, flush the privileges so MySQL knows to begin using them:
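      • FLUSH PRIVILEGES;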

      Then exit the MySQL prompt by typing:
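      • exit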

      Now that you've set up a new database and a remote-enabled user, you can move on to testing whether you're able to connect to the database from your web server.

      Step 3 — Testing Remote and Local Connections

      Before continuing, it's best to verify that you can connect to your database from both the local machine — your database server — and from your web server with each of the wordpressuser accounts.

      First, test the local connection from your database server by attempting to log in with your new account:

      • mysql -u wordpressuser -p

      When prompted, enter the password that you set up for this account.

      If you are given a MySQL prompt, then the local connection was successful. You can exit out again by typing:
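      • exit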

      Next, log into your web server to test remote connections:
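      • ssh your_user@web_server_ip

      Here, your_user is a placeholder for your user account on the web server; as elsewhere in this guide, replace web_server_ip with your web server's IP address.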

      You'll need to install some client tools for MySQL on your web server in order to access the remote database. First, update your local package cache if you haven't done so recently:
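      • sudo apt update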

      Then install the MySQL client utilities:

      • sudo apt install mysql-client

      Following this, connect to your database server using the following syntax:

      • mysql -u wordpressuser -h db_server_ip -p

      Again, you must make sure that you are using the correct IP address for the database server. If you configured MySQL to listen on the private network, enter your database's private network IP. Otherwise, enter your database server's public IP address.

      You will be asked for the password for your wordpressuser account. After entering it, and if everything is working as expected, you will see the MySQL prompt. Verify that the connection is using SSL with the following command:
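      • status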

      If the connection is indeed using SSL, the SSL: line will indicate this, as shown here:

      Output

--------------
mysql  Ver 14.14 Distrib 5.7.18, for Linux (x86_64) using  EditLine wrapper

Connection id:          52
Current database:
Current user:           wordpressuser@203.0.113.111
SSL:                    Cipher in use is DHE-RSA-AES256-SHA
Current pager:          stdout
Using outfile:          ''
Using delimiter:        ;
Server version:         5.7.18-0ubuntu0.16.04.1 (Ubuntu)
Protocol version:       10
Connection:             203.0.113.111 via TCP/IP
Server characterset:    latin1
Db     characterset:    latin1
Client characterset:    utf8
Conn.  characterset:    utf8
TCP port:               3306
Uptime:                 3 hours 43 min 40 sec

Threads: 1  Questions: 1858  Slow queries: 0  Opens: 276  Flush tables: 1  Open tables: 184  Queries per second avg: 0.138
--------------

      After verifying that you can connect remotely, go ahead and exit the prompt:
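      • exit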

      With that, you've verified local access and access from the web server, but you have not verified that other connections will be refused. For an additional check, try doing the same thing from a third server for which you did not configure a specific user account in order to make sure that this other server is not granted access.

      Note that before running the following command to attempt the connection, you may have to install the MySQL client utilities as you did above:

      • mysql -u wordpressuser -h db_server_ip -p

      This should not complete successfully, and should throw back an error that looks similar to this:

      Output

      ERROR 1130 (HY000): Host '203.0.113.12' is not allowed to connect to this MySQL server

This is both expected and desired: expected because you haven't created a MySQL user that's allowed to connect from this server, and desired because it confirms that your database server will deny unauthorized users access to your MySQL server.

      After successfully testing your remote connection, you can proceed to installing WordPress on your web server.

      Step 4 — Installing WordPress

      To demonstrate the capabilities of your new remote-capable MySQL server, we will go through the process of installing and configuring WordPress — the popular content management system — on your web server. This will require you to download and extract the software, configure your connection information, and then run through WordPress's web-based installation.

      On your web server, download the latest release of WordPress to your home directory:

      • cd ~
      • curl -O https://wordpress.org/latest.tar.gz

      Extract the files, which will create a directory called wordpress in your home directory:
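      • tar xzvf latest.tar.gz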

      WordPress includes a sample configuration file which we'll use as a starting point. Make a copy of this file, removing -sample from the filename so it will be loaded by WordPress:

      • cp ~/wordpress/wp-config-sample.php ~/wordpress/wp-config.php

      When you open the file, your first order of business will be to adjust some secret keys to provide more security to your installation. WordPress provides a secure generator for these values so that you do not have to try to come up with good values on your own. These are only used internally, so it won't hurt usability to have complex, secure values here.

      To grab secure values from the WordPress secret key generator, type:

      • curl -s https://api.wordpress.org/secret-key/1.1/salt/

      This will print some keys to your output. You will add these to your wp-config.php file momentarily:

      Warning! It is important that you request your own unique values each time. Do not copy the values shown here!

      Output

define('AUTH_KEY',         'L4|2Yh(giOtMLHg3#] DO NOT COPY THESE VALUES %G00o|te^5YG@)');
define('SECURE_AUTH_KEY',  'DCs-k+MwB90/-E(=!/ DO NOT COPY THESE VALUES +WBzDq:7U[#Wn9');
define('LOGGED_IN_KEY',    '*0kP!|VS.K=;#fPMlO DO NOT COPY THESE VALUES +&[%8xF*,18c @');
define('NONCE_KEY',        'fmFPF?UJi&(j-{8=$- DO NOT COPY THESE VALUES CCZ?Q+_~1ZU~;G');
define('AUTH_SALT',        '@qA7f}2utTEFNdnbEa DO NOT COPY THESE VALUES t}Vw+8=K%20s=a');
define('SECURE_AUTH_SALT', '%BW6s+d:7K?-`C%zw4 DO NOT COPY THESE VALUES 70U}PO1ejW+7|8');
define('LOGGED_IN_SALT',   '-l>F:-dbcWof%4kKmj DO NOT COPY THESE VALUES 8Ypslin3~d|wLD');
define('NONCE_SALT',       '4J(<`4&&F (WiK9K#] DO NOT COPY THESE VALUES ^ZikS`es#Fo:V6');

      Copy the output you received to your clipboard, then open the configuration file in your text editor:

      • nano ~/wordpress/wp-config.php

      Find the section that contains the dummy values for those settings. It will look something like this:

~/wordpress/wp-config.php

      . . .
      define('AUTH_KEY',         'put your unique phrase here');
      define('SECURE_AUTH_KEY',  'put your unique phrase here');
      define('LOGGED_IN_KEY',    'put your unique phrase here');
      define('NONCE_KEY',        'put your unique phrase here');
      define('AUTH_SALT',        'put your unique phrase here');
      define('SECURE_AUTH_SALT', 'put your unique phrase here');
      define('LOGGED_IN_SALT',   'put your unique phrase here');
      define('NONCE_SALT',       'put your unique phrase here');
      . . .
      

      Delete those lines and paste in the values you copied from the command line.

      Next, enter the connection information for your remote database. These configuration lines are at the top of the file, just above where you pasted in your keys. Remember to use the same IP address you used in your remote database test earlier:

~/wordpress/wp-config.php

      . . .
      /** The name of the database for WordPress */
      define('DB_NAME', 'wordpress');
      
      /** MySQL database username */
      define('DB_USER', 'wordpressuser');
      
      /** MySQL database password */
      define('DB_PASSWORD', 'password');
      
      /** MySQL hostname */
      define('DB_HOST', 'db_server_ip');
      . . .
      

Finally, add the following line anywhere in the file. This tells WordPress to use an SSL connection to your MySQL database:

~/wordpress/wp-config.php

      define('MYSQL_CLIENT_FLAGS', MYSQLI_CLIENT_SSL);
      

      Save and close the file.

      Next, copy the files and directories found in your ~/wordpress directory to Nginx's document root. Note that this command includes the -a flag to make sure all the existing permissions are carried over:

      • sudo cp -a ~/wordpress/* /var/www/html

      After this, the only thing left to do is modify the file ownership. Change the ownership of all the files in the document root over to www-data, Ubuntu's default web server user:

      • sudo chown -R www-data:www-data /var/www/html

      With that, WordPress is installed and you're ready to run through its web-based setup routine.

      Step 5 — Setting Up WordPress Through the Web Interface

      WordPress has a web-based setup process. As you go through it, it will ask a few questions and install all the tables it needs in your database. Here, we will go over the initial steps of setting up WordPress, which you can use as a starting point for building your own custom website that uses a remote database backend.

      Navigate to the domain name (or public IP address) associated with your web server:

      http://example.com
      

      You will see a language selection screen for the WordPress installer. Select the appropriate language and click through to the main installation screen:

      WordPress install screen

On the main installation screen, you'll be asked to provide a site title, a username and strong password for your administrative account, and your email address. Once you have submitted your information, you will need to log into the WordPress admin interface using the account you just created. You will then be taken to a dashboard where you can customize your new WordPress site.

      Conclusion

      By following this tutorial, you've set up a MySQL database to accept SSL-protected connections from a remote WordPress installation. The commands and techniques used in this guide are applicable to any web application written in any programming language, but the specific implementation details will differ. Refer to your application or language's database documentation for more information.


