      How To Install WordPress with a Managed Database on Ubuntu 18.04


      A previous version of this tutorial was written by Justin Ellingwood

      Introduction

      WordPress is the most popular CMS (content management system) on the internet. It’s a great choice for getting a website up and running quickly, and after the initial setup, almost all administration can be done through the web frontend.

WordPress is designed to pull content – including posts, comments, user profiles, and other data – from a database backend. As a website grows and must serve more and more traffic, it can eventually outgrow its initial database. To resolve this, you can scale up the database by migrating your data to a machine with more RAM or CPU, but that is a tedious process which runs the risk of data loss or corruption. This is why some WordPress developers choose to build their websites on managed databases, which allow users to scale their database automatically with a far lower risk of data loss.

      In this guide, we’ll focus on setting up a WordPress instance with a managed MySQL database and an Ubuntu 18.04 server. This will require you to install PHP and Apache to serve the content over the web.

      Prerequisites

      In order to complete this tutorial, you will need:

      • Access to an Ubuntu 18.04 server: This server should have a non-root sudo-enabled user and a firewall configured. You can set this up by following our Ubuntu 18.04 initial server setup guide.
      • A managed MySQL database: To provision a Managed MySQL Database from DigitalOcean, see our Managed Databases product documentation. Note that this guide will refer to DigitalOcean Managed Databases in examples, but the instructions provided here should also generally work for managed MySQL databases from other cloud providers.
      • A LAMP stack installed on your server: In addition to a database, WordPress requires a web server and PHP to function correctly. Setting up a complete LAMP stack (Linux, Apache, MySQL, and PHP) fulfills all of these requirements. Follow this guide to install and configure this software. As you follow this guide, make sure that you set up a virtual host to point to a domain name that you own. Additionally, be sure to skip Step 2, as installing mysql-server on your machine will make your managed database instance redundant.
• TLS/SSL security implemented for your site: If you have a domain name, the easiest way to secure your site is with Let’s Encrypt, which provides free, trusted certificates. Follow our Let’s Encrypt guide for Apache to set this up. Note that this will also require you to obtain a domain name and set up DNS records on your server. Follow this introduction to DigitalOcean DNS for details on how to configure this. Alternatively, if you don’t have a domain name, you can use a self-signed certificate for your site.

      When you are finished with the setup steps, log into your server as your non-root user and continue below.

      Step 1 – Adding the MySQL Software Repository and Installing mysql-client

      In order to configure your managed MySQL instance, you will need to install a client that will allow you to access the database from your server. This step will walk you through the process of installing the mysql-client package.

      In many cases, you can just install mysql-client with the apt command, but if you’re using the default Ubuntu repositories this will install version 5.7 of the program. In order to access a DigitalOcean Managed MySQL database, you will need to install version 8.0 or above. To do so, you must first add the MySQL software repository before installing the package.

      Begin by navigating to the MySQL APT Repository page in your web browser. Find the Download button in the lower-right corner and click through to the next page. This page will prompt you to log in or sign up for an Oracle web account. You can skip that and instead look for the link that says No thanks, just start my download. Right-click the link and select Copy Link Address (this option may be worded differently, depending on your browser).

      Now you’re ready to download the file. On your server, move to a directory you can write to:
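
      • cd /tmp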

Download the file using curl, remembering to paste the address you just copied in place of the highlighted portion of the following command. You also need to pass two command line flags to curl. -O instructs curl to output to a file instead of standard output. The -L flag makes curl follow HTTP redirects, which is necessary in this case because the address you copied actually redirects to another location before the file downloads:

      • curl -OL https://dev.mysql.com/get/mysql-apt-config_0.8.13-1_all.deb

      The file should now be downloaded in your current directory. List the files to make sure:
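
      • ls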

      You will see the filename listed in the output:

      Output

      mysql-apt-config_0.8.13-1_all.deb . . .

      Now you can add the MySQL APT repository to your system's repository list. The dpkg command is used to install, remove, and inspect .deb software packages. The following command includes the -i flag, indicating that you'd like to install from the specified file:

      • sudo dpkg -i mysql-apt-config*

      During the installation, you'll be presented with a configuration screen where you can specify which version of MySQL you'd prefer, along with an option to install repositories for other MySQL-related tools. The defaults will add the repository information for the latest stable version of MySQL and nothing else. This is what we want, so use the down arrow to navigate to the Ok menu option and hit ENTER.

      Selecting mysql-apt-config configuration options

      Following that, the package will finish adding the repository. Refresh your apt package cache to make the new software packages available:
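
      • sudo apt update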

      Next, you can clean up your system a bit and delete the file you downloaded, as you won't need it in the future:
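
      • rm mysql-apt-config*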

      Note: If you ever need to update the configuration of these repositories, just run the following command to select your new options:

      • sudo dpkg-reconfigure mysql-apt-config

      After selecting your new options, run the following command to refresh your package cache:
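
      • sudo apt update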

      Now that you've added the MySQL repositories, you're ready to install the actual MySQL client software. Do so with the following apt command:

      • sudo apt install mysql-client

      Once that command finishes, check the software version number to ensure that you have the latest release:
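
      • mysql --version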

      Output

      mysql Ver 8.0.17-cluster for Linux on x86_64 (MySQL Community Server - GPL)

      You're now able to connect to your managed database and begin preparing it to function with WordPress.

      Step 2 – Creating a MySQL Database and User for WordPress

      WordPress uses MySQL to manage and store site and user information. Assuming you have completed all the prerequisite tutorials, you will have already provisioned a managed MySQL instance. Here, we'll take the preparatory step of creating a database and a user for WordPress to use.

      Most managed database providers provide a uniform resource identifier (URI) used for connecting to the database instance. If you're using a DigitalOcean Managed Database, you can find the relevant connection information in your Cloud Control Panel.

      First, click Databases in the left-hand sidebar menu and select the MySQL database you want to use for your WordPress installation. Scroll down to the Connection Details section and copy the link in the host field. Then paste this link into the following command, replacing host_uri with the information you just copied. Likewise, copy the port number in the port field – which will be 25060 on a DigitalOcean Managed Database – and replace port with that number. Additionally, if this is your first time connecting to your managed database and you've not created your own administrative MySQL user, copy the value in the username field and paste it into the command, replacing user:

      • mysql -u user -p -h host_uri -P port

      This command includes the -p flag, which will prompt you for the password of the MySQL user you specified. For a DigitalOcean Managed Database's default doadmin user, you can find this by clicking the show link in the Connection Details section to reveal the password. Copy and paste it into your terminal when prompted.

      Note: If you are not using a DigitalOcean Managed Database, your connection options may differ. If that's the case, you should consult your provider's documentation for instructions on connecting third party applications to your database.

      From the MySQL prompt, create a new database that WordPress will control. You can call this whatever you would like, but we will use the name wordpress in this guide to keep it simple. Create the database for WordPress by typing:

      • CREATE DATABASE wordpress DEFAULT CHARACTER SET utf8 COLLATE utf8_unicode_ci;

      Note: Every MySQL statement must end in a semi-colon (;). Check to make sure this is present if you are running into any issues.

      Next, create a new MySQL user account that you will use exclusively to operate on the new database. Creating single-purpose databases and accounts is a good idea from a management and security standpoint. We will use the name wordpressuser in this guide, but feel free to change this if you'd like.

      Run the following command, but replace your_server_ip with your Ubuntu server's IP address. Be aware, though, that this will limit wordpressuser to only be able to connect from your LAMP server; if you plan to manage WordPress from your local computer, you should enter that machine's IP address instead. Additionally, choose a strong password for your database user:

• CREATE USER 'wordpressuser'@'your_server_ip' IDENTIFIED WITH mysql_native_password BY 'password';

      Note: If you do not know what your server's public IP address is, there are a number of ways you can find it. Usually, this is the address you use to connect to your server through SSH.

      One method is to use the curl utility to contact an outside party to tell you how it sees your server. For example, you can use curl to contact an IP-checking tool like ICanHazIP:

      • curl http://icanhazip.com

      This command will return your server's public IP address in your output.

      Then grant this user access to the database you just created. Do so by running the following command:

• GRANT ALL ON wordpress.* TO 'wordpressuser'@'your_server_ip';

      You now have a database and user account, each made specifically for WordPress. Next, you need to flush the privileges so that the current MySQL session recognizes the changes you've made:
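
      • FLUSH PRIVILEGES;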

      Following that, exit out of MySQL by typing:
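
      • EXIT;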

      That takes care of configuring your managed MySQL database to function with WordPress. In the next step, you will install a few PHP extensions in order to get more functionality out of the CMS.

      Step 3 – Installing Additional PHP Extensions

Assuming you followed the prerequisite LAMP stack tutorial, you will have already installed a few extensions that allow PHP to communicate properly with MySQL. WordPress and many of its plugins leverage additional PHP extensions to add further functionality.

      To download and install some of the more popular PHP extensions for use with WordPress, run the following command:

      • sudo apt install php-curl php-gd php-mbstring php-xml php-xmlrpc php-soap php-intl php-zip

      Note: Each WordPress plugin has its own set of requirements. Some may require you to install additional PHP packages. Check your plugin documentation to see which extensions it requires. If they are available, they can be installed with apt as demonstrated above.

      You will restart Apache to load these new extensions in the next section. If you're returning here to install additional plugins, though, you can restart Apache now by typing:

      • sudo systemctl restart apache2

      Otherwise, continue on to Step 4.

      Step 4 – Adjusting Apache's Configuration to Allow for .htaccess Overrides and Rewrites

      In order for Apache to be able to properly serve your WordPress installation, you must make a few minor adjustments to your Apache configuration.

      If you followed the prerequisite tutorials, you should already have a configuration file for your site in the /etc/apache2/sites-available/ directory. We'll use /etc/apache2/sites-available/your_domain.conf as an example here, but you should substitute the path to your configuration file where appropriate.

      Additionally, we will use /var/www/your_domain as the root directory in this example WordPress install. You should use the web root specified in your own configuration.

      Note: It's possible you are using the 000-default.conf default configuration (with /var/www/html as your web root). This is fine to use if you're only going to host one website on this server. If not, it's best to split the necessary configuration into logical chunks, one file per site.

      Currently, the use of .htaccess files is disabled. WordPress and many WordPress plugins use these files extensively for in-directory tweaks to the web server's behavior.

      Open the Apache configuration file for your website:

      • sudo nano /etc/apache2/sites-available/your_domain.conf

      To allow .htaccess files, you need to set the AllowOverride directive within a Directory block pointing to your document root. Add the following block of text inside the VirtualHost block in your configuration file, being sure to use the correct web root directory:

      /etc/apache2/sites-available/your_domain.conf

      <Directory /var/www/your_domain>
          AllowOverride All
      </Directory>
      

      When you are finished, save and close the file.

      Next, enable mod_rewrite so that you can employ the WordPress permalink feature:
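
      • sudo a2enmod rewrite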

      Before implementing the changes you've just made, check to make sure there aren't any syntax errors in your configuration file:

      • sudo apache2ctl configtest

      The output might have a message that looks like this:

      Output

AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.1.1. Set the 'ServerName' directive globally to suppress this message
Syntax OK

      If you wish to suppress the top line, just add a ServerName directive to your main (global) Apache configuration file at /etc/apache2/apache2.conf. The ServerName can be your server's domain or IP address. However, this is just a message; it doesn't affect the functionality of your site and as long as the output contains Syntax OK, you're all set to continue.
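
      If you do want to set it, the directive is a single line in the global configuration file. Here is a minimal sketch, assuming your domain is your_domain (substitute your own domain or IP address):

      /etc/apache2/apache2.conf

      . . .
      # Global fallback name used in self-referential URLs and startup checks
      ServerName your_domain
      . . .
      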

      Restart Apache to implement the changes:

      • sudo systemctl restart apache2

      With that, you're ready to download and set up WordPress itself.

      Step 5 – Downloading WordPress

      Now that your server software is configured, you can install and configure WordPress. For security reasons, it is always recommended to get the latest version of WordPress from their site.

First, navigate into a writable directory. /tmp will work for the purposes of this step:
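
      • cd /tmp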

      Then download the compressed release by typing:

      • curl -O https://wordpress.org/latest.tar.gz

      Extract the compressed file to create the WordPress directory structure:
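
      • tar xzvf latest.tar.gz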

      You will move these files into your document root momentarily. Before doing so, add a dummy .htaccess file so that this will be available for WordPress to use later.

      Create the file by typing:

      • touch /tmp/wordpress/.htaccess

      Also, copy over the sample configuration file to the filename that WordPress actually reads:

      • cp /tmp/wordpress/wp-config-sample.php /tmp/wordpress/wp-config.php

Create an upgrade directory so that WordPress won't run into permissions issues when it tries to create this directory on its own following a software update:

      • mkdir /tmp/wordpress/wp-content/upgrade

      Then copy the entire contents of the directory into your document root. The following command uses a period at the end of the source directory to indicate that everything within the directory should be copied, including hidden files (like the .htaccess file you just created):

      • sudo cp -a /tmp/wordpress/. /var/www/your_domain

      That takes care of downloading WordPress onto your server. At this point, though, you still won't be able to access the WordPress setup interface in your browser. To fix that, you'll need to make a few changes to your server's WordPress configuration.

      Step 6 – Configuring the WordPress Directory

      Before going through the web-based WordPress setup, you need to adjust some items in your WordPress directory. One important configuration change involves setting up reasonable file permissions and ownership.

      Start by giving ownership of all the files to the www-data user and group. This is the user that the Apache web server runs as on Debian and Ubuntu systems, and Apache will need to be able to read and write WordPress files in order to serve the website and perform automatic updates.

      Update the ownership of your web root directory with chown:

      • sudo chown -R www-data:www-data /var/www/your_domain

      Next run the following two find commands to set the correct permissions on the WordPress directories and files:

• sudo find /var/www/your_domain/ -type d -exec chmod 750 {} \;
• sudo find /var/www/your_domain/ -type f -exec chmod 640 {} \;

These are reasonable permissions to start with. Be aware, though, that some plugins and procedures might require additional updates.

      Now, you need to make some changes to the main WordPress configuration file.

      When you open the file, the first order of business will be to replace some secret keys to provide security for your installation. WordPress provides a secure generator for these values so that you do not have to try to come up with good values on your own. These are only used internally, so it won't hurt usability to have complex, secure values here.

      To grab secure values from the WordPress secret key generator, run the following command:

      • curl -s https://api.wordpress.org/secret-key/1.1/salt/

      You will get back unique values that look something like this:

      Warning! It is important that you request unique values each time. Do NOT copy the values shown here!

      Output

define('AUTH_KEY',         '1jl/vqfs<XhdXoAPz9 DO NOT COPY THESE VALUES c_j{iwqD^<+c9.k<J@4H');
define('SECURE_AUTH_KEY',  'E2N-h2]Dcvp+aS/p7X DO NOT COPY THESE VALUES {Ka(f;rv?Pxf})CgLi-3');
define('LOGGED_IN_KEY',    'W(50,{W^,OPB%PB<JF DO NOT COPY THESE VALUES 2;y&,2m%3]R6DUth[;88');
define('NONCE_KEY',        'll,4UC)7ua+8<!4VM+ DO NOT COPY THESE VALUES #`DXF+[$atzM7 o^-C7g');
define('AUTH_SALT',        'koMrurzOA+|L_lG}kf DO NOT COPY THESE VALUES 07VC*Lj*lD&?3w!BT#-');
define('SECURE_AUTH_SALT', 'p32*p,]z%LZ+pAu:VY DO NOT COPY THESE VALUES C-?y+K0DK_+F|0h{!_xY');
define('LOGGED_IN_SALT',   'i^/G2W7!-1H2OQ+t$3 DO NOT COPY THESE VALUES t6**bRVFSD[Hi])-qS`|');
define('NONCE_SALT',       'Q6]U:K?j4L%Z]}h^q7 DO NOT COPY THESE VALUES 1% ^qUswWgn+6&xqHN&%');

      These are configuration lines that you can paste directly into your configuration file to set secure keys. Copy the output you received now.

      Then, open the WordPress configuration file:

      • sudo nano /var/www/your_domain/wp-config.php

      Find the section that contains the dummy values for those settings. It will look something like this:

      /var/www/your_domain/wp-config.php

      . . .
      
      define('AUTH_KEY',         'put your unique phrase here');
      define('SECURE_AUTH_KEY',  'put your unique phrase here');
      define('LOGGED_IN_KEY',    'put your unique phrase here');
      define('NONCE_KEY',        'put your unique phrase here');
      define('AUTH_SALT',        'put your unique phrase here');
      define('SECURE_AUTH_SALT', 'put your unique phrase here');
      define('LOGGED_IN_SALT',   'put your unique phrase here');
      define('NONCE_SALT',       'put your unique phrase here');
      
      . . .
      

      Delete those lines and paste in the values you copied from the command line:

      /var/www/your_domain/wp-config.php

      . . .
      
      define('AUTH_KEY',         'VALUES COPIED FROM THE COMMAND LINE');
      define('SECURE_AUTH_KEY',  'VALUES COPIED FROM THE COMMAND LINE');
      define('LOGGED_IN_KEY',    'VALUES COPIED FROM THE COMMAND LINE');
      define('NONCE_KEY',        'VALUES COPIED FROM THE COMMAND LINE');
      define('AUTH_SALT',        'VALUES COPIED FROM THE COMMAND LINE');
      define('SECURE_AUTH_SALT', 'VALUES COPIED FROM THE COMMAND LINE');
      define('LOGGED_IN_SALT',   'VALUES COPIED FROM THE COMMAND LINE');
      define('NONCE_SALT',       'VALUES COPIED FROM THE COMMAND LINE');
      
      . . .
      

      Next you need to modify some of the database connection settings at the beginning of the file. First, update the 'DB_NAME', 'DB_USER', and 'DB_PASSWORD' fields to point to the database name, database user, and the associated password that you configured within MySQL:

      /var/www/your_domain/wp-config.php

      . . .
      /** The name of the database for WordPress */
      define('DB_NAME', 'wordpress');
      
      /** MySQL database username */
      define('DB_USER', 'wordpressuser');
      
      /** MySQL database password */
      define('DB_PASSWORD', 'password');
      
      . . .
      

      You will also need to replace localhost in the 'DB_HOST' field with your managed database's host. Additionally, append a colon (:) and your database's port number to the host:

/var/www/your_domain/wp-config.php

      . . .
      
      /** MySQL hostname */
      define( 'DB_HOST', 'managed_database_host:managed_database_port' );
      
      . . .
      

The last change you need to make is to set the method that WordPress will use to write to the filesystem. Since you've already given the web server permission to write where it needs to, you can explicitly set the filesystem method to direct. Failure to set this with your current settings would result in WordPress prompting for FTP credentials when you perform certain actions.

      This setting can be added below the database connection settings, or anywhere else in the file:

      /var/www/your_domain/wp-config.php

      . . .
      
      define('FS_METHOD', 'direct');
      . . .
      

      Save and close the file when you are finished.

      After making those changes, you're all set to finish the process of installing WordPress in your web browser. However, there's one more step that we recommend you complete to add an extra layer of security to your configuration.

Step 7 – Configuring WordPress to Communicate with MySQL Over TLS/SSL

      At this point, your WordPress installation is communicating with your managed MySQL database. However, there's no guarantee that data transfers between the two machines are secure. In this step, we will configure WordPress to communicate with your MySQL instance over a TLS/SSL connection to ensure secure communications between the two machines.

      To do so, you'll need your managed database's CA certificate. For a DigitalOcean Managed Database, you can find this by once again navigating to the Databases tab in your Control Panel. Click on your database, and find the Connection Details section. There will be a button there that reads Download the CA certificate. Click this button to download the certificate to your local machine.

      Then transfer this file to your WordPress server. If your local machine is running Linux or macOS, you can use a tool like scp:

      • scp /path/to/file/ca-certificate.crt sammy@your_server_ip:/tmp

      If your local machine is running Windows, you can use an alternative tool like WinSCP.

Once the CA certificate is on your server, move it to the /usr/local/share/ca-certificates/ directory, Ubuntu's trusted certificate store:

      • sudo mv /tmp/ca-certificate.crt /usr/local/share/ca-certificates/

      Following this, run the update-ca-certificates command. This program looks for certificates within /usr/local/share/ca-certificates, adds any new ones to the /etc/ssl/certs/ directory, and generates a list of trusted SSL certificates based on its contents:

      • sudo update-ca-certificates

      Then, reopen your wp-config.php file:

• sudo nano /var/www/your_domain/wp-config.php

      Somewhere in the file, add the following line:

      /var/www/your_domain/wp-config.php

      . . .
      define('MYSQL_CLIENT_FLAGS', MYSQLI_CLIENT_SSL);
      . . .
      

      Save and close the file.

      Following that, WordPress will securely communicate with your managed MySQL database.

      Step 8 – Completing the Installation Through the Web Interface

      Now that the server configuration is complete, you can complete the installation through the WordPress web interface.

      In your web browser, navigate to your server's domain name or public IP address:

      https://server_domain_or_IP
      

      Assuming there aren't any errors in your WordPress or Apache configurations, you'll see the WordPress language selection splash page. Select the language you would like to use:

      WordPress language selection

      After selecting your language, you will see the main setup page.

      Select a name for your WordPress site and choose a username (it is recommended not to choose something like "admin" for security purposes). A strong password is generated automatically. Save this password or enter an alternative strong password.

      Enter your email address and select whether you want to discourage search engines from indexing your site:

      WordPress setup installation

      When you click ahead, you will be taken to a page that prompts you to log in:

      WordPress login prompt

      Once you log in, you will be taken to the WordPress administration dashboard:

WordPress administration dashboard

      From here, you can begin customizing your new WordPress site and start publishing content. If this is your first time using WordPress, we encourage you to explore the interface a bit to get acquainted with your new CMS.

      Conclusion

      By completing this guide, you will have WordPress installed and ready to use on your server. Additionally, your WordPress installation is dynamically pulling posts, pages, and other content from your managed MySQL database.

A common next step is to choose the permalink settings for your posts, which you can find under Settings > Permalinks. You could also select a new theme in Appearance > Themes. Once you start loading some content into your site, you could also configure a CDN to speed up your site's asset delivery.




      How To Analyze Managed PostgreSQL Database Statistics Using the Elastic Stack on Ubuntu 18.04


      The author selected the Free and Open Source Fund to receive a donation as part of the Write for DOnations program.

      Introduction

      Database monitoring is the continuous process of systematically tracking various metrics that show how the database is performing. By observing the performance data, you can gain valuable insights and identify possible bottlenecks, as well as find additional ways of improving database performance. Such systems often implement alerting, which notifies administrators when things go wrong. Gathered statistics can be used to not only improve the configuration and workflow of the database, but also those of client applications.

The benefit of using the Elastic Stack (ELK stack) for monitoring your managed database is its excellent support for searching and the ability to ingest new data very quickly. It does not excel at updating the data, but this trade-off is acceptable for monitoring and logging purposes, where past data is almost never changed. Elasticsearch offers powerful means of querying the data, which you can use through Kibana to get a better understanding of how the database fares through different time periods. This will allow you to correlate database load with real-life events to gain insight into how the database is being used.

      In this tutorial, you’ll import database metrics, generated by the PostgreSQL statistics collector, into Elasticsearch via Logstash. This entails configuring Logstash to pull data from the database using the PostgreSQL JDBC connector to send it to Elasticsearch for indexing immediately afterward. The imported data can later be analyzed and visualized in Kibana. Then, if your database is brand new, you’ll use pgbench, a PostgreSQL benchmarking tool, to create more interesting visualizations. In the end, you’ll have an automated system pulling in PostgreSQL statistics for later analysis.

      Prerequisites

      Step 1 — Setting up Logstash and the PostgreSQL JDBC Driver

      In this section, you will install Logstash and download the PostgreSQL JDBC driver so that Logstash will be able to connect to your managed database.

      Start off by installing Logstash with the following command:

      • sudo apt install logstash -y

      Once Logstash is installed, enable the service to automatically start on boot:

      • sudo systemctl enable logstash

      Logstash is written in Java, so in order to connect to PostgreSQL it requires the PostgreSQL JDBC (Java Database Connectivity) library to be available on the system it is running on. Because of an internal limitation, Logstash will properly load the library only if it is found under the /usr/share/logstash/logstash-core/lib/jars directory, where it stores third-party libraries it uses.

Head over to the download page of the JDBC library and copy the link to the latest version. Then, download it using curl by running the following command:

      • sudo curl https://jdbc.postgresql.org/download/postgresql-42.2.6.jar -o /usr/share/logstash/logstash-core/lib/jars/postgresql-jdbc.jar

At the time of writing, the latest version of the library was 42.2.6, with Java 8 as the supported runtime version. Ensure you download the latest version, pairing it with a Java version that both the JDBC driver and Logstash support.

Logstash stores its configuration files under /etc/logstash/conf.d, while the Logstash binary itself lives under /usr/share/logstash/bin. Before you create a configuration that will pull statistics from your database, you'll need to enable the JDBC plugin in Logstash by running the following command:

      • sudo /usr/share/logstash/bin/logstash-plugin install logstash-input-jdbc

      You’ve installed Logstash using apt and downloaded the PostgreSQL JDBC library so that Logstash can use it to connect to your managed database. In the next step, you will configure Logstash to pull statistical data from it.

      Step 2 — Configuring Logstash To Pull Statistics

      In this section, you will configure Logstash to pull metrics from your managed PostgreSQL database.

You’ll configure Logstash to watch over three PostgreSQL statistics views (an example query against one of them follows this list), namely:

      • pg_stat_database: provides statistics about each database, including its name, number of connections, transactions, rollbacks, rows returned by querying the database, deadlocks, and so on. It has a stats_reset field, which specifies when the statistics were last reset.
      • pg_stat_user_tables: provides statistics about each table created by the user, such as the number of inserted, deleted, and updated rows.
      • pg_stat_user_indexes: collects data about all indexes in user-created tables, such as the number of times a particular index has been scanned.
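
      If you'd like to inspect the raw data these views expose before wiring up Logstash, you can query them directly from a psql session connected to your managed database. A minimal sketch against pg_stat_database, using a few of its standard columns:

      SELECT datname, numbackends, xact_commit, xact_rollback, tup_inserted
        FROM pg_stat_database;
      

      Each row describes one database on the server; since the Logstash inputs below run SELECT *, every column of these views will be shipped to Elasticsearch as document fields.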

      You’ll store the configuration for indexing PostgreSQL statistics in Elasticsearch in a file named postgresql.conf under the /etc/logstash/conf.d directory, where Logstash stores configuration files. When started as a service, it will automatically run them in the background.

      Create postgresql.conf using your favorite editor (for example, nano):

      • sudo nano /etc/logstash/conf.d/postgresql.conf

      Add the following lines:

      /etc/logstash/conf.d/postgresql.conf

      input {
              # pg_stat_database
              jdbc {
                      jdbc_driver_library => ""
                      jdbc_driver_class => "org.postgresql.Driver"
                      jdbc_connection_string => "jdbc:postgresql://host:port/defaultdb"
                      jdbc_user => "username"
                      jdbc_password => "password"
                      statement => "SELECT * FROM pg_stat_database"
                      schedule => "* * * * *"
                      type => "pg_stat_database"
              }
      
              # pg_stat_user_tables
              jdbc {
                      jdbc_driver_library => ""
                      jdbc_driver_class => "org.postgresql.Driver"
                      jdbc_connection_string => "jdbc:postgresql://host:port/defaultdb"
                      jdbc_user => "username"
                      jdbc_password => "password"
                      statement => "SELECT * FROM pg_stat_user_tables"
                      schedule => "* * * * *"
                      type => "pg_stat_user_tables"
              }
      
              # pg_stat_user_indexes
              jdbc {
                      jdbc_driver_library => ""
                      jdbc_driver_class => "org.postgresql.Driver"
                      jdbc_connection_string => "jdbc:postgresql://host:port/defaultdb"
                      jdbc_user => "username"
                      jdbc_password => "password"
                      statement => "SELECT * FROM pg_stat_user_indexes"
                      schedule => "* * * * *"
                      type => "pg_stat_user_indexes"
              }
      }
      
      output {
              elasticsearch {
                      hosts => "http://localhost:9200"
                      index => "%{type}"
              }
      }
      

      Remember to replace host with your host address, port with the port to which you can connect to your database, username with the database user username, and password with its password. All these values can be found in the Control Panel of your managed database.

In this configuration, you define three JDBC inputs and one Elasticsearch output. The three inputs pull data from the pg_stat_database, pg_stat_user_tables, and pg_stat_user_indexes views, respectively. They all set the jdbc_driver_library parameter to an empty string, because the PostgreSQL JDBC library is in a folder that Logstash automatically loads.

Then, they set the jdbc_driver_class, whose value is specific to the JDBC library, and provide a jdbc_connection_string, which details how to connect to the database. The jdbc: part signifies that it is a JDBC connection, while postgresql:// indicates that the target database is PostgreSQL. Next come the host and port of the database, and after the forward slash you also specify a database to connect to; this is because PostgreSQL requires you to be connected to a database to be able to issue any queries. Here, it is set to the default database that always exists and cannot be deleted, aptly named defaultdb.

      Next, they set a username and password of the user through which the database will be accessed. The statement parameter contains a SQL query that should return the data you wish to process—in this configuration, it selects all rows from the appropriate database.

The schedule parameter accepts a string in cron syntax that defines when Logstash should run this input; omitting it completely will make Logstash run it only once. Specifying * * * * *, as you have done here, will tell Logstash to run it every minute. You can specify your own cron string if you want to collect data at different intervals.
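
      For example, if every minute is more frequent than you need, changing the schedule line in each input to the following (standard cron syntax, so this is only one possibility) would make Logstash pull statistics every five minutes instead:

      schedule => "*/5 * * * *"
      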

There is only one output, which accepts data from all three inputs and sends it to Elasticsearch, which is running locally and is reachable at http://localhost:9200. The index parameter defines which Elasticsearch index the data will be sent to, and its value is passed in from the type field of the input.

      When you are done with editing, save and close the file.

      You’ve configured Logstash to gather data from various PostgreSQL statistical tables and send them to Elasticsearch for storage and indexing. Next, you’ll run Logstash to test the configuration.

      Step 3 — Testing the Logstash Configuration

      In this section, you will test the configuration by running Logstash to verify it will properly pull the data. Then, you will make this configuration run in the background by configuring it as a Logstash pipeline.

      Logstash supports running a specific configuration by passing its file path to the -f parameter. Run the following command to test your new configuration from the last step:

      • sudo /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/postgresql.conf

      It may take some time before it shows any output, which will look similar to this:

      Output

Thread.exclusive is deprecated, use Thread::Mutex
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
[WARN ] 2019-08-02 18:29:15.123 [LogStash::Runner] multilocal - Ignoring the 'pipelines.yml' file because modules or command line options are specified
[INFO ] 2019-08-02 18:29:15.154 [LogStash::Runner] runner - Starting Logstash {"logstash.version"=>"7.3.0"}
[INFO ] 2019-08-02 18:29:18.209 [Converge PipelineAction::Create<main>] Reflections - Reflections took 77 ms to scan 1 urls, producing 19 keys and 39 values
[INFO ] 2019-08-02 18:29:20.195 [[main]-pipeline-manager] elasticsearch - Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}}
[WARN ] 2019-08-02 18:29:20.667 [[main]-pipeline-manager] elasticsearch - Restored connection to ES instance {:url=>"http://localhost:9200/"}
[INFO ] 2019-08-02 18:29:21.221 [[main]-pipeline-manager] elasticsearch - ES Output version determined {:es_version=>7}
[WARN ] 2019-08-02 18:29:21.230 [[main]-pipeline-manager] elasticsearch - Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[INFO ] 2019-08-02 18:29:21.274 [[main]-pipeline-manager] elasticsearch - New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["http://localhost:9200"]}
[INFO ] 2019-08-02 18:29:21.337 [[main]-pipeline-manager] elasticsearch - Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}}
[WARN ] 2019-08-02 18:29:21.369 [[main]-pipeline-manager] elasticsearch - Restored connection to ES instance {:url=>"http://localhost:9200/"}
[INFO ] 2019-08-02 18:29:21.386 [[main]-pipeline-manager] elasticsearch - ES Output version determined {:es_version=>7}
[WARN ] 2019-08-02 18:29:21.386 [[main]-pipeline-manager] elasticsearch - Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[INFO ] 2019-08-02 18:29:21.409 [[main]-pipeline-manager] elasticsearch - New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["http://localhost:9200"]}
[INFO ] 2019-08-02 18:29:21.430 [[main]-pipeline-manager] elasticsearch - Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}}
[WARN ] 2019-08-02 18:29:21.444 [[main]-pipeline-manager] elasticsearch - Restored connection to ES instance {:url=>"http://localhost:9200/"}
[INFO ] 2019-08-02 18:29:21.465 [[main]-pipeline-manager] elasticsearch - ES Output version determined {:es_version=>7}
[WARN ] 2019-08-02 18:29:21.466 [[main]-pipeline-manager] elasticsearch - Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[INFO ] 2019-08-02 18:29:21.468 [Ruby-0-Thread-7: :1] elasticsearch - Using default mapping template
[INFO ] 2019-08-02 18:29:21.538 [Ruby-0-Thread-5: :1] elasticsearch - Using default mapping template
[INFO ] 2019-08-02 18:29:21.545 [[main]-pipeline-manager] elasticsearch - New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["http://localhost:9200"]}
[INFO ] 2019-08-02 18:29:21.589 [Ruby-0-Thread-9: :1] elasticsearch - Using default mapping template
[INFO ] 2019-08-02 18:29:21.696 [Ruby-0-Thread-5: :1] elasticsearch - Attempting to install template {:manage_template=>{"index_patterns"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s", "number_of_shards"=>1}, "mappings"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}
[INFO ] 2019-08-02 18:29:21.769 [Ruby-0-Thread-7: :1] elasticsearch - Attempting to install template {:manage_template=>{"index_patterns"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s", "number_of_shards"=>1}, "mappings"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}
[INFO ] 2019-08-02 18:29:21.771 [Ruby-0-Thread-9: :1] elasticsearch - Attempting to install template {:manage_template=>{"index_patterns"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s", "number_of_shards"=>1}, "mappings"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}
[WARN ] 2019-08-02 18:29:21.871 [[main]-pipeline-manager] LazyDelegatingGauge - A gauge metric of an unknown type (org.jruby.specialized.RubyArrayOneObject) has been create for key: cluster_uuids. This may result in invalid serialization. It is recommended to log an issue to the responsible developer/development team.
[INFO ] 2019-08-02 18:29:21.878 [[main]-pipeline-manager] javapipeline - Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>1, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>125, :thread=>"#<Thread:0x470bf1ca run>"}
[INFO ] 2019-08-02 18:29:22.351 [[main]-pipeline-manager] javapipeline - Pipeline started {"pipeline.id"=>"main"}
[INFO ] 2019-08-02 18:29:22.721 [Ruby-0-Thread-1: /usr/share/logstash/lib/bootstrap/environment.rb:6] agent - Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[INFO ] 2019-08-02 18:29:23.798 [Api Webserver] agent - Successfully started Logstash API endpoint {:port=>9600}
/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/rufus-scheduler-3.0.9/lib/rufus/scheduler/cronline.rb:77: warning: constant ::Fixnum is deprecated
/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/rufus-scheduler-3.0.9/lib/rufus/scheduler/cronline.rb:77: warning: constant ::Fixnum is deprecated
/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/rufus-scheduler-3.0.9/lib/rufus/scheduler/cronline.rb:77: warning: constant ::Fixnum is deprecated
[INFO ] 2019-08-02 18:30:02.333 [Ruby-0-Thread-22: /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/rufus-scheduler-3.0.9/lib/rufus/scheduler/jobs.rb:284] jdbc - (0.042932s) SELECT * FROM pg_stat_user_indexes
[INFO ] 2019-08-02 18:30:02.340 [Ruby-0-Thread-23: /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/rufus-scheduler-3.0.9/lib/rufus/scheduler/jobs.rb:331] jdbc - (0.043178s) SELECT * FROM pg_stat_user_tables
[INFO ] 2019-08-02 18:30:02.340 [Ruby-0-Thread-24: :1] jdbc - (0.036469s) SELECT * FROM pg_stat_database
...

      If Logstash does not show any errors and logs that it has successfully SELECTed rows from the three databases, your database metrics will be shipped to Elasticsearch. If you get an error, double check all the values in the configuration file to ensure that the machine you’re running Logstash on can connect to the managed database.

      Logstash will continue importing the data at specified times. You can safely stop it by pressing CTRL+C.

      As previously mentioned, when started as a service, Logstash automatically runs all configuration files it finds under /etc/logstash/conf.d in the background. Run the following command to start it as a service:

      • sudo systemctl start logstash

      In this step, you ran Logstash to check if it can connect to your database and gather data. Next, you’ll visualize and explore some of the statistical data in Kibana.

      Step 4 — Exploring Imported Data in Kibana

      In this section, you’ll see how you can explore the statistical data describing your database’s performance in Kibana.

      In your browser, navigate to the Kibana installation you set up as a prerequisite. You’ll see the default welcome page.

      Kibana - Default Welcome Page

      To interact with Elasticsearch indexes in Kibana, you’ll need to create an index pattern. Index patterns specify on which indexes Kibana should operate. To create one, press on the last icon (wrench) from the left-hand vertical sidebar to open the Management page. Then, from the left menu, press on Index Patterns under Kibana. You’ll see a dialog box for creating an index pattern.

      Kibana - Add Index Pattern

      Listed are the three indexes where Logstash has been sending statistics. Type in pg_stat_database in the Index Pattern input box and then press Next step. You’ll be asked to select a field that stores time, so you’ll be able to later narrow your data by a time range. From the dropdown, select @timestamp.

      Kibana - Index Pattern Timestamp Field

      Press on Create index pattern to finish creating the index pattern. You’ll now be able to explore it using Kibana. To create a visualization, press on the second icon in the sidebar, and then on Create new visualization. Select the Line visualization when the form pops up, and then choose the index pattern you have just created (pg_stat_database). You’ll see an empty visualization.

      Kibana - Empty Visualisation

The central part of the screen shows the resulting plot, while the left-side panel governs its generation; there you can set the data for the X and Y axes. In the upper right-hand side of the screen is the date range picker. Unless you specifically choose another range when configuring the data, that range will be shown on the plot.

You’ll now visualize the average number of data tuples INSERTed per minute in the given interval. Press on Y-Axis under Metrics in the panel on the left to unfold it. Select Average as the Aggregation and select tup_inserted as the Field. This will populate the Y axis of the plot with the average values.

      Next, press on X-Axis under Buckets. For the Aggregation, choose Date Histogram. @timestamp should be automatically selected as the Field. Then, press on the blue play button on the top of the panel to generate your graph. If your database is brand new and not used, you won’t see anything yet. In all cases, however, you will see an accurate portrayal of database usage.

      Kibana supports many other visualization forms—you can explore other forms in the Kibana documentation. You can also add the two remaining indexes, mentioned in Step 2, into Kibana to be able to visualize them as well.

In this step, you have learned how to visualize some of the PostgreSQL statistical data using Kibana.

      Step 5 — (Optional) Benchmarking Using pgbench

      If you haven’t yet worked in your database outside of this tutorial, you can complete this step to create more interesting visualizations by using pgbench to benchmark your database. pgbench will run the same SQL commands over and over, simulating real-world database use by an actual client.

      You’ll first need to install pgbench by running the following command:

      • sudo apt install postgresql-contrib -y

      Because pgbench will insert and update test data, you’ll need to create a separate database for it. To do so, head over to the Users & Databases tab in the Control Panel of your managed database, and scroll down to the Databases section. Type in pgbench as the name of the new database, and then press on Save. You’ll pass this name, as well as the host, port, and username information to pgbench.

      Accessing Databases section in DO control panel

      Before actually running pgbench, you’ll need to run it with the -i flag to initialize its database:

      • pgbench -h host -p port -U username -i pgbench

      You’ll need to replace host with your host address, port with the port to which you can connect to your database, and username with the database user username. You can find all these values in the Control Panel of your managed database.

      Notice that pgbench does not have a password argument; instead, you’ll be asked for it every time you run it.

      The output will look like the following:

      Output

NOTICE: table "pgbench_history" does not exist, skipping
NOTICE: table "pgbench_tellers" does not exist, skipping
NOTICE: table "pgbench_accounts" does not exist, skipping
NOTICE: table "pgbench_branches" does not exist, skipping
creating tables...
100000 of 100000 tuples (100%) done (elapsed 0.16 s, remaining 0.00 s)
vacuum...
set primary keys...
done.

      pgbench created four tables, which it will use for benchmarking, and populated them with some example rows. You’ll now be able to run benchmarks.

The two most important arguments that control how long the benchmark will run are -t, which specifies the number of transactions to complete, and -T, which defines how many seconds the benchmark should run for. These two options are mutually exclusive. At the end of each benchmark, you'll receive statistics, such as the number of transactions per second (tps).

      Now, start a benchmark that will last for 30 seconds by running the following command:

      • pgbench -h host -p port -U username pgbench -T 30

      The output will look like:

      Output

starting vacuum...end.
transaction type: <builtin: TPC-B (sort of)>
scaling factor: 1
query mode: simple
number of clients: 1
number of threads: 1
duration: 30 s
number of transactions actually processed: 7602
latency average = 3.947 ms
tps = 253.382298 (including connections establishing)
tps = 253.535257 (excluding connections establishing)

      In this output, you see the general info about the benchmark, such as the total number of transactions executed. The effect of these benchmarks is that the statistics Logstash ships to Elasticsearch will reflect that number, which will in turn make visualizations in Kibana more interesting and closer to real-world graphs. You can run the preceding command a few more times, and possibly alter the duration.

      When you are done, head over to Kibana and press on Refresh in the upper right corner. You’ll now see a different line than before, which shows the number of INSERTs. Feel free to change the time range of the data shown by changing the values in the picker positioned above the refresh button. Here is how the graph may look after multiple benchmarks of varying duration:

      Kibana - Visualization After Benchmarks

      You’ve used pgbench to benchmark your database, and evaluated the resulting graphs in Kibana.

      Conclusion

You now have the Elastic Stack installed on your server and configured to pull statistics data from your managed PostgreSQL database on a regular basis. You can analyze and visualize the data using Kibana, or some other suitable software, which will help you gather valuable insights into how your database is performing and correlate that with real-world events.

      For more information about what you can do with your PostgreSQL Managed Database, visit the product docs.




      How to Benchmark the Performance of a Redis Server on Ubuntu 18.04


      Introduction

      Benchmarking is an important practice when it comes to analyzing the overall performance of database servers. It’s helpful for identifying bottlenecks as well as opportunities for improvement within those systems.

Redis is an in-memory data store that can be used as a database, cache, and message broker. It supports data structures ranging from the simple to the complex, including hashes, strings, sorted sets, bitmaps, and geospatial data, among other types. In this guide, we’ll demonstrate how to benchmark the performance of a Redis server running on Ubuntu 18.04, using a few different tools and methods.

      Prerequisites

      To follow this guide, you’ll need:

      Note: The commands demonstrated in this tutorial were executed on a dedicated Redis server running on a 4GB DigitalOcean Droplet.

Using the Built-in redis-benchmark Tool

      Redis comes with a benchmark tool called redis-benchmark. This program can be used to simulate an arbitrary number of clients connecting at the same time and performing actions on the server, measuring how long it takes for the requests to be completed. The resulting data will give you an idea of the average number of requests that your Redis server is able to handle per second.

      The following list details some of the common command options used with redis-benchmark:

      • -h: Redis host. Default is 127.0.0.1.
      • -p: Redis port. Default is 6379.
      • -a: If your server requires authentication, you can use this option to provide the password.
      • -c: Number of clients (parallel connections) to simulate. Default value is 50.
      • -n: How many requests to make. Default is 100000.
      • -d: Data size for SET and GET values, measured in bytes. Default is 3.
      • -t: Run only a subset of tests. For instance, you can use -t get,set to benchmark the performance of GET and SET commands.
      • -P: Use pipelining for performance improvements.
      • -q: Quiet mode, shows only the average requests per second information.

      For instance, if you want to check the average number of requests per second that your local Redis server can handle, you can use:
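
      • redis-benchmark -q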

      You will get output similar to this, but with different numbers:

      Output

PING_INLINE: 85178.88 requests per second
PING_BULK: 83056.48 requests per second
SET: 72202.16 requests per second
GET: 94607.38 requests per second
INCR: 84961.77 requests per second
LPUSH: 78988.94 requests per second
RPUSH: 88652.48 requests per second
LPOP: 87950.75 requests per second
RPOP: 80971.66 requests per second
SADD: 80192.46 requests per second
HSET: 84317.03 requests per second
SPOP: 78125.00 requests per second
LPUSH (needed to benchmark LRANGE): 84175.09 requests per second
LRANGE_100 (first 100 elements): 52383.45 requests per second
LRANGE_300 (first 300 elements): 21547.08 requests per second
LRANGE_500 (first 450 elements): 14471.78 requests per second
LRANGE_600 (first 600 elements): 9383.50 requests per second
MSET (10 keys): 71225.07 requests per second

      You can also limit the tests to a subset of commands of your choice using the -t parameter. The following command shows the averages for the GET and SET commands only:

      • redis-benchmark -t set,get -q

      Output

SET: 76687.12 requests per second
GET: 82576.38 requests per second

      The default options will use 50 parallel connections to create 100000 requests to the Redis server. If you want to increase the number of parallel connections to simulate a peak in usage, you can use the -c option for that:

      • redis-benchmark -t set,get -q -c 1000

      Because this will use 1000 concurrent connections instead of the default 50, you should expect a decrease in performance:

      Output

SET: 69444.45 requests per second
GET: 70821.53 requests per second

      If you want detailed information in the output, you can remove the -q option. The following command will use 100 parallel connections to run 1000000 SET requests on the server:

      • redis-benchmark -t set -c 100 -n 1000000

      You will get output similar to this:

      Output

====== SET ======
  1000000 requests completed in 11.29 seconds
  100 parallel clients
  3 bytes payload
  keep alive: 1

95.22% <= 1 milliseconds
98.97% <= 2 milliseconds
99.86% <= 3 milliseconds
99.95% <= 4 milliseconds
99.99% <= 5 milliseconds
99.99% <= 6 milliseconds
100.00% <= 7 milliseconds
100.00% <= 8 milliseconds
88605.35 requests per second

The default settings use a 3-byte payload for each value. You can change this with the -d option. The following command will benchmark the GET and SET commands using 1MB values:

      • redis-benchmark -t set,get -d 1000000 -n 1000 -q

Because the server is working with a much bigger payload this time, a significant decrease in performance is expected:

      Output

SET: 1642.04 requests per second
GET: 822.37 requests per second

It is important to realize that even though these numbers are useful as a quick way to evaluate the performance of a Redis instance, they don't represent the maximum throughput a Redis instance can sustain. By using pipelining, applications can send multiple commands at once in order to improve the number of requests per second the server can handle. With redis-benchmark, you can use the -P option to simulate real-world applications that make use of this Redis feature.

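To get a feel for what pipelining looks like at the protocol level before running the comparison, you can send several inline commands to the server in a single write, a technique shown in the official Redis pipelining documentation. This sketch assumes nc (netcat) is installed and that your local server doesn't require authentication; it sends three PING commands at once and reads back three +PONG replies:

• (printf "PING\r\nPING\r\nPING\r\n"; sleep 1) | nc localhost 6379
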
      To compare the difference, first run the redis-benchmark command with default values and no pipelining, for the GET and SET tests:

      • redis-benchmark -t get,set -q

      Output

SET: 86281.27 requests per second
GET: 89847.26 requests per second

      The next command will run the same tests, but will pipeline 8 commands together:

      • redis-benchmark -t get,set -q -P 8

      Output

SET: 653594.81 requests per second
GET: 793650.75 requests per second

      As you can see from the output, there is a substantial performance improvement with the use of pipelining.

      Checking Latency with redis-cli

If you'd like a simple measurement of the average time a request takes to receive a response, you can use the Redis client to check the average server latency. In the context of Redis, latency is a measure of how long it takes for a ping command to receive a response from the server.

The following command will show real-time latency stats for your Redis server:

• redis-cli --latency

      You'll get output similar to this, showing an increasing number of samples and a variable average latency:

      Output

      min: 0, max: 1, avg: 0.18 (970 samples)

This command will keep running indefinitely. You can stop it with CTRL+C.

      To monitor latency over a certain period of time, you can use:

      • redis-cli --latency-history

      This will track latency averages over time, with a configurable interval that is set to 15 seconds by default. You will get output similar to this:

      Output

min: 0, max: 1, avg: 0.18 (1449 samples) -- 15.01 seconds range
min: 0, max: 1, avg: 0.16 (1449 samples) -- 15.00 seconds range
min: 0, max: 1, avg: 0.17 (1449 samples) -- 15.00 seconds range
min: 0, max: 1, avg: 0.17 (1444 samples) -- 15.01 seconds range
min: 0, max: 1, avg: 0.17 (1446 samples) -- 15.01 seconds range
min: 0, max: 1, avg: 0.17 (1449 samples) -- 15.00 seconds range
min: 0, max: 1, avg: 0.16 (1444 samples) -- 15.00 seconds range
min: 0, max: 1, avg: 0.17 (1445 samples) -- 15.01 seconds range
min: 0, max: 1, avg: 0.16 (1445 samples) -- 15.01 seconds range
...

Because the Redis server in our example is idle, there's not much variation between latency samples. If you have a peak in usage, however, this should be reflected as an increase in latency within the results.
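
The 15-second interval is not fixed: if you'd rather sample over a different window, you can change it with the -i flag. For example, to track latency in 5-second ranges:

• redis-cli -i 5 --latency-history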

If you'd like to measure the system latency only, you can use --intrinsic-latency. The intrinsic latency is inherent to the environment, depending on the hardware, the kernel, and server neighbors, among other factors that aren't controlled by Redis.

You can think of the intrinsic latency as a baseline for your overall Redis performance. The following command will check the intrinsic system latency, running a test for 30 seconds:

      • redis-cli --intrinsic-latency 30

      You should get output similar to this:

      Output

…
498723744 total runs (avg latency: 0.0602 microseconds / 60.15 nanoseconds per run).
Worst run took 22975x longer than the average latency.

Comparing both latency tests can be helpful for identifying hardware or system bottlenecks that could affect the performance of your Redis server. Mind the units when doing so: redis-cli --latency reports averages in milliseconds, while --intrinsic-latency reports microseconds. In our example, the average total request latency of 0.18 milliseconds (180 microseconds) is several orders of magnitude larger than the intrinsic latency of roughly 0.06 microseconds, which suggests that the environment itself is not a meaningful bottleneck for this server.

Benchmarking with memtier_benchmark

Memtier is a high-throughput benchmark tool for Redis and Memcached created by Redis Labs. Although very similar to redis-benchmark in various aspects, Memtier has several configuration options that can be tuned to better emulate the kind of load you might expect on your Redis server, in addition to offering cluster support.

      To get Memtier installed on your server, you'll need to compile the software from source. First, install the dependencies necessary to compile the code:

      • sudo apt-get install build-essential autoconf automake libpcre3-dev libevent-dev pkg-config zlib1g-dev

Next, go to your home directory and clone the memtier_benchmark project from its GitHub repository:

      • cd
      • git clone https://github.com/RedisLabs/memtier_benchmark.git

      Navigate to the project directory and run the autoreconf command to generate the application configuration scripts:

      • cd memtier_benchmark
      • autoreconf -ivf

Run the configure script in order to generate the application artifacts required for compiling:

• ./configure

Now run make to compile the application:

• make

      Once the build is finished, you can test the executable with:

      • ./memtier_benchmark --version

      This will give you the following output:

      Output

memtier_benchmark 1.2.17
Copyright (C) 2011-2017 Redis Labs Ltd.
This is free software.  You may redistribute copies of it under the terms of
the GNU General Public License <http://www.gnu.org/licenses/gpl.html>.
There is NO WARRANTY, to the extent permitted by law.

The following list contains some of the most common options used with the memtier_benchmark command; a combined example follows the list:

      • -s: Server host. Default is localhost.
      • -p: Server port. Default is 6379.
      • -a: Authenticate requests using the provided password.
      • -n: Number of requests per client (default is 10000).
      • -c: Number of clients (default is 50).
      • -t: Number of threads (default is 4).
      • --pipeline: Enable pipelining.
      • --ratio: Ratio between SET and GET commands, default is 1:10.
      • --hide-histogram: Hides detailed output information.

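As a hedged illustration (all numbers here are arbitrary), these options can be combined to shape the simulated load. The command below would use 8 threads with 25 clients each (a total of 200 connections), issuing 20000 requests per client:

• ./memtier_benchmark -s 127.0.0.1 -p 6379 -t 8 -c 25 -n 20000 --hide-histogram
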
Most of these options are very similar to those of redis-benchmark, but Memtier tests performance in a different way. To better simulate common real-world environments, the default benchmark performed by memtier_benchmark tests GET and SET requests only, at a ratio of 1 to 10. With 10 GET operations for each SET operation in the test, this arrangement is more representative of a common web application using Redis as a database or cache. You can adjust the ratio value with the --ratio option.

      The following command runs memtier_benchmark with default settings, while providing only high-level output information:

      • ./memtier_benchmark --hide-histogram

      Note: if you have configured your Redis server to require authentication, you should provide the -a option along with your Redis password to the memtier_benchmark command:

      • ./memtier_benchmark --hide-histogram -a your_redis_password

      You'll see output similar to this:

      Output

...
4         Threads
50        Connections per thread
10000     Requests per client


ALL STATS
=========================================================================
Type         Ops/sec     Hits/sec   Misses/sec      Latency       KB/sec
-------------------------------------------------------------------------
Sets         8258.50          ---          ---      2.19800       636.05
Gets        82494.28     41483.10     41011.18      2.19800      4590.88
Waits           0.00          ---          ---      0.00000         ---
Totals      90752.78     41483.10     41011.18      2.19800      5226.93

According to this run of memtier_benchmark, our Redis server can execute about 90 thousand operations per second at a 1:10 SET/GET ratio.
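
If your application is more write-heavy than this default assumes, you could rerun the test with a balanced ratio of one GET per SET, for instance:

• ./memtier_benchmark --ratio=1:1 --hide-histogram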

      It's important to note that each benchmark tool has its own algorithm for performance testing and data presentation. For that reason, it's normal to have slightly different results on the same server, even when using similar settings.

      Conclusion

In this guide, we demonstrated how to perform benchmark tests on a Redis server using two distinct tools: the included redis-benchmark, and the memtier_benchmark tool developed by Redis Labs. We also saw how to check the server's latency using redis-cli. Based on the data obtained from these tests, you'll have a better understanding of what to expect from your Redis server in terms of performance, and where the bottlenecks in your current setup lie.


