      How To Migrate Redis Data to a DigitalOcean Managed Database


      Introduction

      There are a number of methods you can use to migrate data from one Redis instance to another, such as replication or snapshotting. However, migrations can get more complicated when you’re moving data to a Redis instance managed by a cloud provider, as managed databases often limit how much control you have over the database’s configuration.

      This tutorial outlines one method you can use to migrate data to a Redis instance managed by DigitalOcean. The method uses Redis’s internal migrate command to securely pass data through a TLS tunnel configured with stunnel. This guide will also go over a few other commonly-used migration strategies and why they’re problematic when migrating to a DigitalOcean Managed Database.

      Prerequisites

      To complete this tutorial, you will need:

      Note: To help keep things clear, this guide will refer to the Redis instance hosted on your Ubuntu server as the “source.” Likewise, it will refer to the instance managed by DigitalOcean as either the “target” or the “Managed Database.”

      Things To Consider When Migrating Redis Data to a Managed Database

      There are several methods you can employ to migrate data from one Redis instance to another. However, some of these approaches present problems when you’re migrating data to a Redis instance managed by DigitalOcean.

      For example, you can use replication to turn your target Redis instance into an exact copy of the source. To do this, you would connect to the target Redis server and run the replicaof command with the following syntax:

      • replicaof source_hostname_or_ip source_port

      This will cause the target instance to replicate all the data held on the source without destroying any data that was previously stored on it. Following this, you would promote the replica back to being a primary instance with the following command:
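
      • replicaof no one

      Running replicaof with the no one arguments is the standard way of telling a Redis replica to stop replicating and accept writes again.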

      However, Redis instances managed by DigitalOcean are configured to only become read-only replicas. If you have clients writing data to the source database, you won’t be able to configure them to write to the managed instance while it’s replicating data. This means you would lose any data sent by the clients after you promote the managed instance from being a replica and before you configure the clients to begin writing data to it, making replication a suboptimal migration solution.

      Another method for migrating Redis data is to take a snapshot of the data held on your source instance with either Redis’s save or bgsave commands. Both of these commands export the snapshot to a file ending in .rdb, which you would then transfer to the target server. Following that, you’d restart the Redis service so it can load the data.

      However, many managed database providers — including DigitalOcean — don’t allow you to access the managed database server’s underlying file system. This means there’s no way to upload the snapshot file, or to make the necessary changes to the target database’s configuration file to allow Redis to import the data.

      Because the configuration of DigitalOcean’s Managed Databases limits the efficacy of both replication and snapshotting as means of migrating data, this tutorial will instead use Redis’s migrate command to move data from the source to the target. The migrate command is designed to move only one key at a time, but we will use some handy command line tricks to move an entire Redis database with a single command.

      Step 1 — (Optional) Loading Your Source Redis Instance with Sample Data

      This optional step involves loading your source Redis instance with some sample data so you can experiment with migrating data to your Managed Redis Database. If you already have data that you want to migrate over to your target instance, you can move ahead to Step 2.

      To begin, run the following command to access your Redis server:

      If you’ve configured your Redis server to require password authentication, run the auth command followed by your Redis password:
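
      Assuming a default local installation, these two commands would look something like this, with your_redis_password standing in for your actual password:

      • redis-cli
      • auth your_redis_password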

      Then run the following commands. These will create a number of keys holding a few strings, a hash, a list, and a set:

      • mset string1 "Redis" string2 "is" string3 "fun!"
      • hmset hash1 field1 "Redis" field2 "is" field3 "fast!"
      • rpush list1 "Redis" "is" "feature-rich!"
      • sadd set1 "Redis" "is" "free!"

      Additionally, run the following expire commands to provide a few of these keys with a timeout. This will make them volatile, meaning that Redis will delete them after the specified amount of time, 7500 seconds:

      • expire string2 7500
      • expire hash1 7500
      • expire set1 7500

      With that, you have some example data you can export to your target Redis instance. You can keep the redis-cli prompt open for now, since we will run a few more commands from it in the next step in order to back up this data.

      Step 2 — Backing Up Your Data

      Previously, we discussed using Redis’s bgsave command to take a snapshot of a Redis database and migrate it to another instance. While we won’t use bgsave as a means of migrating Redis data, we will use it here to back up the data in case we encounter an error during the migration process.

      If you don’t already have it open, start by opening up the Redis command line interface:

      Also, if you’ve configured your Redis server to require password authentication, run the auth command followed by your Redis password:

      Next, run the bgsave command. This will create a snapshot of your current data set and export it to a dump file whose name ends in .rdb:
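
      • bgsave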

      Note: As mentioned in the previous Things To Consider section, you can take a snapshot of your Redis database with either the save or bgsave commands. The reason we use the bgsave command here is that the save command runs synchronously, meaning it will block any other clients connected to the database. Because of this, the save command documentation recommends that this command should almost never be run in a production environment.

      Instead, it suggests using the bgsave command which runs asynchronously. This will cause Redis to fork the database into two processes: the parent process will continue to serve clients while the child saves the database before exiting:

      Note that if clients add or modify data while the bgsave operation is running or after it finishes, these changes won’t be captured in the snapshot.

      Following that, you can close the connection to your Redis instance by running the exit command:
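
      • exit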

      If you need it in the future, you can find this dump file in your Redis installation’s working directory. If you’re not sure which directory this is, you can check by opening up your Redis configuration file with your preferred text editor. Here, we’ll use nano:

      • sudo nano /etc/redis/redis.conf

      Navigate to the line that begins with dbfilename. It will look like this by default:

      /etc/redis/redis.conf

      . . .
      # The filename where to dump the DB
      dbfilename dump.rdb
      . . .
      

      This directive defines the file to which Redis will export snapshots. The next line (after any comments) will look like this:

      /etc/redis/redis.conf

      . . .
      dir /var/lib/redis
      . . .
      

      The dir directive defines Redis’s working directory where any Redis snapshots are stored. By default, this is set to /var/lib/redis, as shown in this example.

      Close the redis.conf file. Assuming you didn’t make any changes to the file, you can do so by pressing CTRL+X.

      Then, list the contents of your Redis working directory to confirm that it’s holding the exported data dump file:
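
      Assuming your working directory is the default /var/lib/redis shown above, you can list it with:

      • sudo ls /var/lib/redis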

      If the dump file was exported correctly, you will see it in this command’s output:

      Output

      dump.rdb

      Once you’ve confirmed that you successfully backed up your data, you can begin the process of migrating it to your Managed Database.

      Step 3 — Migrating the Data

      Recall that this guide uses Redis’s internal migrate command to move keys one by one from the source database to the target. However, unlike the previous steps in this tutorial, we won’t run this command from the redis-cli prompt. Instead, we’ll run it directly from the server’s bash prompt. Doing so will allow us to use a few bash tricks to migrate all the keys on the source database with one command.

      Note: If you have clients writing data to your source Redis instance, now would be a good time to configure them to also write data to your Managed Database. This way, you can migrate the existing data from the source to your target without losing any writes that occur after the migration.

      Also, be aware that this migration command will not replace any existing keys on the target database unless one of the existing keys has the same name as a key you’re migrating.

      The migration will occur after running the following command. Before running it, though, we will break it down piece by piece:

      • redis-cli -n source_database -a source_password scan 0 | while read key; do redis-cli -n source_database -a source_password MIGRATE localhost 8000 "$key" target_database 1000 COPY AUTH managed_redis_password; done

      Let’s look at each part of this command separately:

      • redis-cli -n source_database -a source_password scan 0 . . .

      The first part of the command, redis-cli, opens a connection to the local Redis server. The -n flag specifies which of Redis’s logical databases to connect to. Redis has 16 databases out of the box (with the first being numbered 0, the second numbered 1, and so on), so source_database can be any number between 0 and 15. If your source instance only holds data on the default database (numbered 0), then you do not need to include the -n flag or specify a database number.

      Next, comes the -a flag and the source instance’s password, which together authenticate the connection. If your source instance does not require password authentication, then you do not need to include the -a flag.

      It then runs Redis’s scan command, which iterates over the keys held in the data set and returns them as a list. scan requires that you follow it with a cursor — the iteration begins when the cursor is set to 0, and terminates when the server returns a 0 cursor. Hence, we follow scan with a cursor of 0 so as to iterate over every key in the set.

      • . . . | while read key; do . . .

      The next part of the command begins with a vertical bar (|). In Unix-like systems, vertical bars are known as pipes and are used to direct the output of one process to the input of another.

      Following this is the start of a while loop. In bash, as well as in most programming languages, a while loop is a control flow statement that lets you repeat a certain process, code, or command as long as a certain condition remains true.

      The condition in this case is the sub-command read key, which reads the piped input and assigns it to the variable key. The semicolon (;) signifies the end of the while loop’s conditional statement, and the do following it precedes the action to be repeated as long as the while expression remains true. Every time the do statement completes, the conditional statement will read the next line piped from the scan command and assign that input to the key variable.

      Essentially, this section says “as long as there is output from the scan command to be read, perform the following action.”

      • . . . redis-cli -n source_database -a source_password migrate localhost 8000 "$key" . . .

      This section of the command is what performs the actual migration. After another redis-cli call, it once again specifies the source database number with the -n flag and authenticates with the -a flag. You have to include these again because this redis-cli call is distinct from the one at the start of the command. Again, though, you do not need to include the -n flag or database number if your source Redis instance only holds data in the default 0 database, and you don’t need to include the -a flag if it doesn’t require password authentication.

      Following this is the migrate command. Any time you use the migrate command, you must follow it with the target database’s hostname or IP address and its port number. Here, we follow the convention established in the prerequisite stunnel tutorial and point the migrate command to localhost at port 8000.

      $key is the variable defined in the first part of the while loop, and represents the keys from each line of the scan command’s output.

      • . . . target_database 1000 copy auth managed_redis_password; done

      This section is a continuation of the migrate command. It begins with target_database, which represents the logical database on the target instance where you want to store the data. Again, this can be any number from 0 to 15.

      Next is a number representing a timeout. This timeout is the maximum amount of idle communication time between the two machines. Note that this isn’t a time limit for the operation, just that the operation should always make some level of progress within the defined timeout. Both the database number and timeout arguments are required for every migrate command.

      Following the timeout is the optional copy flag. By default, migrate will delete each key from the source database after transferring it to the target; by including this option, though, you’re instructing the migrate command to merely copy the keys so they will persist on the source.

      After copy comes the auth flag followed by your Managed Redis Database’s password. This isn’t necessary if you’re migrating data to an instance that doesn’t require authentication, but it is necessary when you’re migrating data to one managed by DigitalOcean.

      Following this is another semicolon, indicating the end of the action to be performed as long as the while condition holds true. Finally, the command closes with done, indicating the end of the loop. The command checks the condition in the while statement and repeats the action in the do statement until it’s no longer true.

      All together, this command performs the following steps:

      • Scan a database on the source Redis instance and return every key held within it
      • Pass each line of the scan command’s output into a while loop
      • Read the first line and assign its content to the key variable
      • Migrate any key in the source database that matches the key variable to a database on the Redis instance at the other end of the TLS tunnel held on localhost at port 8000
      • Go back and read the next line, and repeat the process until there are no more keys to read

      Now that we’ve gone over each part of the migration command, you can go ahead and run it.

      If your source instance only has data on the default 0 database, you do not need to include either of the -n flags or their arguments. If, however, you’re migrating data from any database other than 0 on your source instance, you must include the -n flags and change both occurrences of source_database to align with the database you want to migrate.

      If your source database requires password authentication, be sure to change source_password to the Redis instance’s actual password. If it doesn’t, though, make sure that you remove both occurrences of -a source_password from the command. Also, change managed_redis_password to your own Managed Database’s password and be sure to change target_database to the number of whichever logical database on your target instance you want to write the data to:

      Note: If you don’t have your Managed Redis Database’s password on hand, you can find it by first navigating to the DigitalOcean Control Panel. From there, click on Databases in the left-hand sidebar menu and then click on the name of the Redis instance to which you want to migrate the data. Scroll down to the Connection Details section where you’ll find a field labeled password. Click on the show button to reveal the password, then copy and paste it into the migration command — replacing managed_redis_password — in order to authenticate.

      • redis-cli -n source_database -a source_password scan 0 | while read key; do redis-cli -n source_database -a source_password MIGRATE localhost 8000 "$key" target_database 1000 COPY AUTH managed_redis_password; done

      You will see output similar to the following:

      Output

      NOKEY
      OK
      OK
      OK
      OK
      OK
      OK

      Note: Notice the first line of the command’s output which reads NOKEY. To understand what this means, run the first part of the migration command by itself:

      • redis-cli -n source_database -a source_password scan 0

      If you migrated the sample data added in Step 1, this command’s output will look like this:

      Output

      1) "0" 2) 1) "hash1" 2) "string3" 3) "list1" 4) "string1" 5) "string2" 6) "set1"

      The value "0" held in the first line is not a key held in your source Redis database, but a cursor returned by the scan command. Since there aren’t any keys on the server named “0”, there’s nothing there for the migrate command to send to your target instance and it returns NOKEY.

      However, the command doesn’t fail and exit. Instead, it continues on by reading and migrating the keys found in the next lines of the scan command’s output.

      To test whether the migration was successful, connect to your Managed Redis Database:

      • redis-cli -h localhost -p 8000 -a managed_redis_password

      If you migrated data to any logical database other than the default, connect to that database with the select command:
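
      • select target_database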

      Run a scan command to see what keys are held there:
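
      • scan 0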

      If you completed Step 1 of this tutorial and added the example data to your source database, you will see output like this:

      Output

      1) "0" 2) 1) "set1" 2) "string2" 3) "hash1" 4) "list1" 5) "string3" 6) "string1"

      Lastly, run a ttl command on any key which you’ve set to expire in order to confirm that it is still volatile:
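
      For example, using one of the keys you set to expire in Step 1:

      • ttl string2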

      Output

      (integer) 3944

      This output shows that even though you migrated the key to your Managed Database, it is still set to expire based on the expire command you ran previously.

      Once you’ve confirmed that all the keys on your source Redis database were exported to your target successfully, you can close your connection to the Managed Database. If you have clients writing data to the source Redis instance and you’ve already configured them to send their writes to the target, you can at this point configure them to stop sending data to the source.

      Conclusion

      By completing this tutorial, you will have moved data from your self-managed Redis data store to a Redis instance managed by DigitalOcean. The process outlined in this guide may not be optimal in every case. For example, you’d have to run the migration command multiple times (once for every logical database holding data) if your source instance is using databases other than the default one. However, when compared to other methods like replication or snapshotting, it is a fairly straightforward process that works well with a DigitalOcean Managed Database’s configuration.

      Now that you’re using a DigitalOcean Managed Redis Database to store your data, you could measure its performance by running some benchmarking tests. Also, if you’re new to working with Redis, you could check out our series on How To Manage a Redis Database.




      How to Set Up a Scalable Laravel 6 Application using Managed Databases and Object Storage


      Introduction

      When scaling web applications horizontally, the first difficulties you’ll typically face are dealing with file storage and data persistence. This is mainly due to the fact that it is hard to maintain consistency of variable data between multiple application nodes; appropriate strategies must be in place to make sure data created in one node is immediately available to other nodes in a cluster.

      A practical way of solving the consistency problem is by using managed databases and object storage systems. The former outsources data persistence to a managed database, while the latter provides a remote storage service where you can keep static files and variable content such as images uploaded by users. Each node can then connect to these services at the application level.

      The following diagram demonstrates how such a setup can be used for horizontal scalability in the context of PHP applications:

      Laravel at scale diagram

      In this guide, we will update an existing Laravel 6 application to prepare it for horizontal scalability by connecting it to a managed MySQL database and setting up an S3-compatible object store to save user-generated files. By the end, you will have a travel list application running on an Nginx + PHP-FPM web server:

      Travellist v1.0

      Note: this guide uses DigitalOcean Managed MySQL and Spaces to demonstrate a scalable application setup using managed databases and object storage. The instructions contained here should work in a similar way for other service providers.

      Prerequisites

      To begin this tutorial, you will first need the following prerequisites:

      • Access to an Ubuntu 18.04 server as a non-root user with sudo privileges, and an active firewall installed on your server. To set these up, please refer to our Initial Server Setup Guide for Ubuntu 18.04.
      • Nginx and PHP-FPM installed and configured on your server, as explained in steps 1 and 3 of How to Install LEMP on Ubuntu 18.04. You should skip the step where MySQL is installed.
      • Composer installed on your server, as explained in steps 1 and 2 of How to Install and Use Composer on Ubuntu 18.04.
      • Admin credentials to a managed MySQL 8 database. For this guide, we’ll be using a DigitalOcean Managed MySQL cluster, but the instructions here should work similarly for other managed database services.
      • A set of API keys with read and write permissions to an S3-compatible object storage service. In this guide, we’ll use DigitalOcean Spaces, but you are free to use a provider of your choice.
      • The s3cmd tool installed and configured to connect to your object storage drive. For instructions on how to set this up for DigitalOcean Spaces, please refer to our product documentation.

      Step 1 — Installing the MySQL 8 Client

      The default Ubuntu apt repositories come with the MySQL 5 client, which is not compatible with the MySQL 8 server we’ll be using in this guide. To install the compatible MySQL client, we’ll need to use the MySQL APT Repository provided by Oracle.

      Begin by navigating to the MySQL APT Repository page in your web browser. Find the Download button in the lower-right corner and click through to the next page. This page will prompt you to log in or sign up for an Oracle web account. You can skip that and instead look for the link that says No thanks, just start my download. Copy the link address and go back to your terminal window.

      This link should point to a .deb package that will set up the MySQL APT Repository in your server. After installing it, you’ll be able to use apt to install the latest releases of MySQL. We’ll use curl to download this file into a temporary location.

      Go to your server’s tmp folder:
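
      • cd /tmp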

      Now download the package with curl and using the URL you copied from the MySQL APT Repository page:

      • curl -OL https://dev.mysql.com/get/mysql-apt-config_0.8.13-1_all.deb

      After the download is finished, you can use dpkg to install the package:

      • sudo dpkg -i mysql-apt-config_0.8.13-1_all.deb

      You will be presented with a screen where you can choose which MySQL version you’d like to select as default, as well as which MySQL components you’re interested in:

      MySQL APT Repository Install

      You don’t need to change anything here, because the default options will install the repositories we need. Select “Ok” and the configuration will be finished.

      Next, you’ll need to update your apt cache with:
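
      • sudo apt update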

      Now we can finally install the MySQL 8 client with:

      • sudo apt install mysql-client

      Once that command finishes, check the software version number to ensure that you have the latest release:
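
      • mysql --version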

      You’ll see output like this:

      Output

      mysql Ver 8.0.18 for Linux on x86_64 (MySQL Community Server - GPL)

      In the next step, we’ll use the MySQL client to connect to your managed MySQL server and prepare the database for the application.

      Step 2 — Creating a new MySQL User and Database

      At the time of this writing, the native MySQL PHP library mysqlnd doesn’t support caching_sha2_password, the default authentication method for MySQL 8. We’ll need to create a new user with the mysql_native_password authentication method in order to be able to connect our Laravel application to the MySQL 8 server. We’ll also create a dedicated database for our demo application.

      To get started, log into your server using an admin account. Replace the highlighted values with your own MySQL user, host, and port:

      • mysql -u MYSQL_USER -p -h MYSQL_HOST -P MYSQL_PORT

      When prompted, provide the admin user’s password. After logging in, you will have access to the MySQL 8 server command line interface.

      First, we’ll create a new database for the application. Run the following command to create a new database named travellist:

      • CREATE DATABASE travellist;

      Next, we’ll create a new user and set a password, using mysql_native_password as default authentication method for this user. You are encouraged to replace the highlighted values with values of your own, and to use a strong password:

      • CREATE USER 'travellist-user'@'%' IDENTIFIED WITH mysql_native_password BY 'MYSQL_PASSWORD';

      Now we need to give this user permission over our application database:

      • GRANT ALL ON travellist.* TO 'travellist-user'@'%';

      You can now exit the MySQL prompt with:
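
      • exit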

      You now have a dedicated database and a compatible user to connect from your Laravel application. In the next step, we’ll get the application code and set up configuration details, so your app can connect to your managed MySQL database.

      In this guide, we’ll use Laravel Migrations and database seeds to set up our application tables. If you need to migrate an existing local database to a DigitalOcean Managed MySQL database, please refer to our documentation on How to Import MySQL Databases into DigitalOcean Managed Databases.

      Step 3 — Setting Up the Demo Application

      To get started, we’ll fetch the demo Laravel application from its Github repository. Feel free to inspect the contents of the application before running the next commands.

      The demo application is a travel bucket list app that was initially developed in our guide on How to Install and Configure Laravel with LEMP on Ubuntu 18.04. The updated app now contains visual improvements including travel photos that can be uploaded by a visitor, and a world map. It also introduces a database migration script and database seeds to create the application tables and populate them with sample data, using artisan commands.

      To obtain the application code that is compatible with this tutorial, we’ll download the 1.1 release from the project’s repository on Github. We’ll save the downloaded zip file as travellist.zip inside our home directory:

      • cd ~
      • curl -L https://github.com/do-community/travellist-laravel-demo/archive/1.1.zip -o travellist.zip

      Now, unzip the contents of the application and rename its directory with:

      • unzip travellist.zip
      • mv travellist-laravel-demo-1.1 travellist

      Navigate to the travellist directory:
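
      • cd travellist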

      Before going ahead, we’ll need to install a few PHP modules that are required by the Laravel framework, namely php-xml, php-mbstring, and php-bcmath. To install these packages, run:

      • sudo apt install unzip php-xml php-mbstring php-bcmath

      To install the application dependencies, run:
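
      • composer install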

      You will see output similar to this:

      Output

      Loading composer repositories with package information
      Installing dependencies (including require-dev) from lock file
      Package operations: 80 installs, 0 updates, 0 removals
        - Installing doctrine/inflector (v1.3.0): Downloading (100%)
        - Installing doctrine/lexer (1.1.0): Downloading (100%)
        - Installing dragonmantank/cron-expression (v2.3.0): Downloading (100%)
        - Installing erusev/parsedown (1.7.3): Downloading (100%)
      ...
      Generating optimized autoload files
      > Illuminate\Foundation\ComposerScripts::postAutoloadDump
      > @php artisan package:discover --ansi
      Discovered Package: beyondcode/laravel-dump-server
      Discovered Package: fideloper/proxy
      Discovered Package: laravel/tinker
      Discovered Package: nesbot/carbon
      Discovered Package: nunomaduro/collision
      Package manifest generated successfully.

      The application dependencies are now installed. Next, we’ll configure the application to connect to the managed MySQL Database.

      Creating the .env configuration file and setting the App Key

      We’ll now create a .env file containing variables that will be used to configure the Laravel application on a per-environment basis. The application includes an example file that we can copy and then modify to reflect our environment settings.

      Copy the .env.example file to a new file named .env:
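
      • cp .env.example .env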

      Now we need to set the application key. This key is used to encrypt session data, and should be set to a unique, 32-character-long string. We can generate this key automatically with the artisan tool:
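
      • php artisan key:generate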

      Let’s edit the environment configuration file to set up the database details. Open the .env file using your command line editor of choice. Here, we will be using nano:
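
      • nano .env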

      Look for the database credentials section. The following variables need your attention:

      DB_HOST: your managed MySQL server host.
      DB_PORT: your managed MySQL server port.
      DB_DATABASE: the name of the application database we created in Step 2.
      DB_USERNAME: the database user we created in Step 2.
      DB_PASSWORD: the password for the database user we defined in Step 2.

      Update the highlighted values with your own managed MySQL info and credentials:

      ...
      DB_CONNECTION=mysql
      DB_HOST=MANAGED_MYSQL_HOST
      DB_PORT=MANAGED_MYSQL_PORT
      DB_DATABASE=MANAGED_MYSQL_DB
      DB_USERNAME=MANAGED_MYSQL_USER
      DB_PASSWORD=MANAGED_MYSQL_PASSWORD
      ...
      

      Save and close the file by typing CTRL+X then Y and ENTER when you’re done editing.

      Now that the application is configured to connect to the MySQL database, we can use Laravel’s command line tool artisan to create the database tables and populate them with sample data.

      Migrating and populating the database

      We’ll now use Laravel Migrations and database seeds to set up the application tables. This will help us determine if our database configuration works as expected.

      To execute the migration script that will create the tables used by the application, run:
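
      • php artisan migrate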

      You will see output similar to this:

      Output

      Migration table created successfully.
      Migrating: 2019_09_19_123737_create_places_table
      Migrated:  2019_09_19_123737_create_places_table (0.26 seconds)
      Migrating: 2019_10_14_124700_create_photos_table
      Migrated:  2019_10_14_124700_create_photos_table (0.42 seconds)

      To populate the database with sample data, run:
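
      • php artisan db:seed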

      You will see output like this:

      Output

      Seeding: PlacesTableSeeder
      Seeded:  PlacesTableSeeder (0.86 seconds)
      Database seeding completed successfully.

      The application tables are now created and populated with sample data.

      To finish the application setup, we also need to create a symbolic link to the public storage folder that will host the travel photos we’re using in the application. You can do that using the artisan tool:
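
      • php artisan storage:link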

      Output

      The [public/storage] directory has been linked.

      This will create a symbolic link inside the public directory pointing to storage/app/public, where we’ll save the travel photos. To check that the link was created and where it points to, you can run:
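
      • ls -la public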

      You’ll see output like this:

      Output

      total 36
      drwxrwxr-x  5 sammy sammy 4096 Oct 25 14:59 .
      drwxrwxr-x 12 sammy sammy 4096 Oct 25 14:58 ..
      -rw-rw-r--  1 sammy sammy  593 Oct 25 06:29 .htaccess
      drwxrwxr-x  2 sammy sammy 4096 Oct 25 06:29 css
      -rw-rw-r--  1 sammy sammy    0 Oct 25 06:29 favicon.ico
      drwxrwxr-x  2 sammy sammy 4096 Oct 25 06:29 img
      -rw-rw-r--  1 sammy sammy 1823 Oct 25 06:29 index.php
      drwxrwxr-x  2 sammy sammy 4096 Oct 25 06:29 js
      -rw-rw-r--  1 sammy sammy   24 Oct 25 06:29 robots.txt
      lrwxrwxrwx  1 sammy sammy   41 Oct 25 14:59 storage -> /home/sammy/travellist/storage/app/public
      -rw-rw-r--  1 sammy sammy 1194 Oct 25 06:29 web.config

      Running the test server (optional)

      You can use the artisan serve command to quickly verify that everything is set up correctly within the application, before having to configure a full-featured web server like Nginx to serve the application for the long term.

      We’ll use port 8000 to temporarily serve the application for testing. If you have the UFW firewall enabled on your server, you should first allow access to this port with:
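
      • sudo ufw allow 8000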

      Now, to run the built-in PHP server that Laravel exposes through the artisan tool, run:

      • php artisan serve --host=0.0.0.0 --port=8000

      This command will block your terminal until interrupted with CTRL+C. It will use the built-in PHP web server to serve the application for test purposes on all network interfaces, using port 8000.

      Now go to your browser and access the application using the server’s domain name or IP address on port 8000:

      http://server_domain_or_IP:8000
      

      You will see the following page:

      Travellist v1.0

      If you see this page, it means the application is successfully pulling data about locations and photos from the configured managed database. The image files are still stored in the local disk, but we’ll change this in a following step of this guide.

      When you are finished testing the application, you can stop the serve command by hitting CTRL+C.

      Don’t forget to close port 8000 again if you are running UFW on your server:
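
      One way to do so is by deleting the rule you added earlier:

      • sudo ufw delete allow 8000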

      Step 4 — Configuring Nginx to Serve the Application

      Although the built-in PHP web server is very useful for development and testing purposes, it is not intended to be used as a long-term solution to serve PHP applications. Using a full-featured web server like Nginx is the recommended way of doing that.

      To get started, we’ll move the application folder to /var/www, which is the usual location for web applications running on Nginx. First, use the mv command to move the application folder with all its contents to /var/www/travellist:

      • sudo mv ~/travellist /var/www/travellist

      Now we need to give the web server user write access to the storage and bootstrap/cache folders, where Laravel stores application-generated files. We’ll set these permissions using setfacl, a command line utility that allows for more robust and fine-grained permission settings in files and folders.

      To include read, write and execution (rwx) permissions to the web server user over the required directories, run:

      • sudo setfacl -R -m g:www-data:rwx /var/www/travellist/storage
      • sudo setfacl -R -m g:www-data:rwx /var/www/travellist/bootstrap/cache

      The application files are now in order, but we still need to configure Nginx to serve the content. To do this, we’ll create a new virtual host configuration file at /etc/nginx/sites-available:

      • sudo nano /etc/nginx/sites-available/travellist

      The following configuration file contains the recommended settings for Laravel applications on Nginx:

      /etc/nginx/sites-available/travellist

      server {
          listen 80;
          server_name server_domain_or_IP;
          root /var/www/travellist/public;
      
          add_header X-Frame-Options "SAMEORIGIN";
          add_header X-XSS-Protection "1; mode=block";
          add_header X-Content-Type-Options "nosniff";
      
          index index.html index.htm index.php;
      
          charset utf-8;
      
          location / {
              try_files $uri $uri/ /index.php?$query_string;
          }
      
          location = /favicon.ico { access_log off; log_not_found off; }
          location = /robots.txt  { access_log off; log_not_found off; }
      
          error_page 404 /index.php;
      
          location ~ \.php$ {
              fastcgi_pass unix:/var/run/php/php7.2-fpm.sock;
              fastcgi_index index.php;
              fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
              include fastcgi_params;
          }
      
          location ~ /\.(?!well-known).* {
              deny all;
          }
      }
      

      Copy this content to your /etc/nginx/sites-available/travellist file and adjust the highlighted values to align with your own configuration. Save and close the file when you’re done editing.

      To activate the new virtual host configuration file, create a symbolic link to travellist in sites-enabled:

      • sudo ln -s /etc/nginx/sites-available/travellist /etc/nginx/sites-enabled/

      Note: If you have another virtual host file that was previously configured for the same server_name used in the travellist virtual host, you might need to deactivate the old configuration by removing the corresponding symbolic link inside /etc/nginx/sites-enabled/.

      To confirm that the configuration doesn’t contain any syntax errors, you can use:
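
      • sudo nginx -t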

      You should see output like this:

      Output

      • nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
      • nginx: configuration file /etc/nginx/nginx.conf test is successful

      To apply the changes, reload Nginx with:

      • sudo systemctl reload nginx

      If you reload your browser now, the application images will be broken. That happens because we moved the application directory to a new location inside the server, and for that reason we need to re-create the symbolic link to the application storage folder.

      Remove the old link with:

      • cd /var/www/travellist
      • rm -f public/storage

      Now run the artisan command once again to generate the storage link:
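
      • php artisan storage:link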

      Now go to your browser and access the application using the server’s domain name or IP address, as defined by the server_name directive in your configuration file:

      http://server_domain_or_IP
      

      Travellist v1.0

      In the next step, we’ll integrate an object storage service into the application. This will replace the current local disk storage used for the travel photos.

      Step 5 — Integrating an S3-Compatible Object Storage into the Application

      We’ll now set up the application to use an S3-compatible object storage service for storing the travel photos exhibited on the index page. Because the application already has a few sample photos stored in the local disk, we’ll also use the s3cmd tool to upload the existing local image files to the remote object storage.

      Setting Up the S3 Driver for Laravel

      Laravel uses league/flysystem, a filesystem abstraction library that enables a Laravel application to use and combine multiple storage solutions, including local disk and cloud services. An additional package is required to use the s3 driver. We can install this package using the composer require command.

      Access the application directory:
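
      • cd /var/www/travellist

      Then install the package with: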

      • composer require league/flysystem-aws-s3-v3

      You will see output similar to this:

      Output

      Using version ^1.0 for league/flysystem-aws-s3-v3
      ./composer.json has been updated
      Loading composer repositories with package information
      Updating dependencies (including require-dev)
      Package operations: 8 installs, 0 updates, 0 removals
        - Installing mtdowling/jmespath.php (2.4.0): Loading from cache
        - Installing ralouphie/getallheaders (3.0.3): Loading from cache
        - Installing psr/http-message (1.0.1): Loading from cache
        - Installing guzzlehttp/psr7 (1.6.1): Loading from cache
        - Installing guzzlehttp/promises (v1.3.1): Loading from cache
        - Installing guzzlehttp/guzzle (6.4.1): Downloading (100%)
        - Installing aws/aws-sdk-php (3.112.28): Downloading (100%)
        - Installing league/flysystem-aws-s3-v3 (1.0.23): Loading from cache
      ...

      Now that the required packages are installed, we can update the application to connect to the object storage. First, we’ll open the .env file again to set up configuration details such as keys, bucket name, and region for your object storage service.

      Open the .env file:
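
      • nano .env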

      Include the following environment variables, replacing the highlighted values with your object store configuration details:

      /var/www/travellist/.env

      DO_SPACES_KEY=EXAMPLE7UQOTHDTF3GK4
      DO_SPACES_SECRET=exampleb8e1ec97b97bff326955375c5
      DO_SPACES_ENDPOINT=https://ams3.digitaloceanspaces.com
      DO_SPACES_REGION=ams3
      DO_SPACES_BUCKET=sammy-travellist
      

      Save and close the file when you’re done. Now open the config/filesystems.php file:

      • nano config/filesystems.php

      Within this file, we’ll create a new disk entry in the disks array. We’ll name this disk spaces, and we’ll use the environment variables we’ve set in the .env file to configure the new disk. Include the following entry in the disks array:

      config/filesystems.php

      
      'spaces' => [
         'driver' => 's3',
         'key' => env('DO_SPACES_KEY'),
         'secret' => env('DO_SPACES_SECRET'),
         'endpoint' => env('DO_SPACES_ENDPOINT'),
         'region' => env('DO_SPACES_REGION'),
         'bucket' => env('DO_SPACES_BUCKET'),
      ],
      
      

      Still in the same file, locate the cloud entry and change it to set the new spaces disk as default cloud filesystem disk:

      config/filesystems.php

      'cloud' => env('FILESYSTEM_CLOUD', 'spaces'),
      

      Save and close the file when you’re done editing. From your controllers, you can now use the Storage::cloud() method as a shortcut to access the default cloud disk. This way, the application stays flexible to use multiple storage solutions, and you can switch between providers on a per-environment basis.

      The application is now configured to use object storage, but we still need to update the code that uploads new photos to the application.

      Let’s first examine the current uploadPhoto route, located in the PhotoController class. Open the file using your text editor:

      • nano app/Http/Controllers/PhotoController.php

      app/Http/Controllers/PhotoController.php

      …
      
      public function uploadPhoto(Request $request)
      {
         $photo = new Photo();
         $place = Place::find($request->input('place'));
      
         if (!$place) {
             //add new place
             $place = new Place();
             $place->name = $request->input('place_name');
             $place->lat = $request->input('place_lat');
             $place->lng = $request->input('place_lng');
         }
      
         $place->visited = 1;
         $place->save();
      
         $photo->place()->associate($place);
         $photo->image = $request->image->store('/', 'public');
         $photo->save();
      
         return redirect()->route('Main');
      }
      
      

      This method accepts a POST request and creates a new photo entry in the photos table. It begins by checking if an existing place was selected in the photo upload form, and if that’s not the case, it will create a new place using the provided information. The place is then set to visited and saved to the database. Following that, an association is created so that the new photo is linked to the designated place. The image file is then stored in the root folder of the public disk. Finally, the photo is saved to the database. The user is then redirected to the main route, which is the index page of the application.

      The highlighted line in this code is what we’re interested in. In that line, the image file is saved to the disk using the store method. The store method is used to save files to any of the disks defined in the filesystem.php configuration file. In this case, it is using the default disk to store uploaded images.

      We will change this behavior so that the image is saved to the object store instead of the local disk. In order to do that, we need to replace the public disk by the spaces disk in the store method call. We also need to make sure the uploaded file’s visibility is set to public instead of private.

      The following code contains the full PhotoController class, including the updated uploadPhoto method:

      app/Http/Controllers/PhotoController.php

      <?php
      
      namespace App\Http\Controllers;
      
      use Illuminate\Http\Request;
      use App\Photo;
      use App\Place;
      use Illuminate\Support\Facades\Storage;
      
      class PhotoController extends Controller
      {
         public function uploadForm()
         {
             $places = Place::all();
      
             return view('upload_photo', [
                 'places' => $places
             ]);
         }
      
         public function uploadPhoto(Request $request)
         {
             $photo = new Photo();
             $place = Place::find($request->input('place'));
      
             if (!$place) {
                 //add new place
                 $place = new Place();
                 $place->name = $request->input('place_name');
                 $place->lat = $request->input('place_lat');
                 $place->lng = $request->input('place_lng');
             }
      
             $place->visited = 1;
             $place->save();
      
             $photo->place()->associate($place);
             $photo->image = $request->image->store('/', 'spaces');
             Storage::setVisibility($photo->image, 'public');
             $photo->save();
      
             return redirect()->route('Main');
         }
      }
      
      
      

      Copy the updated code to your own PhotoController so that it reflects the highlighted changes. Save and close the file when you’re done editing.

      We still need to modify the application’s main view so that it uses the object storage file URL to render the image. Open the travel_list.blade.php template:

      • nano resources/views/travel_list.blade.php

      Now locate the footer section of the page, which currently looks like this:

      resources/views/travel_list.blade.php

      @section('footer')
         <h2>Travel Photos <small>[ <a href="{{ route('Upload.form') }}">Upload Photo</a> ]</small></h2>
         @foreach ($photos as $photo)
             <div class="photo">
                <img src="https://www.digitalocean.com/{{ asset('storage') . '/' . $photo->image }}" />
                 <p>{{ $photo->place->name }}</p>
             </div>
         @endforeach
      
      @endsection
      

      Replace the current image src attribute to use the file URL from the spaces storage disk:

      <img src="https://www.digitalocean.com/{{ Storage::disk('spaces')->url($photo->image) }}" />
      

      If you go to your browser now and reload the application page, it will show only broken images. That happens because the image files for those travel photos are still only in the local disk. We need to upload the existing image files to the object storage, so that the photos already stored in the database can be successfully exhibited in the application page.

      Syncing local images with s3cmd

      The s3cmd tool can be used to sync local files with an S3-compatible object storage service. We’ll run a sync command to upload all files inside storage/app/public/photos to the object storage service.

      Access the public app storage directory:

      • cd /var/www/travellist/storage/app/public

      To have a look at the files already stored in your remote disk, you can use the s3cmd ls command:

      • s3cmd ls s3://your_bucket_name

      Now run the sync command to upload existing files in the public storage folder to the object storage:

      • s3cmd sync ./ s3://your_bucket_name --acl-public --exclude=.gitignore

      This will synchronize the current folder (storage/app/public) with the remote object storage’s root dir. You will get output similar to this:

      Output

      upload: './bermudas.jpg' -> 's3://sammy-travellist/bermudas.jpg'  [1 of 3]
       2538230 of 2538230   100% in    7s   329.12 kB/s  done
      upload: './grindavik.jpg' -> 's3://sammy-travellist/grindavik.jpg'  [2 of 3]
       1295260 of 1295260   100% in    5s   230.45 kB/s  done
      upload: './japan.jpg' -> 's3://sammy-travellist/japan.jpg'  [3 of 3]
       8940470 of 8940470   100% in   24s   363.61 kB/s  done
      Done. Uploaded 12773960 bytes in 37.1 seconds, 336.68 kB/s.

      Now, if you run s3cmd ls again, you will see that three new files were added to the root folder of your object storage bucket:

      • s3cmd ls s3://your_bucket_name

      Output

      2019-10-25 11:49   2538230   s3://sammy-travellist/bermudas.jpg
      2019-10-25 11:49   1295260   s3://sammy-travellist/grindavik.jpg
      2019-10-25 11:49   8940470   s3://sammy-travellist/japan.jpg

      Go to your browser and reload the application page. All images should be visible now, and if you inspect them using your browser debug tools, you’ll notice that they’re all using URLs from your object storage.

      Testing the Integration

      The demo application is now fully functional, storing files in a remote object storage service, and saving data to a managed MySQL database. We can now upload a few photos to test our setup.

      Access the /upload application route from your browser:

      http://server_domain_or_IP/upload
      

      You will see the following form:

      Travellist  Photo Upload Form

      You can now upload a few photos to test the object storage integration. After choosing an image from your computer, you can select an existing place from the dropdown menu, or you can add a new place by providing its name and geographic coordinates so it can be loaded in the application map.

      Step 6 — Scaling Up a DigitalOcean Managed MySQL Database with Read-Only Nodes (Optional)

      Because read-only operations are typically more frequent than write operations on database servers, it is a common practice to scale up a database cluster by setting up multiple read-only nodes. This will distribute the load generated by SELECT operations.

      To demonstrate this setup, we’ll first add 2 read-only nodes to our DigitalOcean Managed MySQL cluster. Then, we’ll configure the Laravel application to use these nodes.

      Access the DigitalOcean Cloud Panel and follow these instructions:

      1. Go to Databases and select your MySQL cluster.
      2. Click Actions and choose Add a read-only node from the drop-down menu.
      3. Configure the node options and hit the Create button. Notice that it might take several minutes for the new node to be ready.
      4. Repeat steps 2 and 3 one more time so that you have 2 read-only nodes.
      5. Note down the hosts of the two nodes as we will need them for our Laravel configuration.

      Once you have your read-only nodes ready, head back to your terminal.

      We’ll now configure our Laravel application to work with multiple database nodes. When we’re finished, queries such as INSERT and UPDATE will be forwarded to your primary cluster node, while all SELECT queries will be redirected to your read-only nodes.

      First, go to the application’s directory on the server and open your .env file using your text editor of choice:

      • cd /var/www/travellist
      • nano .env

      Locate the MySQL database configuration and comment out the DB_HOST line:

      /var/www/travellist/.env

      DB_CONNECTION=mysql
      #DB_HOST=MANAGED_MYSQL_HOST
      DB_PORT=MANAGED_MYSQL_PORT
      DB_DATABASE=MANAGED_MYSQL_DB
      DB_USERNAME=MANAGED_MYSQL_USER
      DB_PASSWORD=MANAGED_MYSQL_PASSWORD
      

      Save and close the file when you’re done. Now open the config/database.php in your text editor:
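
      • nano config/database.php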

      Look for the mysql entry inside the connections array. You should include three new items in this configuration array: read, write, and sticky. The read and write entries will set up the cluster nodes, and the sticky option set to true will reuse write connections so that data written to the database is immediately available in the same request cycle. You can set it to false if you don’t want this behavior.

      /var/www/travellist/config/database.php

      ...
            'mysql' => [
               'read' => [
                 'host' => [
                    "http://www.digitalocean.com/READONLY_NODE1_HOST',
                    "http://www.digitalocean.com/READONLY_NODE2_HOST',
                 ],
               ],
               'write' => [
                 'host' => [
                   "http://www.digitalocean.com/MANAGED_MYSQL_HOST',
                 ],
               ],
             'sticky' => true,
      ...
      

      Save and close the file when you are done editing. To test that everything works as expected, we can create a temporary route inside routes/web.php to pull some data from the database and show details about the connection being used. This way we will be able to see how the requests are being load balanced between the read-only nodes.

      Open the routes/web.php file:
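
      • nano routes/web.php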

      Include the following route:

      /var/www/travellist/routes/web.php

      ...
      
      Route::get('/mysql-test', function () {
        $places = App\Place::all();
        $results = DB::select( DB::raw("SHOW VARIABLES LIKE 'server_id'") );
      
        return "Server ID: " . $results[0]->Value;
      });
      

      Now go to your browser and access the /mysql-test application route:

      http://server_domain_or_IP/mysql-test
      

      You’ll see a page like this:

      mysql node test page

      Reload the page a few times and you will notice that the Server ID value changes, indicating that the requests are being randomly distributed between the two read-only nodes.

      Conclusion

      In this guide, we’ve prepared a Laravel 6 application for a highly available and scalable environment. We’ve outsourced the database system to an external managed MySQL service, and we’ve integrated an S3-compatible object storage service into the application to store files uploaded by users. Finally, we’ve seen how to scale up the application’s database by including additional read-only cluster nodes in the app’s configuration file.

      The updated demo application code containing all modifications made in this guide can be found within the 2.1 tag in the application’s repository on Github.

      From here, you can set up a Load Balancer to distribute load and scale your application among multiple nodes. You can also leverage this setup to create a containerized environment to run your application on Docker.




      How To Analyze Managed Redis Database Statistics Using the Elastic Stack on Ubuntu 18.04


      The author selected the Free and Open Source Fund to receive a donation as part of the Write for DOnations program.

      Introduction

      Database monitoring is the continuous process of systematically tracking various metrics that show how the database is performing. By observing performance data, you can gain valuable insights and identify possible bottlenecks, as well as find additional ways of improving database performance. Such systems often implement alerting that notifies administrators when things go wrong. Gathered statistics can be used to not only improve the configuration and workflow of the database, but also those of client applications.

      The benefit of using the Elastic Stack (ELK stack) for monitoring your managed database is its excellent support for searching and the ability to ingest new data very quickly. It does not excel at updating the data, but this trade-off is acceptable for monitoring and logging purposes, where past data is almost never changed. Elasticsearch offers a powerful means of querying the data, which you can use through Kibana to get a better understanding of how the database fares through different time periods. This will allow you to correlate database load with real-life events to gain insight into how the database is being used.

      In this tutorial, you’ll import database metrics, generated by the Redis INFO command, into Elasticsearch via Logstash. This entails configuring Logstash to periodically run the command, parse its output and send it to Elasticsearch for indexing immediately afterward. The imported data can later be analyzed and visualized in Kibana. By the end of the tutorial, you’ll have an automated system pulling in Redis statistics for later analysis.

      Prerequisites

      Step 1 — Installing and Configuring Logstash

      In this section, you will install Logstash and configure it to pull statistics from your Redis database cluster, then parse them to send to Elasticsearch for indexing.

      Start off by installing Logstash with the following command:

      • sudo apt install logstash -y

      Once Logstash is installed, enable the service to automatically start on boot:

      • sudo systemctl enable logstash

      Before configuring Logstash to pull the statistics, let’s see what the data itself looks like. To connect to your Redis database, head over to your Managed Database Control Panel, and under the Connection details panel, select Flags from the dropdown:

      Managed Database Control Panel

      You’ll be shown a preconfigured command for the Redli client, which you’ll use to connect to your database. Click Copy and run the following command on your server, replacing redli_flags_command with the command you have just copied:

      • redli_flags_command info

      Since the output of this command is long, we’ll break it down and go over its sections one at a time.

      In the output of the Redis info command, sections are marked with #, which signifies a comment. The values are populated in the form of key:value, which makes them relatively easy to parse.

      Output

      # Server
      redis_version:5.0.4
      redis_git_sha1:ab60b2b1
      redis_git_dirty:1
      redis_build_id:7909f4de3561dc50
      redis_mode:standalone
      os:Linux 5.2.14-200.fc30.x86_64 x86_64
      arch_bits:64
      multiplexing_api:epoll
      atomicvar_api:atomic-builtin
      gcc_version:9.1.1
      process_id:72
      run_id:ddb7b96c93bbd0c369c6d06ce1c02c78902e13cc
      tcp_port:25060
      uptime_in_seconds:1733
      uptime_in_days:0
      hz:10
      configured_hz:10
      lru_clock:8687593
      executable:/usr/bin/redis-server
      config_file:/etc/redis.conf

      # Clients
      connected_clients:3
      client_recent_max_input_buffer:2
      client_recent_max_output_buffer:0
      blocked_clients:0
      . . .

      The Server section contains technical information about the Redis build, such as its version and the Git commit it’s based on, while the Clients section provides the number of currently open connections.

      Output

      . . .
      # Memory
      used_memory:941560
      used_memory_human:919.49K
      used_memory_rss:4931584
      used_memory_rss_human:4.70M
      used_memory_peak:941560
      used_memory_peak_human:919.49K
      used_memory_peak_perc:100.00%
      used_memory_overhead:912190
      used_memory_startup:795880
      used_memory_dataset:29370
      used_memory_dataset_perc:20.16%
      allocator_allocated:949568
      allocator_active:1269760
      allocator_resident:3592192
      total_system_memory:1030356992
      total_system_memory_human:982.62M
      used_memory_lua:37888
      used_memory_lua_human:37.00K
      used_memory_scripts:0
      used_memory_scripts_human:0B
      number_of_cached_scripts:0
      maxmemory:463470592
      maxmemory_human:442.00M
      maxmemory_policy:allkeys-lru
      allocator_frag_ratio:1.34
      allocator_frag_bytes:320192
      allocator_rss_ratio:2.83
      allocator_rss_bytes:2322432
      rss_overhead_ratio:1.37
      rss_overhead_bytes:1339392
      mem_fragmentation_ratio:5.89
      mem_fragmentation_bytes:4093872
      mem_not_counted_for_evict:0
      mem_replication_backlog:0
      mem_clients_slaves:0
      mem_clients_normal:116310
      mem_aof_buffer:0
      mem_allocator:jemalloc-5.1.0
      active_defrag_running:0
      lazyfree_pending_objects:0
      . . .

      Here the Memory section shows how much RAM Redis has allocated for itself, as well as the maximum amount of memory it can possibly use. If it starts running out of memory, it will free up keys using the strategy you specified in the Control Panel (shown in the maxmemory_policy field in this output).

      Output

      . . .
      # Persistence
      loading:0
      rdb_changes_since_last_save:0
      rdb_bgsave_in_progress:0
      rdb_last_save_time:1568966978
      rdb_last_bgsave_status:ok
      rdb_last_bgsave_time_sec:0
      rdb_current_bgsave_time_sec:-1
      rdb_last_cow_size:217088
      aof_enabled:0
      aof_rewrite_in_progress:0
      aof_rewrite_scheduled:0
      aof_last_rewrite_time_sec:-1
      aof_current_rewrite_time_sec:-1
      aof_last_bgrewrite_status:ok
      aof_last_write_status:ok
      aof_last_cow_size:0

      # Stats
      total_connections_received:213
      total_commands_processed:2340
      instantaneous_ops_per_sec:1
      total_net_input_bytes:39205
      total_net_output_bytes:776988
      instantaneous_input_kbps:0.02
      instantaneous_output_kbps:2.01
      rejected_connections:0
      sync_full:0
      sync_partial_ok:0
      sync_partial_err:0
      expired_keys:0
      expired_stale_perc:0.00
      expired_time_cap_reached_count:0
      evicted_keys:0
      keyspace_hits:0
      keyspace_misses:0
      pubsub_channels:0
      pubsub_patterns:0
      latest_fork_usec:353
      migrate_cached_sockets:0
      slave_expires_tracked_keys:0
      active_defrag_hits:0
      active_defrag_misses:0
      active_defrag_key_hits:0
      active_defrag_key_misses:0
      . . .

      In the Persistence section, you can see the last time Redis saved the keys it stores to disk, and whether the save was successful. The Stats section provides numbers related to client and in-cluster connections, the number of times the requested key was (or wasn’t) found, and so on.

      Output

      . . .
      # Replication
      role:master
      connected_slaves:0
      master_replid:9c1d345a46d29d08537981c4fc44e312a21a160b
      master_replid2:0000000000000000000000000000000000000000
      master_repl_offset:0
      second_repl_offset:-1
      repl_backlog_active:0
      repl_backlog_size:46137344
      repl_backlog_first_byte_offset:0
      repl_backlog_histlen:0
      . . .

      Note: The Redis project uses the terms “master” and “slave” in its documentation and in various commands. DigitalOcean generally prefers the alternative terms “primary” and “replica.”
      This guide will default to the terms “primary” and “replica” whenever possible, but note that there are a few instances where the terms “master” and “slave” unavoidably come up.

      By looking at the role under Replication, you’ll know whether you’re connected to a primary or a replica node. The rest of the section provides the number of currently connected replicas and how much data the replica is lagging behind the primary. There may be additional fields if the instance you are connected to is a replica.

      Output

      . . .
      # CPU
      used_cpu_sys:1.972003
      used_cpu_user:1.765318
      used_cpu_sys_children:0.000000
      used_cpu_user_children:0.001707

      # Cluster
      cluster_enabled:0

      # Keyspace

      Under CPU, you’ll see the amount of system (used_cpu_sys) and user (used_cpu_user) CPU time Redis has consumed. The Cluster section contains only one field, cluster_enabled, which indicates whether Redis Cluster mode is enabled.
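      Since the format is so regular, it is also easy to parse outside of Logstash. As a quick illustration only (this script is not part of the tutorial’s setup; the parse_info.rb name and the sample_info string are made up), here is a minimal Ruby sketch that breaks an INFO string into sections and key:value pairs, which is essentially what the Logstash kv filter will do for you below:

      # parse_info.rb (hypothetical filename) -- splits Redis INFO output into sections and key:value pairs
      sample_info = "# Server\r\nredis_version:5.0.4\r\nuptime_in_seconds:1733\r\n\r\n# Clients\r\nconnected_clients:3\r\n"

      sections = {}
      current = nil

      sample_info.split("\r\n").each do |line|
        next if line.empty?
        if line.start_with?("#")                # section headers are comments, e.g. "# Server"
          current = line.delete("#").strip
          sections[current] = {}
        elsif current && line.include?(":")     # everything else is key:value
          key, value = line.split(":", 2)
          sections[current][key] = value
        end
      end

      p sections
      # => {"Server"=>{"redis_version"=>"5.0.4", "uptime_in_seconds"=>"1733"}, "Clients"=>{"connected_clients"=>"3"}}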

      Logstash will be tasked with periodically running the info command on your Redis database (just as you did here), parsing the results, and sending them to Elasticsearch. You’ll then be able to access them later from Kibana.

      You’ll store the configuration for indexing Redis statistics in Elasticsearch in a file named redis.conf under the /etc/logstash/conf.d directory, where Logstash stores configuration files. When started as a service, Logstash automatically runs all configuration files in that directory in the background.

      Create redis.conf using your favorite editor (for example, nano):

      • sudo nano /etc/logstash/conf.d/redis.conf

      Add the following lines:

      /etc/logstash/conf.d/redis.conf

      input {
          exec {
              command => "redis_flags_command info"
              interval => 10
              type => "redis_info"
          }
      }
      
      filter {
          kv {
              value_split => ":"
              field_split => "\r\n"
              remove_field => [ "command", "message" ]
          }
      
          ruby {
              code =>
              "
              event.to_hash.keys.each { |k|
                  if event.get(k).to_i.to_s == event.get(k) # is integer?
                      event.set(k, event.get(k).to_i) # convert to integer
                  end
                  if event.get(k).to_f.to_s == event.get(k) # is float?
                      event.set(k, event.get(k).to_f) # convert to float
                  end
              }
              puts 'Ruby filter finished'
              "
          }
      }
      
      output {
          elasticsearch {
              hosts => "http://localhost:9200"
              index => "%{type}"
          }
      }
      

      Remember to replace redis_flags_command with the command shown in the control panel that you used earlier in the step.

      You define an input, a set of filters that will run on the collected data, and an output that will send the filtered data to Elasticsearch. The input uses the exec plugin, which runs the given command on the server periodically, after a set time interval (expressed in seconds). It also specifies a type parameter that defines the document type when indexed in Elasticsearch. The exec plugin passes down an event containing two fields: command, which holds the command that was run, and message, which holds its output.

      There are two filters that will run sequentially on the data collected from the input. The kv filter (short for key-value) is built into Logstash. It parses data in the general form of key<value_separator>value and provides parameters for specifying the value and field separators. The field separator is the string that separates one key-value pair from the next. In the case of the output of the Redis info command, the field separator (field_split) is a new line (\r\n), and the value separator (value_split) is :. Lines that do not follow the defined form will be discarded, including comments.

      To configure the kv filter, you pass : to the value_split parameter and \r\n (signifying a new line) to the field_split parameter. You also order it to remove the command and message fields from the current data object by passing them to remove_field as elements of an array, because they contain data that is no longer needed.

      The kv filter represents the values it parses as the string (text) type by design. This raises an issue because Kibana can’t easily process string types, even when the value is actually a number. To solve this, you’ll use custom Ruby code to convert the number-only strings to numbers, where possible. The second filter is a ruby block that provides a code parameter accepting a string containing the code to be run.

      event is a variable that Logstash provides to your code, and it contains the current data in the filter pipeline. As noted before, filters run one after another, meaning that the Ruby filter receives the parsed data from the kv filter. The Ruby code converts the event to a Hash and traverses its keys, checking whether the value associated with each key could be represented as an integer or as a float (a number with decimals). If it can, the string value is replaced with the parsed number. When the loop finishes, it prints out a message (Ruby filter finished) to report progress.
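      If you’d like to see why the to_i.to_s and to_f.to_s comparisons work, here is a small standalone Ruby sketch (for illustration only; the type_coercion.rb name and the sample hash are stand-ins for the event Logstash provides) that applies the same logic to a few representative values from the INFO output:

      # type_coercion.rb (hypothetical filename) -- the same numeric-detection trick, outside of Logstash
      sample = { "redis_version" => "5.0.4", "connected_clients" => "3", "mem_fragmentation_ratio" => "5.89" }

      sample.keys.each do |k|
        v = sample[k]
        if v.to_i.to_s == v         # "3".to_i.to_s == "3", so this is an integer
          sample[k] = v.to_i
        elsif v.to_f.to_s == v      # "5.89".to_f.to_s == "5.89", so this is a float
          sample[k] = v.to_f
        end
      end

      p sample
      # => {"redis_version"=>"5.0.4", "connected_clients"=>3, "mem_fragmentation_ratio"=>5.89}

      A version string such as 5.0.4 fails both checks ("5.0.4".to_i.to_s is "5" and "5.0.4".to_f.to_s is "5.0"), so it stays a string, which is exactly the behavior you want.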

      The output sends the processed data to Elasticsearch for indexing. The resulting documents will be stored in the redis_info index, whose name comes from the type field defined in the input and referenced through %{type} in the output block.

      Save and close the file.

      You’ve installed Logstash using apt and configured it to periodically request statistics from Redis, process them, and send them to your Elasticsearch instance.

      Step 2 — Testing the Logstash Configuration

      Now you’ll test the configuration by running Logstash to verify it will properly pull the data.

      Logstash supports running a specific configuration by passing its file path to the -f parameter. Run the following command to test your new configuration from the last step:

      • sudo /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/redis.conf

      It may take some time to show the output, but you’ll soon see something similar to the following:

      Output

      WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
      Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
      [WARN ] 2019-09-20 11:59:53.440 [LogStash::Runner] multilocal - Ignoring the 'pipelines.yml' file because modules or command line options are specified
      [INFO ] 2019-09-20 11:59:53.459 [LogStash::Runner] runner - Starting Logstash {"logstash.version"=>"6.8.3"}
      [INFO ] 2019-09-20 12:00:02.543 [Converge PipelineAction::Create<main>] pipeline - Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>2, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
      [INFO ] 2019-09-20 12:00:03.331 [[main]-pipeline-manager] elasticsearch - Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}}
      [WARN ] 2019-09-20 12:00:03.727 [[main]-pipeline-manager] elasticsearch - Restored connection to ES instance {:url=>"http://localhost:9200/"}
      [INFO ] 2019-09-20 12:00:04.015 [[main]-pipeline-manager] elasticsearch - ES Output version determined {:es_version=>6}
      [WARN ] 2019-09-20 12:00:04.020 [[main]-pipeline-manager] elasticsearch - Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
      [INFO ] 2019-09-20 12:00:04.071 [[main]-pipeline-manager] elasticsearch - New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["http://localhost:9200"]}
      [INFO ] 2019-09-20 12:00:04.100 [Ruby-0-Thread-5: :1] elasticsearch - Using default mapping template
      [INFO ] 2019-09-20 12:00:04.146 [Ruby-0-Thread-5: :1] elasticsearch - Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
      [INFO ] 2019-09-20 12:00:04.295 [[main]-pipeline-manager] exec - Registering Exec Input {:type=>"redis_info", :command=>"...", :interval=>10, :schedule=>nil}
      [INFO ] 2019-09-20 12:00:04.315 [Converge PipelineAction::Create<main>] pipeline - Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x73adceba run>"}
      [INFO ] 2019-09-20 12:00:04.483 [Ruby-0-Thread-1: /usr/share/logstash/lib/bootstrap/environment.rb:6] agent - Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
      [INFO ] 2019-09-20 12:00:05.318 [Api Webserver] agent - Successfully started Logstash API endpoint {:port=>9600}
      Ruby filter finished
      Ruby filter finished
      Ruby filter finished
      ...

      You’ll see the Ruby filter finished message being printed at regular intervals (set to 10 seconds in the previous step), which means that the statistics are being shipped to Elasticsearch.
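      If you want an extra sanity check that documents are actually reaching Elasticsearch, you can query the redis_info index directly; a plain curl request against the same URL works just as well. The following is an optional Ruby sketch, not part of the original setup, that assumes Elasticsearch is reachable at http://localhost:9200 as configured in the output block (the check_index.rb name is made up):

      # check_index.rb (hypothetical filename) -- counts documents in the redis_info index
      # Assumes Elasticsearch is listening on http://localhost:9200, as configured in the output block above.
      require "net/http"
      require "json"

      uri = URI("http://localhost:9200/redis_info/_search?size=1")
      body = JSON.parse(Net::HTTP.get(uri))

      total = body.dig("hits", "total")
      # Elasticsearch 7+ wraps the total in a hash ({"value"=>..., "relation"=>...}); 6.x returns a plain number.
      count = total.is_a?(Hash) ? total["value"] : total

      puts "Documents indexed so far: #{count}"

      If the reported count keeps growing between runs, statistics are flowing from Redis through Logstash into Elasticsearch end to end.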

      You can exit Logstash by pressing CTRL + C on your keyboard. As previously mentioned, Logstash will automatically run all configuration files found under /etc/logstash/conf.d in the background when started as a service. Run the following command to start it:

      • sudo systemctl start logstash

      You’ve run Logstash to check if it can connect to your Redis cluster and gather data. Next, you’ll explore some of the statistical data in Kibana.

      Step 3 — Exploring Imported Data in Kibana

      In this section, you’ll explore and visualize the statistical data describing your database’s performance in Kibana.

      In your web browser, navigate to the domain where you exposed Kibana as part of the prerequisites. You’ll see the default welcome page:

      Kibana - Welcome Page

      Before exploring the data Logstash is sending to Elasticsearch, you’ll first need to add the redis_info index to Kibana. To do so, click on Management from the left-hand vertical sidebar, and then on Index Patterns under the Kibana section.

      Kibana - Index Pattern Creation

      You’ll see a form for creating a new Index Pattern. Index Patterns in Kibana provide a way to pull in data from multiple Elasticsearch indexes at once, and they can just as well be used to explore a single index.

      Beneath the Index pattern text field, you’ll see the redis_info index listed. Type it in the text field and then click on the Next step button.

      You’ll then be asked to choose a timestamp field, so you’ll later be able to narrow your searches by a time range. Logstash automatically adds one, called @timestamp. Select it from the dropdown and click on Create index pattern to finish adding the index to Kibana.

      Kibana - Index Pattern Timestamp Selection

      To create and see existing visualizations, click on the Visualize item in the left-hand vertical menu. You’ll see the following page:

      Kibana - Visualizations

      To create a new visualization, click on the Create a visualization button, then select Line from the list of types that will pop up. Then, select the redis_info* index pattern you have just created as the data source. You’ll see an empty visualization:

      Kibana - Empty Visualization

      The left-side panel provides a form for editing parameters that Kibana will use to draw the visualization, which will be shown on the central part of the screen. On the upper-right hand side of the screen is the date range picker. If the @timestamp field is being used in the visualization, Kibana will only show the data belonging to the time interval specified in the range picker.

      You’ll now visualize the average Redis memory usage during a specified time interval. Click on Y-Axis under Metrics in the panel on the left to unfold it, then select Average as the Aggregation and select used_memory as the Field. This will populate the Y axis of the plot with the average values.

      Next, click on X-Axis under Buckets. For the Aggregation, choose Date Histogram. @timestamp should be automatically selected as the Field. Then, show the visualization by clicking the blue play button at the top of the panel. If your database is brand new and largely unused, the line won’t be very long; in all cases, however, you will see an accurate portrayal of average memory usage. Here is how the resulting visualization may look after little to no usage:

      Kibana - Redis Memory Usage Visualization

      In this step, you have visualized the memory usage of your managed Redis database using Kibana. You can also use other plot types Kibana offers, such as the Visual Builder, to create more complicated graphs that portray more than one field at the same time. This will allow you to gain a better understanding of how your database is being used, which will help you optimize client applications, as well as the database itself.

      Conclusion

      You now have the Elastic Stack installed on your server and configured to pull statistics data from your managed Redis database on a regular basis. You can analyze and visualize the data using Kibana or other suitable software, which will help you gain valuable insights into how your database is performing and correlate that performance with real-world events.

      For more information about what you can do with your Redis Managed Database, visit the product docs. If you’d like to present the database statistics using another visualization type, check out the Kibana docs for further instructions.


