

      How To Analyze Managed PostgreSQL Database Statistics Using the Elastic Stack on Ubuntu 18.04


      The author selected the Free and Open Source Fund to receive a donation as part of the Write for DOnations program.

      Introduction

      Database monitoring is the continuous process of systematically tracking various metrics that show how the database is performing. By observing the performance data, you can gain valuable insights and identify possible bottlenecks, as well as find additional ways of improving database performance. Such systems often implement alerting, which notifies administrators when things go wrong. Gathered statistics can be used to not only improve the configuration and workflow of the database, but also those of client applications.

The benefit of using the Elastic Stack (ELK stack) for monitoring your managed database is its excellent support for searching and the ability to ingest new data very quickly. It does not excel at updating the data, but this trade-off is acceptable for monitoring and logging purposes, where past data is almost never changed. Elasticsearch offers powerful means of querying the data, which you can use through Kibana to get a better understanding of how the database fares through different time periods. This will allow you to correlate database load with real-life events to gain insight into how the database is being used.

In this tutorial, you’ll import database metrics, generated by the PostgreSQL statistics collector, into Elasticsearch via Logstash. This entails configuring Logstash to pull data from the database using the PostgreSQL JDBC connector and then send it to Elasticsearch for indexing immediately afterward. The imported data can later be analyzed and visualized in Kibana. Then, if your database is brand new, you’ll use pgbench, a PostgreSQL benchmarking tool, to create more interesting visualizations. In the end, you’ll have an automated system pulling in PostgreSQL statistics for later analysis.

      Prerequisites

      Step 1 — Setting up Logstash and the PostgreSQL JDBC Driver

      In this section, you will install Logstash and download the PostgreSQL JDBC driver so that Logstash will be able to connect to your managed database.

      Start off by installing Logstash with the following command:

      • sudo apt install logstash -y

      Once Logstash is installed, enable the service to automatically start on boot:

      • sudo systemctl enable logstash

      Logstash is written in Java, so in order to connect to PostgreSQL it requires the PostgreSQL JDBC (Java Database Connectivity) library to be available on the system it is running on. Because of an internal limitation, Logstash will properly load the library only if it is found under the /usr/share/logstash/logstash-core/lib/jars directory, where it stores third-party libraries it uses.

Head over to the download page of the JDBC library and copy the link to the latest version. Then, download it using curl by running the following command:

      • sudo curl https://jdbc.postgresql.org/download/postgresql-42.2.6.jar -o /usr/share/logstash/logstash-core/lib/jars/postgresql-jdbc.jar

At the time of writing, the latest version of the library was 42.2.6, with Java 8 as the supported runtime version. Ensure you download the latest version and pair it with the Java version that both the JDBC driver and Logstash support.

Logstash stores its configuration files under /etc/logstash/conf.d, while the Logstash executable itself lives under /usr/share/logstash/bin. Before you create a configuration that will pull statistics from your database, you’ll need to enable the JDBC plugin in Logstash by running the following command:

      • sudo /usr/share/logstash/bin/logstash-plugin install logstash-input-jdbc

      You’ve installed Logstash using apt and downloaded the PostgreSQL JDBC library so that Logstash can use it to connect to your managed database. In the next step, you will configure Logstash to pull statistical data from it.

      Step 2 — Configuring Logstash To Pull Statistics

      In this section, you will configure Logstash to pull metrics from your managed PostgreSQL database.

You’ll configure Logstash to watch over three of PostgreSQL’s built-in statistics views, namely:

      • pg_stat_database: provides statistics about each database, including its name, number of connections, transactions, rollbacks, rows returned by querying the database, deadlocks, and so on. It has a stats_reset field, which specifies when the statistics were last reset.
      • pg_stat_user_tables: provides statistics about each table created by the user, such as the number of inserted, deleted, and updated rows.
      • pg_stat_user_indexes: collects data about all indexes in user-created tables, such as the number of times a particular index has been scanned.
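If you’d like to preview what these views return before setting up Logstash, you can query one of them directly. This is an optional check that assumes you have a psql client available and that you substitute the connection details from your Control Panel:

      • psql -h host -p port -U username defaultdb -c "SELECT datname, numbackends, xact_commit, tup_inserted FROM pg_stat_database;"

The columns in the result are exactly the fields that will end up as document fields in Elasticsearch.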

You’ll store the configuration for indexing PostgreSQL statistics in Elasticsearch in a file named postgresql.conf under the /etc/logstash/conf.d directory, where Logstash stores configuration files. When started as a service, Logstash automatically runs the configurations found there in the background.

      Create postgresql.conf using your favorite editor (for example, nano):

      • sudo nano /etc/logstash/conf.d/postgresql.conf

      Add the following lines:

      /etc/logstash/conf.d/postgresql.conf

      input {
              # pg_stat_database
              jdbc {
                      jdbc_driver_library => ""
                      jdbc_driver_class => "org.postgresql.Driver"
                      jdbc_connection_string => "jdbc:postgresql://host:port/defaultdb"
                      jdbc_user => "username"
                      jdbc_password => "password"
                      statement => "SELECT * FROM pg_stat_database"
                      schedule => "* * * * *"
                      type => "pg_stat_database"
              }
      
              # pg_stat_user_tables
              jdbc {
                      jdbc_driver_library => ""
                      jdbc_driver_class => "org.postgresql.Driver"
                      jdbc_connection_string => "jdbc:postgresql://host:port/defaultdb"
                      jdbc_user => "username"
                      jdbc_password => "password"
                      statement => "SELECT * FROM pg_stat_user_tables"
                      schedule => "* * * * *"
                      type => "pg_stat_user_tables"
              }
      
              # pg_stat_user_indexes
              jdbc {
                      jdbc_driver_library => ""
                      jdbc_driver_class => "org.postgresql.Driver"
                      jdbc_connection_string => "jdbc:postgresql://host:port/defaultdb"
                      jdbc_user => "username"
                      jdbc_password => "password"
                      statement => "SELECT * FROM pg_stat_user_indexes"
                      schedule => "* * * * *"
                      type => "pg_stat_user_indexes"
              }
      }
      
      output {
              elasticsearch {
                      hosts => "http://localhost:9200"
                      index => "%{type}"
              }
      }
      

      Remember to replace host with your host address, port with the port to which you can connect to your database, username with the database user username, and password with its password. All these values can be found in the Control Panel of your managed database.

In this configuration, you define three JDBC inputs and one Elasticsearch output. The three inputs pull data from the pg_stat_database, pg_stat_user_tables, and pg_stat_user_indexes views, respectively. They all set the jdbc_driver_library parameter to an empty string, because the PostgreSQL JDBC library is in a folder that Logstash automatically loads.

Then, they set the jdbc_driver_class, whose value is specific to the JDBC library, and provide a jdbc_connection_string, which details how to connect to the database. The jdbc: part signifies that it is a JDBC connection, while postgresql:// indicates that the target database is PostgreSQL. Next come the host and port of the database, and after the forward slash you also specify a database to connect to; this is because PostgreSQL requires you to be connected to a database to be able to issue any queries. Here, it is set to the default database that always exists and cannot be deleted, aptly named defaultdb.

Next, they set the username and password of the user through which the database will be accessed. The statement parameter contains an SQL query that should return the data you wish to process; in this configuration, it selects all rows from the appropriate view.

The schedule parameter accepts a string in cron syntax that defines when Logstash should run this input; omitting it completely will make Logstash run it only once. Specifying * * * * *, as you have done here, tells Logstash to run it every minute. You can specify your own cron string if you want to collect data at different intervals.
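For example, if polling every minute is more frequent than you need, you could collect statistics every five minutes instead. This is a small variation of the inputs above, changing only the schedule value:

              schedule => "*/5 * * * *"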

There is only one output, which accepts data from all three inputs and sends it to Elasticsearch, which is running locally and is reachable at http://localhost:9200. The index parameter defines which Elasticsearch index the data will be sent to, and its value is passed in from the type field of the input.

      When you are done with editing, save and close the file.

      You’ve configured Logstash to gather data from various PostgreSQL statistical tables and send them to Elasticsearch for storage and indexing. Next, you’ll run Logstash to test the configuration.

      Step 3 — Testing the Logstash Configuration

      In this section, you will test the configuration by running Logstash to verify it will properly pull the data. Then, you will make this configuration run in the background by configuring it as a Logstash pipeline.

      Logstash supports running a specific configuration by passing its file path to the -f parameter. Run the following command to test your new configuration from the last step:

      • sudo /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/postgresql.conf

      It may take some time before it shows any output, which will look similar to this:

      Output

      Thread.exclusive is deprecated, use Thread::Mutex WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console [WARN ] 2019-08-02 18:29:15.123 [LogStash::Runner] multilocal - Ignoring the 'pipelines.yml' file because modules or command line options are specified [INFO ] 2019-08-02 18:29:15.154 [LogStash::Runner] runner - Starting Logstash {"logstash.version"=>"7.3.0"} [INFO ] 2019-08-02 18:29:18.209 [Converge PipelineAction::Create<main>] Reflections - Reflections took 77 ms to scan 1 urls, producing 19 keys and 39 values [INFO ] 2019-08-02 18:29:20.195 [[main]-pipeline-manager] elasticsearch - Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}} [WARN ] 2019-08-02 18:29:20.667 [[main]-pipeline-manager] elasticsearch - Restored connection to ES instance {:url=>"http://localhost:9200/"} [INFO ] 2019-08-02 18:29:21.221 [[main]-pipeline-manager] elasticsearch - ES Output version determined {:es_version=>7} [WARN ] 2019-08-02 18:29:21.230 [[main]-pipeline-manager] elasticsearch - Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7} [INFO ] 2019-08-02 18:29:21.274 [[main]-pipeline-manager] elasticsearch - New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["http://localhost:9200"]} [INFO ] 2019-08-02 18:29:21.337 [[main]-pipeline-manager] elasticsearch - Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}} [WARN ] 2019-08-02 18:29:21.369 [[main]-pipeline-manager] elasticsearch - Restored connection to ES instance {:url=>"http://localhost:9200/"} [INFO ] 2019-08-02 18:29:21.386 [[main]-pipeline-manager] elasticsearch - ES Output version determined {:es_version=>7} [WARN ] 2019-08-02 18:29:21.386 [[main]-pipeline-manager] elasticsearch - Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7} [INFO ] 2019-08-02 18:29:21.409 [[main]-pipeline-manager] elasticsearch - New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["http://localhost:9200"]} [INFO ] 2019-08-02 18:29:21.430 [[main]-pipeline-manager] elasticsearch - Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}} [WARN ] 2019-08-02 18:29:21.444 [[main]-pipeline-manager] elasticsearch - Restored connection to ES instance {:url=>"http://localhost:9200/"} [INFO ] 2019-08-02 18:29:21.465 [[main]-pipeline-manager] elasticsearch - ES Output version determined {:es_version=>7} [WARN ] 2019-08-02 18:29:21.466 [[main]-pipeline-manager] elasticsearch - Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7} [INFO ] 2019-08-02 18:29:21.468 [Ruby-0-Thread-7: :1] elasticsearch - Using default mapping template [INFO ] 2019-08-02 18:29:21.538 [Ruby-0-Thread-5: :1] elasticsearch - Using default mapping template [INFO ] 2019-08-02 18:29:21.545 [[main]-pipeline-manager] elasticsearch - New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["http://localhost:9200"]} [INFO ] 2019-08-02 18:29:21.589 [Ruby-0-Thread-9: :1] elasticsearch - Using default mapping template [INFO ] 2019-08-02 18:29:21.696 [Ruby-0-Thread-5: 
:1] elasticsearch - Attempting to install template {:manage_template=>{"index_patterns"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s", "number_of_shards"=>1}, "mappings"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}} [INFO ] 2019-08-02 18:29:21.769 [Ruby-0-Thread-7: :1] elasticsearch - Attempting to install template {:manage_template=>{"index_patterns"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s", "number_of_shards"=>1}, "mappings"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}} [INFO ] 2019-08-02 18:29:21.771 [Ruby-0-Thread-9: :1] elasticsearch - Attempting to install template {:manage_template=>{"index_patterns"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s", "number_of_shards"=>1}, "mappings"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}} [WARN ] 2019-08-02 18:29:21.871 [[main]-pipeline-manager] LazyDelegatingGauge - A gauge metric of an unknown type (org.jruby.specialized.RubyArrayOneObject) has been create for key: cluster_uuids. This may result in invalid serialization. It is recommended to log an issue to the responsible developer/development team. 
[INFO ] 2019-08-02 18:29:21.878 [[main]-pipeline-manager] javapipeline - Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>1, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>125, :thread=>"#<Thread:0x470bf1ca run>"} [INFO ] 2019-08-02 18:29:22.351 [[main]-pipeline-manager] javapipeline - Pipeline started {"pipeline.id"=>"main"} [INFO ] 2019-08-02 18:29:22.721 [Ruby-0-Thread-1: /usr/share/logstash/lib/bootstrap/environment.rb:6] agent - Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]} [INFO ] 2019-08-02 18:29:23.798 [Api Webserver] agent - Successfully started Logstash API endpoint {:port=>9600} /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/rufus-scheduler-3.0.9/lib/rufus/scheduler/cronline.rb:77: warning: constant ::Fixnum is deprecated /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/rufus-scheduler-3.0.9/lib/rufus/scheduler/cronline.rb:77: warning: constant ::Fixnum is deprecated /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/rufus-scheduler-3.0.9/lib/rufus/scheduler/cronline.rb:77: warning: constant ::Fixnum is deprecated [INFO ] 2019-08-02 18:30:02.333 [Ruby-0-Thread-22: /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/rufus-scheduler-3.0.9/lib/rufus/scheduler/jobs.rb:284] jdbc - (0.042932s) SELECT * FROM pg_stat_user_indexes [INFO ] 2019-08-02 18:30:02.340 [Ruby-0-Thread-23: /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/rufus-scheduler-3.0.9/lib/rufus/scheduler/jobs.rb:331] jdbc - (0.043178s) SELECT * FROM pg_stat_user_tables [INFO ] 2019-08-02 18:30:02.340 [Ruby-0-Thread-24: :1] jdbc - (0.036469s) SELECT * FROM pg_stat_database ...

If Logstash does not show any errors and logs that it has successfully SELECTed rows from the three views, your database metrics will be shipped to Elasticsearch. If you get an error, double check all the values in the configuration file to ensure that the machine you’re running Logstash on can connect to the managed database.
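If you’d like to confirm that documents are actually arriving, you can also query Elasticsearch directly. This optional check assumes Elasticsearch is listening on localhost:9200, as configured in the output section above:

      • curl -XGET 'http://localhost:9200/pg_stat_database/_search?pretty&size=1'

The response should contain JSON documents whose fields mirror the columns of the pg_stat_database view.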

      Logstash will continue importing the data at specified times. You can safely stop it by pressing CTRL+C.

      As previously mentioned, when started as a service, Logstash automatically runs all configuration files it finds under /etc/logstash/conf.d in the background. Run the following command to start it as a service:

      • sudo systemctl start logstash

      In this step, you ran Logstash to check if it can connect to your database and gather data. Next, you’ll visualize and explore some of the statistical data in Kibana.

      Step 4 — Exploring Imported Data in Kibana

      In this section, you’ll see how you can explore the statistical data describing your database’s performance in Kibana.

      In your browser, navigate to the Kibana installation you set up as a prerequisite. You’ll see the default welcome page.

      Kibana - Default Welcome Page

      To interact with Elasticsearch indexes in Kibana, you’ll need to create an index pattern. Index patterns specify on which indexes Kibana should operate. To create one, press on the last icon (wrench) from the left-hand vertical sidebar to open the Management page. Then, from the left menu, press on Index Patterns under Kibana. You’ll see a dialog box for creating an index pattern.

      Kibana - Add Index Pattern

      Listed are the three indexes where Logstash has been sending statistics. Type in pg_stat_database in the Index Pattern input box and then press Next step. You’ll be asked to select a field that stores time, so you’ll be able to later narrow your data by a time range. From the dropdown, select @timestamp.

      Kibana - Index Pattern Timestamp Field

      Press on Create index pattern to finish creating the index pattern. You’ll now be able to explore it using Kibana. To create a visualization, press on the second icon in the sidebar, and then on Create new visualization. Select the Line visualization when the form pops up, and then choose the index pattern you have just created (pg_stat_database). You’ll see an empty visualization.

      Kibana - Empty Visualisation

The central part of the screen shows the resulting plot, while the left-side panel governs its generation; there you set the data for the X and Y axes. In the upper right-hand corner of the screen is the date range picker. Unless you specifically choose another range when configuring the data, that range will be shown on the plot.

You’ll now visualize the average number of data tuples INSERTed per minute in the given interval. Press on Y-Axis under Metrics in the panel on the left to unfold it. Select Average as the Aggregation and select tup_inserted as the Field. This will populate the Y axis of the plot with the average values.

Next, press on X-Axis under Buckets. For the Aggregation, choose Date Histogram. @timestamp should be automatically selected as the Field. Then, press on the blue play button on the top of the panel to generate your graph. If your database is brand new and unused, you won’t see much yet; whatever does appear, however, is an accurate portrayal of database usage.

      Kibana supports many other visualization forms—you can explore other forms in the Kibana documentation. You can also add the two remaining indexes, mentioned in Step 2, into Kibana to be able to visualize them as well.

In this step, you have learned how to visualize some of the PostgreSQL statistical data using Kibana.

      Step 5 — (Optional) Benchmarking Using pgbench

If you haven’t yet used your database outside of this tutorial, you can complete this step to create more interesting visualizations by using pgbench to benchmark your database. pgbench runs the same SQL commands over and over, simulating real-world database use by an actual client.

      You’ll first need to install pgbench by running the following command:

      • sudo apt install postgresql-contrib -y

      Because pgbench will insert and update test data, you’ll need to create a separate database for it. To do so, head over to the Users & Databases tab in the Control Panel of your managed database, and scroll down to the Databases section. Type in pgbench as the name of the new database, and then press on Save. You’ll pass this name, as well as the host, port, and username information to pgbench.

      Accessing Databases section in DO control panel

      Before actually running pgbench, you’ll need to run it with the -i flag to initialize its database:

      • pgbench -h host -p port -U username -i pgbench

      You’ll need to replace host with your host address, port with the port to which you can connect to your database, and username with the database user username. You can find all these values in the Control Panel of your managed database.

      Notice that pgbench does not have a password argument; instead, you’ll be asked for it every time you run it.

      The output will look like the following:

      Output

      NOTICE: table "pgbench_history" does not exist, skipping NOTICE: table "pgbench_tellers" does not exist, skipping NOTICE: table "pgbench_accounts" does not exist, skipping NOTICE: table "pgbench_branches" does not exist, skipping creating tables... 100000 of 100000 tuples (100%) done (elapsed 0.16 s, remaining 0.00 s) vacuum... set primary keys... done.

      pgbench created four tables, which it will use for benchmarking, and populated them with some example rows. You’ll now be able to run benchmarks.

The two most important arguments for limiting how long the benchmark runs are -t, which specifies the number of transactions to complete, and -T, which defines how many seconds the benchmark should run for. These two options are mutually exclusive. At the end of each benchmark, you’ll receive statistics, such as the number of transactions per second (tps).
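For example, to cap a run at a fixed number of transactions instead of a time limit, you could pass -t; the transaction count below is arbitrary:

      • pgbench -h host -p port -U username pgbench -t 1000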

      Now, start a benchmark that will last for 30 seconds by running the following command:

      • pgbench -h host -p port -U username pgbench -T 30

      The output will look like:

      Output

      starting vacuum...end. transaction type: <builtin: TPC-B (sort of)> scaling factor: 1 query mode: simple number of clients: 1 number of threads: 1 duration: 30 s number of transactions actually processed: 7602 latency average = 3.947 ms tps = 253.382298 (including connections establishing) tps = 253.535257 (excluding connections establishing)

      In this output, you see the general info about the benchmark, such as the total number of transactions executed. The effect of these benchmarks is that the statistics Logstash ships to Elasticsearch will reflect that number, which will in turn make visualizations in Kibana more interesting and closer to real-world graphs. You can run the preceding command a few more times, and possibly alter the duration.
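If you want the load, and therefore the graphs, to vary more, you can also raise the number of concurrent client connections with the -c flag and the number of worker threads with -j. The values below are only an example:

      • pgbench -h host -p port -U username pgbench -T 30 -c 10 -j 2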

      When you are done, head over to Kibana and press on Refresh in the upper right corner. You’ll now see a different line than before, which shows the number of INSERTs. Feel free to change the time range of the data shown by changing the values in the picker positioned above the refresh button. Here is how the graph may look after multiple benchmarks of varying duration:

      Kibana - Visualization After Benchmarks

      You’ve used pgbench to benchmark your database, and evaluated the resulting graphs in Kibana.

      Conclusion

You now have the Elastic Stack installed on your server and configured to pull statistics data from your managed PostgreSQL database on a regular basis. You can analyze and visualize the data using Kibana, or other suitable software, which will help you gain valuable insights into how your database is performing and how its load correlates with real-world events.

      For more information about what you can do with your PostgreSQL Managed Database, visit the product docs.




      How To Monitor Your Managed PostgreSQL Database Using Nagios Core on Ubuntu 18.04


      The author selected the Free and Open Source Fund to receive a donation as part of the Write for DOnations program.

      Introduction

      Database monitoring is key to understanding how a database performs over time. It can help you uncover hidden usage problems and bottlenecks happening in your database. Implementing database monitoring systems can quickly turn out to be a long-term advantage, which will positively influence your infrastructure management process. You’ll be able to swiftly react to status changes of your database and will quickly be notified when monitored services return to normal functioning.

Nagios Core is a popular monitoring system that you can use to monitor your managed database. The benefits of using Nagios for this task are its versatility (it's easy to configure and use), a large repository of available plugins, and, most importantly, integrated alerting.

In this tutorial, you will configure PostgreSQL database monitoring in Nagios Core using the check_postgres Nagios plugin and set up Slack-based alerting. In the end, you’ll have a monitoring system in place for your managed PostgreSQL database, and will be notified immediately of status changes in various functionality.

      Prerequisites

      • An Ubuntu 18.04 server with root privileges, and a secondary, non-root account. You can set this up by following this initial server setup guide. For this tutorial the non-root user is sammy.

      • Nagios Core installed on your server. To achieve this, complete the first five steps of the How To Install Nagios 4 and Monitor Your Servers on Ubuntu 18.04 tutorial.

      • A DigitalOcean account and a PostgreSQL managed database provisioned from DigitalOcean with connection information available. Make sure that your server’s IP address is on the whitelist. To learn more about DigitalOcean Managed Databases, visit the product docs.

      • A Slack account with full access, added to a workspace where you’ll want to receive status updates.

      Step 1 — Installing check_postgres

In this section, you’ll download the latest version of the check_postgres plugin from GitHub and make it available to Nagios Core. You’ll also install the PostgreSQL client (psql), so that check_postgres will be able to connect to your managed database.

      Start off by installing the PostgreSQL client by running the following command:

      • sudo apt install postgresql-client

Next, you’ll download check_postgres to your home directory. First, navigate to it:

      • cd ~

      Head over to the GitHub releases page and copy the link of the latest version of the plugin. At the time of writing, the latest version of check_postgres was 2.24.0; keep in mind that this will update, and where possible it's best practice to use the latest version.

      Now download it using curl:

      • curl -LO https://github.com/bucardo/check_postgres/releases/download/2.24.0/check_postgres-2.24.0.tar.gz

      Extract it using the following command:

      • tar xvf check_postgres-*.tar.gz

      This will create a directory with the same name as the file you have downloaded. That folder contains the check_postgres executable, which you'll need to copy to the directory where Nagios stores its plugins (usually /usr/local/nagios/libexec/). Copy it by running the following command:

      • sudo cp check_postgres-*/check_postgres.pl /usr/local/nagios/libexec/

      Next, you'll need to give the nagios user ownership of it, so that it can be run from Nagios:

      • sudo chown nagios:nagios /usr/local/nagios/libexec/check_postgres.pl

check_postgres is now available to Nagios and can be used from it. However, it provides a lot of commands pertaining to different aspects of PostgreSQL, and for easier service maintenance it's better to break them up so that they can be called separately. You'll achieve this by creating a symlink to every check_postgres command in the plugin directory.

      Navigate to the directory where Nagios stores plugins by running the following command:

      • cd /usr/local/nagios/libexec

      Then, create the symlinks with:

      • sudo perl check_postgres.pl --symlinks

      The output will look like this:

      Output

      Created "check_postgres_archive_ready" Created "check_postgres_autovac_freeze" Created "check_postgres_backends" Created "check_postgres_bloat" Created "check_postgres_checkpoint" Created "check_postgres_cluster_id" Created "check_postgres_commitratio" Created "check_postgres_connection" Created "check_postgres_custom_query" Created "check_postgres_database_size" Created "check_postgres_dbstats" Created "check_postgres_disabled_triggers" Created "check_postgres_disk_space" Created "check_postgres_fsm_pages" Created "check_postgres_fsm_relations" Created "check_postgres_hitratio" Created "check_postgres_hot_standby_delay" Created "check_postgres_index_size" Created "check_postgres_indexes_size" Created "check_postgres_last_analyze" Created "check_postgres_last_autoanalyze" Created "check_postgres_last_autovacuum" Created "check_postgres_last_vacuum" Created "check_postgres_listener" Created "check_postgres_locks" Created "check_postgres_logfile" Created "check_postgres_new_version_bc" Created "check_postgres_new_version_box" Created "check_postgres_new_version_cp" Created "check_postgres_new_version_pg" Created "check_postgres_new_version_tnm" Created "check_postgres_pgagent_jobs" Created "check_postgres_pgb_pool_cl_active" Created "check_postgres_pgb_pool_cl_waiting" Created "check_postgres_pgb_pool_maxwait" Created "check_postgres_pgb_pool_sv_active" Created "check_postgres_pgb_pool_sv_idle" Created "check_postgres_pgb_pool_sv_login" Created "check_postgres_pgb_pool_sv_tested" Created "check_postgres_pgb_pool_sv_used" Created "check_postgres_pgbouncer_backends" Created "check_postgres_pgbouncer_checksum" Created "check_postgres_prepared_txns" Created "check_postgres_query_runtime" Created "check_postgres_query_time" Created "check_postgres_relation_size" Created "check_postgres_replicate_row" Created "check_postgres_replication_slots" Created "check_postgres_same_schema" Created "check_postgres_sequence" Created "check_postgres_settings_checksum" Created "check_postgres_slony_status" Created "check_postgres_table_size" Created "check_postgres_timesync" Created "check_postgres_total_relation_size" Created "check_postgres_txn_idle" Created "check_postgres_txn_time" Created "check_postgres_txn_wraparound" Created "check_postgres_version" Created "check_postgres_wal_files"

The script listed all the commands it created symlinks for. These can now be executed from the command line as usual.

      You've downloaded and installed the check_postgres plugin. You have also created symlinks to all the commands of the plugin, so that they can be used individually from Nagios. In the next step, you'll create a connection service file, which check_postgres will use to connect to your managed database.

      Step 2 — Configuring Your Database

      In this section, you will create a PostgreSQL connection service file containing the connection information of your database. Then, you will test the connection data by invoking check_postgres on it.

      The connection service file is by convention called pg_service.conf, and must be located under /etc/postgresql-common/. Create it for editing with your favorite editor (for example, nano):

      • sudo nano /etc/postgresql-common/pg_service.conf

Add the following lines, replacing the placeholder values with the actual values shown in your Managed Database Control Panel under the Connection Details section:

      /etc/postgresql-common/pg_service.conf

      [managed-db]
      host=host
      port=port
      user=username
      password=password
      dbname=defaultdb
      sslmode=require
      

The connection service file can house multiple database connection info groups. The beginning of a group is signaled by putting its name in square brackets. After that come the connection parameters (host, port, user, password, and so on), one per line, each of which must be given a value.

      Save and close the file when you are finished.
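Optionally, you can verify the file with a plain psql connection before involving check_postgres. This assumes libpq reads the system-wide service file from /etc/postgresql-common, which is the default location on Ubuntu:

      • psql "service=managed-db"

If the settings are correct, you'll land at a psql prompt connected to defaultdb; type \q to exit.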

      You'll now test the validity of the configuration by connecting to the database via check_postgres by running the following command:

      • ./check_postgres.pl --dbservice=managed-db --action=connection

Here, you tell check_postgres which database connection info group to use with the parameter --dbservice, and also tell it to only try connecting by setting connection as the action.

      Your output will look similar to this:

      Output

      POSTGRES_CONNECTION OK: service=managed-db version 11.4 | time=0.10s

      This means that check_postgres succeeded in connecting to the database, according to the parameters from pg_service.conf. If you get an error, double check what you have just entered in that config file.

      You've created and filled out a PostgreSQL connection service file, which works as a connection string. You have also tested the connection data by running check_postgres on it and observing the output. In the next step, you will configure Nagios to monitor various parts of your database.

      Step 3 — Creating Monitoring Services in Nagios

      Now you will configure Nagios to watch over various metrics of your database by defining a host and multiple services, which will call the check_postgres plugin and its symlinks.

      Nagios stores your custom configuration files under /usr/local/nagios/etc/objects. New files you add there must be manually enabled in the central Nagios config file, located at /usr/local/nagios/etc/nagios.cfg. You'll now define commands, a host, and multiple services, which you'll use to monitor your managed database in Nagios.

      First, create a folder under /usr/local/nagios/etc/objects to store your PostgreSQL related configuration by running the following command:

      • sudo mkdir /usr/local/nagios/etc/objects/postgresql

You'll store Nagios commands for check_postgres in a file named commands.cfg. Create it for editing:

      • sudo nano /usr/local/nagios/etc/objects/postgresql/commands.cfg

      Add the following lines:

      /usr/local/nagios/etc/objects/postgresql/commands.cfg

      define command {
          command_name           check_postgres_connection
          command_line           /usr/local/nagios/libexec/check_postgres_connection --dbservice=$ARG1$
      }
      
      define command {
          command_name           check_postgres_database_size
          command_line           /usr/local/nagios/libexec/check_postgres_database_size --dbservice=$ARG1$ --critical='$ARG2$'
      }
      
      define command {
          command_name           check_postgres_locks
          command_line           /usr/local/nagios/libexec/check_postgres_locks --dbservice=$ARG1$
      }
      
      define command {
          command_name           check_postgres_backends
          command_line           /usr/local/nagios/libexec/check_postgres_backends --dbservice=$ARG1$
      }
      

      Save and close the file.

In this file, you define four Nagios commands that call different parts of the check_postgres plugin (checking connectivity, getting the number of locks and connections, and the size of the whole database). They all accept an argument that is passed to the --dbservice parameter, specifying which of the connection groups defined in pg_service.conf to use.

      The check_postgres_database_size command accepts a second argument that gets passed to the --critical parameter, which specifies the point at which the database storage is becoming full. Accepted values include 1 KB for a kilobyte, 1 MB for a megabyte, and so on, up to exabytes (EB). A number without a capacity unit is treated as being expressed in bytes.
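Before wiring these commands into services, you can run one of the symlinked checks by hand to see the kind of output Nagios will receive; the critical threshold below is only an example value:

      • /usr/local/nagios/libexec/check_postgres_database_size --dbservice=managed-db --critical='10 GB'

The output follows the same POSTGRES_<ACTION> OK/WARNING/CRITICAL pattern you saw in Step 2.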

      Now that the necessary commands are defined, you'll define the host (essentially, the database) and its monitoring services in a file named services.cfg. Create it using your favorite editor:

      • sudo nano /usr/local/nagios/etc/objects/postgresql/services.cfg

      Add the following lines, replacing db_max_storage_size with a value pertaining to the available storage of your database. It is recommended to set it to 90 percent of the storage size you have allocated to it:

      /usr/local/nagios/etc/objects/postgresql/services.cfg

      define host {
            use                    linux-server
            host_name              postgres
            check_command          check_postgres_connection!managed-db
      }
      
      define service {
            use                    generic-service
            host_name              postgres
            service_description    PostgreSQL Connection
            check_command          check_postgres_connection!managed-db
            notification_options   w,u,c,r,f,s
      }
      
      define service {
            use                    generic-service
            host_name              postgres
            service_description    PostgreSQL Database Size
            check_command          check_postgres_database_size!managed-db!db_max_storage_size
            notification_options   w,u,c,r,f,s
      }
      
      define service {
            use                    generic-service
            host_name              postgres
            service_description    PostgreSQL Locks
            check_command          check_postgres_locks!managed-db
            notification_options   w,u,c,r,f,s
      }
      
      define service {
            use                    generic-service
            host_name              postgres
            service_description    PostgreSQL Backends
            check_command          check_postgres_backends!managed-db
            notification_options   w,u,c,r,f,s
      }
      

You first define a host, so that Nagios will know what entity the services relate to. Then, you create four services, which call the commands you just defined. Each one passes managed-db as the argument, indicating that the database you defined in Step 2 should be monitored.

Regarding notification options, each service specifies that notifications should be sent out when the service state becomes WARNING, UNKNOWN, CRITICAL, OK (when it recovers from downtime), when the service starts flapping, or when scheduled downtime starts or ends. Without explicitly giving this option a value, no notifications would be sent out (to available contacts) at all, except when triggered manually.

      Save and close the file.

      Next, you'll need to explicitly tell Nagios to read config files from this new directory, by editing the general Nagios config file. Open it for editing by running the following command:

      • sudo nano /usr/local/nagios/etc/nagios.cfg

Find the cfg_dir=/usr/local/nagios/etc/servers line in the file:

      /usr/local/nagios/etc/nagios.cfg

      ...
      # directive as shown below:
      
      cfg_dir=/usr/local/nagios/etc/servers
      #cfg_dir=/usr/local/nagios/etc/printers
      ...
      

Above it, add a cfg_dir line pointing to your new postgresql directory, so that this part of the file looks like this:

      /usr/local/nagios/etc/nagios.cfg

      ...
      cfg_dir=/usr/local/nagios/etc/objects/postgresql
      cfg_dir=/usr/local/nagios/etc/servers
      ...
      

      Save and close the file. This line tells Nagios to load all config files from the /usr/local/nagios/etc/objects/postgresql directory, where your configuration files are located.

      Before restarting Nagios, check the validity of the configuration by running the following command:

      • sudo /usr/local/nagios/bin/nagios -v /usr/local/nagios/etc/nagios.cfg

      The end of the output will look similar to this:

      Output

      Total Warnings: 0 Total Errors: 0 Things look okay - No serious problems were detected during the pre-flight check

      This means that Nagios found no errors in the configuration. If it shows you an error, you'll also see a hint as to what went wrong, so you'll be able to fix the error more easily.

      To make Nagios reload its configuration, restart its service by running the following command:

      • sudo systemctl restart nagios

      You can now navigate to Nagios in your browser. Once it loads, press on the Services option from the left-hand menu. You'll see the postgres host and a list of services, along with their current statuses:

      PostgreSQL Monitoring Services - Pending

      They will all soon turn to green and show an OK status. You'll see the command output under the Status Information column. You can click on the service name and see detailed information about its status and availability.

      You've added check_postgres commands, a host, and multiple services to your Nagios installation to monitor your database. You've also checked that the services are working properly by examining them via the Nagios web interface. In the next step, you will configure Slack-based alerting.

      Step 4 — Configuring Slack Alerting

      In this section, you will configure Nagios to alert you about events via Slack, by posting them into desired channels in your workspace.

Before you start, log in to your desired workspace on Slack and create two channels where you'll want to receive status messages from Nagios: one for host notifications and the other for service notifications. If you wish, you can create only one channel and receive both kinds of alerts in it.

      Then, head over to the Nagios app in the Slack App Directory and press on Add Configuration. You'll see a page for adding the Nagios Integration.

      Slack - Add Nagios Integration

      Press on Add Nagios Integration. When the page loads, scroll down and take note of the token, because you'll need it further on.

      Slack - Integration Token

      You'll now install and configure the Slack plugin (written in Perl) for Nagios on your server. First, install the required Perl prerequisites by running the following command:

      • sudo apt install libwww-perl libcrypt-ssleay-perl -y

      Then, download the plugin to your Nagios plugin directory:

      • sudo curl https://raw.githubusercontent.com/tinyspeck/services-examples/master/nagios.pl -o slack.pl

Make it executable by running the following command:

      • sudo chmod +x slack.pl

      Now, you'll need to edit it to connect to your workspace using the token you got from Slack. Open it for editing:

      • sudo nano slack.pl

      Find the following lines in the file:

      /usr/local/nagios/libexec/slack.pl

      ...
      my $opt_domain = "foo.slack.com"; # Your team's domain
      my $opt_token = "your_token"; # The token from your Nagios services page
      ...
      

Replace foo.slack.com with your workspace domain and your_token with your Nagios app integration token, then save and close the file. The script will now be able to send proper requests to Slack, which you can test by running the following command:

      • ./slack.pl -field slack_channel=#your_channel_name -field HOSTALIAS="Test Host" -field HOSTSTATE="UP" -field HOSTOUTPUT="Host is UP" -field NOTIFICATIONTYPE="RECOVERY"

      Replace your_channel_name with the name of the channel where you'll want to receive status alerts. The script will output information about the HTTP request it made to Slack, and if everything went through correctly, the last line of the output will be ok. If you get an error, double check if the Slack channel you specified exists in the workspace.

      You can now head over to your Slack workspace and select the channel you specified. You'll see a test message coming from Nagios.

      Slack - Nagios Test Message

      This confirms that you have properly configured the Slack script. You'll now move on to configuring Nagios to alert you via Slack using this script.

      You'll need to create a contact for Slack and two commands that will send messages to it. You'll store this config in a file named slack.cfg, in the same folder as the previous config files. Create it for editing by running the following command:

      • sudo nano /usr/local/nagios/etc/objects/postgresql/slack.cfg

      Add the following lines:

      /usr/local/nagios/etc/objects/postgresql/slack.cfg

      define contact {
            contact_name                             slack
            alias                                    Slack
            service_notification_period              24x7
            host_notification_period                 24x7
            service_notification_options             w,u,c,f,s,r
            host_notification_options                d,u,r,f,s
            service_notification_commands            notify-service-by-slack
            host_notification_commands               notify-host-by-slack
      }
      
      define command {
            command_name     notify-service-by-slack
            command_line     /usr/local/nagios/libexec/slack.pl -field slack_channel=#service_alerts_channel
      }
      
      define command {
            command_name     notify-host-by-slack
            command_line     /usr/local/nagios/libexec/slack.pl -field slack_channel=#host_alerts_channel
      }
      

      Here you define a contact named slack, state that it can be contacted anytime and specify which commands to use for notifying service and host related events. Those two commands are defined after it and call the script you have just configured. You'll need to replace service_alerts_channel and host_alerts_channel with the names of the channels where you want to receive service and host messages, respectively. If preferred, you can use the same channel names.

      Similarly to the service creation in the last step, setting service and host notification options on the contact is crucial, because it governs what kind of alerts the contact will receive. Omitting those options would result in sending out notifications only when manually triggered from the web interface.

      When you are done with editing, save and close the file.

      To enable alerting via the slack contact you just defined, you'll need to add it to the admin contact group, defined in the contacts.cfg config file, located under /usr/local/nagios/etc/objects/. Open it for editing by running the following command:

      • sudo nano /usr/local/nagios/etc/objects/contacts.cfg

      Find the config block that looks like this:

      /usr/local/nagios/etc/objects/contacts.cfg

      define contactgroup {
      
          contactgroup_name       admins
          alias                   Nagios Administrators
          members                 nagiosadmin
      }
      

      Add slack to the list of members, like so:

      /usr/local/nagios/etc/objects/contacts.cfg

      define contactgroup {
      
          contactgroup_name       admins
          alias                   Nagios Administrators
          members                 nagiosadmin,slack
      }
      

      Save and close the file.

By default, Nagios does not make host and service information available to scripts via environment variables, which is what the Slack script requires in order to send meaningful messages. To remedy this, you'll need to set the enable_environment_macros setting in nagios.cfg to 1. Open it for editing by running the following command:

      • sudo nano /usr/local/nagios/etc/nagios.cfg

      Find the line that looks like this:

      /usr/local/nagios/etc/nagios.cfg

      enable_environment_macros=0
      

      Change the value to 1, like so:

      /usr/local/nagios/etc/nagios.cfg

      enable_environment_macros=1
      

      Save and close the file.

      Test the validity of the Nagios configuration by running the following command:

      • sudo /usr/local/nagios/bin/nagios -v /usr/local/nagios/etc/nagios.cfg

      The end of the output will look like:

      Output

      Total Warnings: 0 Total Errors: 0 Things look okay - No serious problems were detected during the pre-flight check

      Proceed to restart Nagios by running the following command:

      • sudo systemctl restart nagios

      To test the Slack integration, you'll send out a custom notification via the web interface. Reload the Nagios Services status page in your browser. Press on the PostgreSQL Backends service and press on Send custom service notification on the right when the page loads.

      Nagios - Custom Service Notification

      Type in a comment of your choice and press on Commit, and then press on Done. You'll immediately receive a new message in Slack.

      Slack - Status Alert From Nagios

      You have now integrated Slack with Nagios, so you'll receive messages about critical events and status changes immediately. You've also tested the integration by manually triggering an event from within Nagios.

      Conclusion

You now have Nagios Core configured to watch over your managed PostgreSQL database and report any status changes and events to Slack, so you'll always be in the loop about what is happening to your database. This will allow you to react swiftly in case of an emergency, because you'll be getting the status feed in real time.

      If you'd like to learn more about the features of check_postgres, check out its docs, where you'll find a lot more commands that you can possibly use.

      For more information about what you can do with your PostgreSQL Managed Database, visit the product docs.




      How To Migrate a MySQL Database to PostgreSQL Using pgLoader


      Introduction

PostgreSQL, also known as “Postgres,” is an open-source relational database management system (RDBMS). It has seen drastic growth in popularity in recent years, with many developers and companies migrating their data to Postgres from other database solutions.

The prospect of migrating a database can be intimidating, especially when migrating from one database management system to another. pgLoader is an open-source database migration tool that aims to simplify the process of migrating to PostgreSQL. It supports migrations from several file types and RDBMSs, including MySQL and SQLite, to PostgreSQL.

      This tutorial provides instructions on how to install pgLoader and use it to migrate a remote MySQL database to PostgreSQL over an SSL connection. Near the end of the tutorial, we will also briefly touch on a few different migration scenarios where pgLoader may be useful.

      Prerequisites

      To complete this tutorial, you’ll need the following:

      • Access to two servers, each running Ubuntu 18.04. Both servers should have a firewall and a non-root user with sudo privileges configured. To set these up, you can follow our Initial Server Setup guide for Ubuntu 18.04.
      • MySQL installed on one of the servers. To set this up, follow Steps 1, 2, and 3 of our guide on How To Install MySQL on Ubuntu 18.04. Please note that in order to complete all the prerequisite tutorials linked here, you will need to configure your root MySQL user to authenticate with a password, as described in Step 3 of the MySQL installation guide.
      • PostgreSQL installed on the other server. To set this up, complete Step 1 of our guide How To Install and Use PostgreSQL on Ubuntu 18.04.
      • Your MySQL server should also be configured to accept encrypted connections. To set this up, complete every step of our tutorial on How To Configure SSL/TLS for MySQL on Ubuntu 18.04, including the optional Step 6. As you follow this guide, be sure to use your PostgreSQL server as the MySQL client machine, as you will need to be able to connect to your MySQL server from your Postgres machine in order to migrate the data with pgLoader.

Please note that throughout this guide, the server on which you installed MySQL will be referred to as the "MySQL server," and the other server will be referred to as the "PostgreSQL" or "Postgres" server. Pay attention to which machine each command should be run on as you follow this tutorial, so as to avoid any confusion.

      Step 1 — (Optional) Creating a Sample Database and Table in MySQL

      This step describes the process of creating a test database and populating it with dummy data. We encourage you to practice using pgLoader with this test case, but if you already have a database you want to migrate, you can move on to the next step.

Start by opening up the MySQL prompt on your MySQL server:

      • mysql -u root -p

      After entering your root MySQL user's password, you will see the MySQL prompt.

      From there, create a new database by running the following command. You can name your database whatever you'd like, but in this guide we will name it source_db:

      • CREATE DATABASE source_db;

Then switch to this database with the USE command:

      • USE source_db;

      Output

      Database changed

      Within this database, use the following command to create a sample table. Here, we will name this table sample_table but feel free to give it another name:

      • CREATE TABLE sample_table (
      • employee_id INT PRIMARY KEY,
      • first_name VARCHAR(50),
      • last_name VARCHAR(50),
      • start_date DATE,
      • salary VARCHAR(50)
      • );

      Then populate this table with some sample employee data using the following command:

      • INSERT INTO sample_table (employee_id, first_name, last_name, start_date, salary)
      • VALUES (1, 'Elizabeth', 'Cotten', '2007-11-11', '$105433.18'),
      • (2, 'Yanka', 'Dyagileva', '2017-10-30', '$107540.67'),
      • (3, 'Lee', 'Dorsey', '2013-06-04', '$118024.04'),
      • (4, 'Kasey', 'Chambers', '2010-08-18', '$116456.98'),
      • (5, 'Bram', 'Tchaikovsky', '2018-09-16', '$61989.50');
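If you'd like to confirm that the rows were inserted, you can run a quick query while still at the MySQL prompt:

      • SELECT * FROM sample_table;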

Following this, you can close the MySQL prompt:

      • exit

      Now that you have a sample database loaded with dummy data, you can move on to the next step in which you will install pgLoader on your PostgreSQL server.

      Step 2 — Installing pgLoader

      pgLoader is a program that can load data into a PostgreSQL database from a variety of different sources. It uses PostgreSQL's COPY command to copy data from a source database or file — such as a comma-separated values (CSV) file — into a target PostgreSQL database.
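For orientation, once pgLoader is installed it can be driven with a single command that names a source and a target. The connection strings below are placeholders, and the later steps of this tutorial build a fuller command that also handles SSL:

      • pgloader mysql://mysql_user:password@mysql_host/source_db postgresql://postgres_user:password@postgres_host/new_db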

      pgLoader is available from the default Ubuntu APT repositories and you can install it using the apt command. However, in this guide we will take advantage of pgLoader's useSSL option, a feature that allows for migrations from MySQL over an SSL connection. This feature is only available in the latest version of pgLoader which, as of this writing, can only be installed using the source code from its GitHub repository.

      Before installing pgLoader, you will need to install its dependencies. If you haven't done so recently, update your Postgres server's package index:
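
      • sudo apt update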

      Then install the following packages:

      • sbcl: A Common Lisp compiler
      • unzip: A de-archiver for .zip files
      • libsqlite3-dev: A collection of development files for SQLite 3
      • gawk: Short for "GNU awk", a pattern scanning and processing language
      • curl: A command line tool for transferring data from a URL
      • make: A utility for managing package compilation
      • freetds-dev: A client library for MS SQL and Sybase databases
      • libzip-dev: A library for reading, creating, and modifying zip archives

      Use the following command to install these dependencies:

      • sudo apt install sbcl unzip libsqlite3-dev gawk curl make freetds-dev libzip-dev

      When prompted, confirm that you want to install these packages by pressing ENTER.

      Next, navigate to the pgLoader GitHub project's Releases page and find the latest release. For this guide, we will use the latest release at the time of this writing: version 3.6.1. Scroll down to its Assets menu and copy the link for the tar.gz file labeled Source code. Then paste the link into the following wget command. This will download the tarball to your server:

      • wget https://github.com/dimitri/pgloader/archive/v3.6.1.tar.gz

      Extract the tarball:
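
      • tar xvf v3.6.1.tar.gz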

      This will create a number of new directories and files on your server. Navigate into the new pgLoader parent directory:
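
      • cd pgloader-3.6.1/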

      Then use the make utility to compile the pgloader binary:
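
      • make pgloader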

      This command will take some time to build the pgloader binary.

      Move the binary file into the /usr/local/bin directory, the location where Ubuntu looks for executable files:

      • sudo mv ./build/bin/pgloader /usr/local/bin/

      You can test that pgLoader was installed correctly by checking its version, like so:
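
      • pgloader --version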

      Output

      pgloader version "3.6.1" compiled with SBCL 1.4.5.debian

      pgLoader is now installed, but before you can begin your migration you'll need to make some configuration changes to both your PostgreSQL and MySQL instances. We'll focus on the PostgreSQL server first.

      Step 3 — Creating a PostgreSQL Role and Database

      The pgloader command works by copying source data, either from a file or directly from a database, and inserting it into a PostgreSQL database. For this reason, you must either run pgLoader as a Linux user who has access to your Postgres database or you must specify a PostgreSQL role with the appropriate permissions in your load command.

      PostgreSQL manages database access through the use of roles. Depending on how the role is configured, it can be thought of as either a database user or a group of database users. In most RDBMSs, you create a user with the CREATE USER SQL command. Postgres, however, comes installed with a handy script called createuser. This script serves as a wrapper for the CREATE USER SQL command that you can run directly from the command line.

      Note: In PostgreSQL, you authenticate as a database user using the Identification Protocol, or ident, authentication method by default, rather than with a password. This involves PostgreSQL taking the client's Ubuntu username and using it as the allowed Postgres database username. This allows for greater security in many cases, but it can also cause issues in instances where you'd like an outside program to connect to one of your databases.

      pgLoader can load data into a Postgres database through a role that authenticates with the ident method as long as that role shares the same name as the Linux user profile issuing the pgloader command. However, to keep this process as clear as possible, this tutorial describes setting up a different PostgreSQL role that authenticates with a password rather than with the ident method.

      Run the following command on your Postgres server to create a new role. Note the -P flag, which tells createuser to prompt you to enter a password for the new role:

      • sudo -u postgres createuser --interactive -P

      You may first be prompted for your sudo password. The script will then prompt you to enter a name for the new role. In this guide, we'll call this role pgloader_pg:

      Output

      Enter name of role to add: pgloader_pg

      Following that, createuser will prompt you to enter and confirm a password for this role. Be sure to take note of this password, as you'll need it to perform the migration in Step 5:

      Output

      Enter password for new role:
      Enter it again:

      Lastly, the script will ask you if the new role should be classified as a superuser. In PostgreSQL, connecting to the database with a superuser role allows you to circumvent all of the database's permissions checks, except for the right to log in. Because of this, the superuser privilege should not be used lightly, and the PostgreSQL documentation recommends that you do most of your database work as a non-superuser role. However, because pgLoader needs broad privileges to access and load data into tables, you can safely grant this new role superuser privileges. Do so by typing y and then pressing ENTER:

      Output

      . . . Shall the new role be a superuser? (y/n) y

      PostgreSQL comes with another useful script that allows you to create a database from the command line. Since pgLoader also needs a target database into which it can load the source data, run the following command to create one. We'll name this database new_db but feel free to modify that if you like:

      • sudo -u postgres createdb new_db

      If there aren't any errors, this command will complete without any output.

      Now that you have a dedicated PostgreSQL user and an empty database into which you can load your MySQL data, there are just a few more changes you'll need to make before performing a migration. You'll need to create a dedicated MySQL user with access to your source database and add your client-side certificates to Ubuntu's trusted certificate store.

      Step 4 — Creating a Dedicated User in MySQL and Managing Certificates

      Protecting data from snoopers is one of the most important parts of any database administrator's job. Migrating data from one machine to another opens up an opportunity for malicious actors to sniff the packets traveling over the network connection if it isn't encrypted. In this step, you will create a dedicated MySQL user which pgLoader will use to perform the migration over an SSL connection.

      Begin by opening up your MySQL prompt:
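
      • mysql -u root -p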

      From the MySQL prompt, use the following CREATE USER command to create a new MySQL user. We will name this user pgloader_my. Because this user will only access MySQL from your PostgreSQL server, be sure to replace your_postgres_server_ip with the public IP address of your PostgreSQL server. Additionally, replace password with a secure password or passphrase:

      • CREATE USER 'pgloader_my'@'your_postgres_server_ip' IDENTIFIED BY 'password' REQUIRE SSL;

      Note the REQUIRE SSL clause at the end of this command. This will restrict the pgloader_my user to only access the database through a secure SSL connection.

      Next, grant the pgloader_my user access to the source database and all of its tables. Here, we'll specify the database we created in the optional Step 1, but if you have your own database you'd like to migrate, use its name in place of source_db:

      • GRANT ALL ON source_db.* TO 'pgloader_my'@'your_postgres_server_ip';

      Then run the FLUSH PRIVILEGES command to reload the grant tables, enabling the privilege changes:
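
      • FLUSH PRIVILEGES;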

      After this, you can close the MySQL prompt:
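
      • exit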

      Now go back to your Postgres server terminal and attempt to log in to the MySQL server as the new pgloader_my user. If you followed the prerequisite guide on configuring SSL/TLS for MySQL then you will already have mysql-client installed on your PostgreSQL server and you should be able to connect with the following command:

      • mysql -u pgloader_my -p -h your_mysql_server_ip

      If the command is successful, you will see the MySQL prompt.

      After confirming that your pgloader_my user can successfully connect, go ahead and close the prompt:
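
      • exit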

      At this point, you have a dedicated MySQL user that can access the source database from your Postgres machine. However, if you were to try to migrate your MySQL database using SSL the attempt would fail.

      The reason for this is that pgLoader isn't able to read MySQL's configuration files, and thus doesn't know where to look for the CA certificate or client certificate that you copied to your PostgreSQL server in the prerequisite SSL/TLS configuration guide. Rather than ignoring SSL requirements, though, pgLoader requires the use of trusted certificates in cases where SSL is needed to connect to MySQL. Accordingly, you can resolve this issue by adding the ca.pem and client-cert.pem files to Ubuntu's trusted certificate store.

      To do this, copy over the ca.pem and client-cert.pem files to the /usr/local/share/ca-certificates/ directory. Note that you must also rename these files so they have the .crt file extension. If you don't rename them, your system will not be able to recognize that you've added these new certificates:

      • sudo cp ~/client-ssl/ca.pem /usr/local/share/ca-certificates/ca.pem.crt
      • sudo cp ~/client-ssl/client-cert.pem /usr/local/share/ca-certificates/client-cert.pem.crt

      Following this, run the update-ca-certificates command. This program looks for certificates within /usr/local/share/ca-certificates, adds any new ones to the /etc/ssl/certs/ directory, and generates a list of trusted SSL certificates — ca-certificates.crt — based on the contents of the /etc/ssl/certs/ directory:

      • sudo update-ca-certificates

      Output

      Updating certificates in /etc/ssl/certs...
      2 added, 0 removed; done.
      Running hooks in /etc/ca-certificates/update.d...
      done.

      With that, you're all set to migrate your MySQL database to PostgreSQL.

      Step 5 — Migrating the Data

      Now that you've configured remote access from your PostgreSQL server to your MySQL server, you're ready to begin the migration.

      Note: It's important to back up your database before taking any action that could impact the integrity of your data. However, this isn't strictly necessary when performing a migration with pgLoader, since it only reads from the source database and copies the data into the target; it doesn't delete or modify the original MySQL data.

      That said, if you're feeling cautious and would like to back up your data before migrating it, you can do so with the mysqldump utility. See the official MySQL documentation for details.

      pgLoader allows users to migrate an entire database with a single command. For a migration from a MySQL database to a PostgreSQL database on a separate server, the command would have the following syntax:

      • pgloader mysql://mysql_username:password@mysql_server_ip/source_database_name?option_1=value&option_n=value postgresql://postgresql_role_name:password@postgresql_server_ip/target_database_name?option_1=value&option_n=value

      This includes the pgloader command and two connection strings, the first for the source database and the second for the target database. Both of these connection strings begin by declaring what type of DBMS the connection string points to, followed by the username and password that have access to the database (separated by a colon), the host address of the server where the database is installed, the name of the database pgLoader should target, and various options that affect pgLoader's behavior.

      Using the parameters defined earlier in this tutorial, you can migrate your MySQL database using a command with the following structure. Be sure to replace any highlighted values to align with your own setup:

      • pgloader mysql://pgloader_my:mysql_password@mysql_server_ip/source_db?useSSL=true postgresql://pgloader_pg:postgresql_password@localhost/new_db

      Note that this command includes the useSSL option in the MySQL connection string. By setting this option to true, pgLoader will connect to MySQL over SSL. This is necessary, as you've configured your MySQL server to only accept secure connections.

      If this command is successful, you will see an output table describing how the migration went:

      Output

                   table name     errors       rows      bytes      total time
      -----------------------  ---------  ---------  ---------  --------------
              fetch meta data          0          2                     0.111s
               Create Schemas          0          0                     0.001s
             Create SQL Types          0          0                     0.005s
                Create tables          0          2                     0.017s
               Set Table OIDs          0          1                     0.010s
      -----------------------  ---------  ---------  ---------  --------------
       source_db.sample_table          0          5     0.2 kB          0.048s
      -----------------------  ---------  ---------  ---------  --------------
      COPY Threads Completion          0          4                     0.052s
       Index Build Completion          0          1                     0.011s
               Create Indexes          0          1                     0.006s
              Reset Sequences          0          0                     0.014s
                 Primary Keys          0          1                     0.001s
          Create Foreign Keys          0          0                     0.000s
              Create Triggers          0          0                     0.000s
             Install Comments          0          0                     0.000s
      -----------------------  ---------  ---------  ---------  --------------
            Total import time          ✓          5     0.2 kB          0.084s

      To check that the data was migrated correctly, open up the PostgreSQL prompt:
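
      • sudo -u postgres psql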

      From there, connect to the database into which you loaded the data:
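
      • \c new_db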

      Then run the following query to test whether the migrated data is stored in your PostgreSQL database:

      • SELECT * FROM source_db.sample_table;

      Note: Notice the FROM clause in this query specifying the sample_table held within the source_db schema:

      • . . . FROM source_db.sample_table;

      This is called a qualified name. You could go further and specify the fully qualified name by including the database's name as well as those of the schema and table:

      • . . . FROM new_db.source_db.sample_table;

      When you run queries in a PostgreSQL database, you don't need to be this specific if the table is held within the default public schema. The reason you must do so here is that when pgLoader loads data into Postgres, it creates and targets a new schema named after the original database, in this case source_db. This is pgLoader's default behavior for MySQL to PostgreSQL migrations. However, you can use a load file to instruct pgLoader to change the table's schema to public once it's done loading data. See the next step for an example of how to do this.
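
      Alternatively, if you have already run the migration and simply want to move the table into the public schema by hand, you can do so from the psql prompt. The following is a sketch using standard PostgreSQL DDL, assuming the table exists under the source_db schema:

      • ALTER TABLE source_db.sample_table SET SCHEMA public;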

      If the data was indeed loaded correctly, you will see the following table in the query's output:

      Output

       employee_id | first_name |  last_name  | start_date |   salary
      -------------+------------+-------------+------------+------------
                 1 | Elizabeth  | Cotten      | 2007-11-11 | $105433.18
                 2 | Yanka      | Dyagileva   | 2017-10-30 | $107540.67
                 3 | Lee        | Dorsey      | 2013-06-04 | $118024.04
                 4 | Kasey      | Chambers    | 2010-08-18 | $116456.98
                 5 | Bram       | Tchaikovsky | 2018-09-16 | $61989.50
      (5 rows)

      To close the Postgres prompt, run the following command:
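
      • \q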

      Now that we've gone over how to migrate a MySQL database over a network and load it into a PostgreSQL database, we will go over a few other common migration scenarios in which pgLoader can be useful.

      Step 6 — Exploring Other Migration Options

      pgLoader is a highly flexible tool that can be useful in a wide variety of situations. Here, we'll take a quick look at a few other ways you can use pgLoader to migrate a MySQL database to PostgreSQL.

      Migrating with a pgLoader Load File

      In the context of pgLoader, a load file, or command file, is a file that tells pgLoader how to perform a migration. This file can include commands and options that affect pgLoader's behavior, giving you much finer control over how your data is loaded into PostgreSQL and allowing you to perform complex migrations.

      pgLoader's documentation provides comprehensive instructions on how to use and extend these files to support a number of migration types, so here we will work through a comparatively rudimentary example. We will perform the same migration we ran in Step 5, but will also include an ALTER SCHEMA command to rename the source_db schema in the new_db database to public.

      To begin, create a new load file on the Postgres server using your preferred text editor:
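
      • nano pgload_test.load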

      Then add the following content, making sure to update the highlighted values to align with your own configuration:

      pgload_test.load

      LOAD DATABASE
           FROM      mysql://pgloader_my:mysql_password@mysql_server_ip/source_db?useSSL=true
           INTO pgsql://pgloader_pg:postgresql_password@localhost/new_db
      
       WITH include drop, create tables
      
      ALTER SCHEMA 'source_db' RENAME TO 'public'
      ;
      

      Here is what each of these clauses do:

      • LOAD DATABASE: This line instructs pgLoader to load data from a separate database, rather than a file or data archive.
      • FROM: This clause specifies the source database. In this case, it points to the connection string for the MySQL database we created in Step 1.
      • INTO: Likewise, this line specifies the PostgreSQL database into which pgLoader should load the data.
      • WITH: This clause allows you to define specific behaviors for pgLoader. You can find the full list of WITH options that are compatible with MySQL migrations in the pgLoader documentation. In this example we only include two options:
        • include drop: When this option is used, pgLoader will drop any tables in the target PostgreSQL database that also appear in the source MySQL database. If you use this option when migrating data to an existing PostgreSQL database, you should back up the entire database to avoid losing any data.
        • create tables: This option tells pgLoader to create new tables in the target PostgreSQL database based on the metadata held in the MySQL database. If the opposite option, create no tables, is used, then the target tables must already exist in the target Postgres database prior to the migration.
      • ALTER SCHEMA: Following the WITH clause, you can add specific SQL commands like this to instruct pgLoader to perform additional actions. Here, we instruct pgLoader to rename the source_db schema in the new Postgres database to public, but only after it has created the schema. Note that you can also nest such commands within other clauses, such as BEFORE LOAD DO, to instruct pgLoader to execute those commands at specific points in the migration process.

      This is a demonstrative example of what you can include in a load file to modify pgLoader's behavior. The complete list of clauses that one can add to a load file and what they do can be found in the official pgLoader documentation.

      Save and close the load file after you've finished adding this content. To use it, include the name of the file as an argument to the pgloader command:

      • pgloader pgload_test.load

      To test that the migration was successful, open up the Postgres prompt:
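
      • sudo -u postgres psql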

      Then connect to the database:
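
      • \c new_db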

      And run the following query:

      • SELECT * FROM sample_table;

      Output

       employee_id | first_name |  last_name  | start_date |   salary
      -------------+------------+-------------+------------+------------
                 1 | Elizabeth  | Cotten      | 2007-11-11 | $105433.18
                 2 | Yanka      | Dyagileva   | 2017-10-30 | $107540.67
                 3 | Lee        | Dorsey      | 2013-06-04 | $118024.04
                 4 | Kasey      | Chambers    | 2010-08-18 | $116456.98
                 5 | Bram       | Tchaikovsky | 2018-09-16 | $61989.50
      (5 rows)

      This output confirms that pgLoader migrated the data successfully, and also that the ALTER SCHEMA command we added to the load file worked as expected, since we didn't need to specify the source_db schema in the query to view the data.

      Note that if you plan to use a load file to migrate data held on one database to another located on a separate machine, you will still need to adjust any relevant networking and firewall rules in order for the migration to be successful.

      Migrating a MySQL Database to PostgreSQL Locally

      You can use pgLoader to migrate a MySQL database to a PostgreSQL database housed on the same machine. All you need to do is run the migration command from a Linux user profile with access to the root MySQL user:

      • pgloader mysql://root@localhost/source_db pgsql://sammy:postgresql_password@localhost/target_db

      Performing a local migration like this means you don't have to make any changes to MySQL's default networking configuration or your system's firewall rules.

      Migrating from a CSV file

      You can also load a PostgreSQL database with data from a CSV file.

      Assuming you have a CSV file of data named load.csv, the command to load it into a Postgres database might look like this:

      • pgloader load.csv pgsql://sammy:password@localhost/target_db

      Because the CSV format is not fully standardized, there's a chance that you will run into issues when loading data directly from a CSV file in this manner. Fortunately, you can correct for such irregularities by passing additional options on the pgloader command line or by specifying them in a load file. See the pgLoader documentation on the subject for more details.
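
      For instance, pgLoader's command line accepts options such as --type, --field, and --with that mirror the clauses of a load file. The following is only a rough sketch for a hypothetical semicolon-delimited load.csv with a header row, loading into a placeholder table named csv_table; both the column list and the table name are illustrative, and you should confirm the exact option strings against the pgLoader documentation for your version:

      • pgloader --type csv --field "employee_id,first_name,last_name,start_date,salary" --with "skip header = 1" --with "fields terminated by ';'" ./load.csv pgsql://sammy:password@localhost/target_db?csv_table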

      Migrating to a Managed PostgreSQL Database

      It's also possible to perform a migration from a self-managed database to a managed PostgreSQL database. To illustrate how this kind of migration could look, we will use the MySQL server and a DigitalOcean Managed PostgreSQL Database. We'll also use the sample database we created in Step 1, but if you skipped that step and have your own database you'd like to migrate, you can point to that one instead.

      Note: For instructions on how to set up a DigitalOcean Managed Database, please refer to our Managed Database Quickstart guide.

      For this migration, we won't need pgLoader's useSSL option, since that option only applies to connections to a remote MySQL database and here pgLoader will read from the MySQL database running locally on the same server. However, we will use the sslmode=require option when connecting to the DigitalOcean Managed PostgreSQL database, which will ensure your data stays protected in transit.

      Because we're not using the useSSL option this time around, you can use apt to install pgLoader along with the postgresql-client package, which will allow you to access the Managed PostgreSQL Database from your MySQL server:

      • sudo apt install pgloader postgresql-client

      Following that, you can run the pgloader command to migrate the database. To do this, you'll need the connection string for the Managed Database.

      For DigitalOcean Managed Databases, you can copy the connection string from the Cloud Control Panel. First, click Databases in the left-hand sidebar menu and select the database to which you want to migrate the data. Then scroll down to the Connection Details section. Click on the drop down menu and select Connection string. Then, click the Copy button to copy the string to your clipboard and paste it into the following migration command, replacing the example PostgreSQL connection string shown here. This will migrate your MySQL database into the defaultdb PostgreSQL database as the doadmin PostgreSQL role:

      • pgloader mysql://root:password@localhost/source_db postgres://doadmin:password@db_host/defaultdb?sslmode=require

      Following this, you can use the same connection string as an argument to psql to connect to the managed PostgreSQL database and confirm that the migration was successful:

      • psql postgres://doadmin:password@db_host/defaultdb?sslmode=require

      Then, run the following query to check that pgLoader correctly migrated the data:

      • SELECT * FROM source_db.sample_table;

      Output

       employee_id | first_name |  last_name  | start_date |   salary
      -------------+------------+-------------+------------+------------
                 1 | Elizabeth  | Cotten      | 2007-11-11 | $105433.18
                 2 | Yanka      | Dyagileva   | 2017-10-30 | $107540.67
                 3 | Lee        | Dorsey      | 2013-06-04 | $118024.04
                 4 | Kasey      | Chambers    | 2010-08-18 | $116456.98
                 5 | Bram       | Tchaikovsky | 2018-09-16 | $61989.50
      (5 rows)

      This confirms that pgLoader successfully migrated your MySQL database to your managed PostgreSQL instance.

      Conclusion

      pgLoader is a flexible tool that can perform a database migration in a single command. With a few configuration tweaks, it can migrate an entire database from one physical machine to another using a secure SSL/TLS connection. Our hope is that by following this tutorial, you will have gained a clearer understanding of pgLoader's capabilities and potential use cases.

      After migrating your data over to PostgreSQL, you may find the following tutorials to be of interest:


