
      Managed Databases Connection Pools and PostgreSQL Benchmarking Using pgbench


      Introduction

      DigitalOcean Managed Databases allows you to scale your PostgreSQL database using several methods. One such method is a built-in connection pooler that allows you to efficiently handle large numbers of client connections and reduce the CPU and memory footprint of these open connections. By using a connection pool and sharing a fixed set of recyclable connections, you can handle significantly more concurrent client connections, and squeeze extra performance out of your PostgreSQL database.

In this tutorial we’ll use pgbench, PostgreSQL’s built-in benchmarking tool, to run load tests on a DigitalOcean Managed PostgreSQL Database. We’ll dive into connection pools, describe how they work, and show how to create one using the Cloud Control Panel. Finally, using results from the pgbench tests, we’ll demonstrate how using a connection pool can be an inexpensive method of increasing database throughput.

      Prerequisites

      To complete this tutorial, you’ll need:

      • A DigitalOcean Managed PostgreSQL Database cluster. To learn how to provision and configure a DigitalOcean PostgreSQL cluster, consult the Managed Database product documentation.
• A client machine with PostgreSQL installed. By default, your PostgreSQL installation will contain the pgbench benchmarking utility and the psql client, both of which we’ll use in this guide. Consult How To Install and Use PostgreSQL on Ubuntu 18.04 to learn how to install PostgreSQL. If you’re not running Ubuntu on your client machine, you can use the version finder to find the appropriate tutorial.

      Once you have a DigitalOcean PostgreSQL cluster up and running and a client machine with pgbench installed, you’re ready to begin with this guide.

Step 1 — Creating and Initializing the benchmark Database

      Before we create a connection pool for our database, we’ll first create the benchmark database on our PostgreSQL cluster and populate it with some dummy data on which pgbench will run its tests. The pgbench utility repeatedly runs a series of five SQL commands (consisting of SELECT, UPDATE, and INSERT queries) in a transaction, using multiple threads and clients, and calculates a useful performance metric called Transactions per Second (TPS). TPS is a measure of database throughput, counting the number of atomic transactions processed by the database in one second. To learn more about the specific commands executed by pgbench, consult What is the “Transaction” Actually Performed in pgbench? from the official pgbench documentation.
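For reference, the default transaction that pgbench runs is a TPC-B-style sequence along these lines (paraphrased from the pgbench documentation; :aid, :tid, :bid, and :delta are variables that pgbench fills in with random values for each transaction):

  BEGIN;
  UPDATE pgbench_accounts SET abalance = abalance + :delta WHERE aid = :aid;
  SELECT abalance FROM pgbench_accounts WHERE aid = :aid;
  UPDATE pgbench_tellers SET tbalance = tbalance + :delta WHERE tid = :tid;
  UPDATE pgbench_branches SET bbalance = bbalance + :delta WHERE bid = :bid;
  INSERT INTO pgbench_history (tid, bid, aid, delta, mtime) VALUES (:tid, :bid, :aid, :delta, CURRENT_TIMESTAMP);
  END;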

      Let’s begin by connecting to our PostgreSQL cluster and creating the benchmark database.

      First, retrieve your cluster’s Connection Details by navigating to Databases and locating your PostgreSQL cluster. Click into your cluster. You should see a cluster overview page containing the following Connection Details box:

      PostgreSQL Cluster Connection Details

      From this, we can parse the following config variables:

      • Admin user: doadmin
      • Admin password: your_password
      • Cluster endpoint: dbaas-test-do-user-3587522-0.db.ondigitalocean.com
      • Connection port: 25060
      • Database to connect to: defaultdb
      • SSL Mode: require (use an SSL-encrypted connection for increased security)

      Take note of these parameters, as you’ll need them when using both the psql client and pgbench tool.

      Click on the dropdown above this box and select Connection String. We’ll copy this string and pass it in to psql to connect to this PostgreSQL node.

      Connect to your cluster using psql and the connection string you just copied:

      • psql postgresql://doadmin:your_password@your_cluster_endpoint:25060/defaultdb?sslmode=require

      You should see the following PostgreSQL client prompt, indicating that you’ve connected to your PostgreSQL cluster successfully:

      Output

psql (10.6 (Ubuntu 10.6-0ubuntu0.18.04.1))
SSL connection (protocol: TLSv1.2, cipher: ECDHE-RSA-AES256-GCM-SHA384, bits: 256, compression: off)
Type "help" for help.

defaultdb=>

      From here, create the benchmark database:

      • CREATE DATABASE benchmark;

      You should see the following output:

      Output

      CREATE DATABASE

Now, disconnect from the cluster:

• \q

      Before we run the pgbench tests, we need to populate this benchmark database with some tables and dummy data required to run the tests.

      To do this, we’ll run pgbench with the following flags:

      • -h: The PostgreSQL cluster endpoint
      • -p: The PostgreSQL cluster connection port
      • -U: The database username
      • -i: Indicates that we'd like to initialize the benchmark database with benchmarking tables and their dummy data.
• -s: Sets a scale factor of 150, which will multiply table sizes by 150. The default scale factor of 1 results in tables of the following sizes:

        table                   # of rows
        ---------------------------------
        pgbench_branches        1
        pgbench_tellers         10
        pgbench_accounts        100000
        pgbench_history         0
        

        Using a scale factor of 150, the pgbench_accounts table will contain 15,000,000 rows.

        Note: To avoid excessive blocked transactions, be sure to set the scale factor to a value at least as large as the number of concurrent clients you intend to test with. In this tutorial we'll test with 150 clients at most, so we set -s to 150 here. To learn more, consult these recommended practices from the official pgbench documentation.

      Run the complete pgbench command:

      • pgbench -h your_cluster_endpoint -p 25060 -U doadmin -i -s 150 benchmark

      After running this command, you will be prompted to enter the password for the database user you specified. Enter the password, and hit ENTER.

      You should see the following output:

      Output

dropping old tables...
NOTICE:  table "pgbench_accounts" does not exist, skipping
NOTICE:  table "pgbench_branches" does not exist, skipping
NOTICE:  table "pgbench_history" does not exist, skipping
NOTICE:  table "pgbench_tellers" does not exist, skipping
creating tables...
generating data...
100000 of 15000000 tuples (0%) done (elapsed 0.19 s, remaining 27.93 s)
200000 of 15000000 tuples (1%) done (elapsed 0.85 s, remaining 62.62 s)
300000 of 15000000 tuples (2%) done (elapsed 1.21 s, remaining 59.23 s)
400000 of 15000000 tuples (2%) done (elapsed 1.63 s, remaining 59.44 s)
500000 of 15000000 tuples (3%) done (elapsed 2.05 s, remaining 59.51 s)
. . .
14700000 of 15000000 tuples (98%) done (elapsed 70.87 s, remaining 1.45 s)
14800000 of 15000000 tuples (98%) done (elapsed 71.39 s, remaining 0.96 s)
14900000 of 15000000 tuples (99%) done (elapsed 71.91 s, remaining 0.48 s)
15000000 of 15000000 tuples (100%) done (elapsed 72.42 s, remaining 0.00 s)
vacuuming...
creating primary keys...
done.

      At this point, we've created a benchmarking database, populated with the tables and data required to run the pgbench tests. We can now move on to running a baseline test which we'll use to compare performance before and after connection pooling is enabled.

      Step 2 — Running a Baseline pgbench Test

      Before we run our first benchmark, it's worth diving into what we're trying to optimize with connection pools.

      Typically when a client connects to a PostgreSQL database, the main PostgreSQL OS process forks itself into a child process corresponding to this new connection. When there are only a few connections, this rarely presents an issue. However, as clients and connections scale, the CPU and memory overhead of creating and maintaining these connections begins to add up, especially if the application in question does not efficiently use database connections. In addition, the max_connections PostgreSQL setting may limit the number of client connections allowed, resulting in additional connections being refused or dropped.
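If you'd like to see these limits on your own cluster, both are visible from a psql session. SHOW and the pg_stat_activity view are standard PostgreSQL features, so the following queries should work against any node (the numbers you see will depend on your cluster and its current activity):

• SHOW max_connections;
• SELECT count(*) FROM pg_stat_activity;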

      A connection pool keeps open a fixed number of database connections, the pool size, which it then uses to distribute and execute client requests. This means that you can accommodate far more simultaneous connections, efficiently deal with idle or stagnant clients, as well as queue up client requests during traffic spikes instead of rejecting them. By recycling connections, you can more efficiently use your machine's resources in an environment where there is a heavy connection volume, and squeeze extra performance out of your database.

      A connection pool can be implemented either on the application side or as middleware between the database and your application. The Managed Databases connection pooler is built on top of pgBouncer, a lightweight, open-source middleware connection pooler for PostgreSQL. Its interface is available via the Cloud Control Panel UI.

      Navigate to Databases in the Control Panel, and then click into your PostgreSQL cluster. From here, click into Connection Pools. Then, click on Create a Connection Pool. You should see the following configuration window:

      Connection Pools Config Window

      Here, you can configure the following fields:

      • Pool Name: A unique name for your connection pool
      • Database: The database for which you'd like to pool connections
      • User: The PostgreSQL user the connection pool will authenticate as
      • Mode: One of Session, Transaction, or Statement. This option controls how long the pool assigns a backend connection to a client.
        • Session: The client holds on to the connection until it explicitly disconnects.
        • Transaction: The client obtains the connection until it completes a transaction, after which the connection is returned to the pool.
        • Statement: The pool aggressively recycles connections after each client statement. In statement mode, multi-statement transactions are not allowed. To learn more, consult the Connection Pools product documentation.
      • Pool Size: The number of connections the connection pool will keep open between itself and the database.

Before we create a connection pool, we'll run a baseline test against which we can compare database performance once connection pooling is enabled.

      In this tutorial, we'll use a 4 GB RAM, 2 vCPU, 80 GB Disk, primary node only Managed Database setup. You can scale the benchmark test parameters in this section according to your PostgreSQL cluster specs.

      DigitalOcean Managed Database clusters have the PostgreSQL max_connections parameter preset to 25 connections per 1 GB RAM. A 4 GB RAM PostgreSQL node therefore has max_connections set to 100. In addition, for all clusters, 3 connections are reserved for maintenance. So for this 4 GB RAM PostgreSQL cluster, 97 connections are available for connection pooling.

      With this in mind, let's run our first baseline pgbench test.

      Log in to your client machine. We’ll run pgbench, specifying the database endpoint, port and user as usual. In addition, we’ll provide the following flags:

      • -c: The number of concurrent clients or database sessions to simulate. We set this to 50 so as to simulate a number of concurrent connections smaller than the max_connections parameter for our PostgreSQL cluster.
      • -j: The number of worker threads pgbench will use to run the benchmark. If you're using a multi-CPU machine, you can tune this upwards to distribute clients across threads. On a two-core machine, we set this to 2.
      • -P: Display progress and metrics every 60 seconds.
      • -T: Run the benchmark for 600 seconds (10 minutes). To produce consistent, reproducible results, it's important that you run the benchmark for several minutes, or through one checkpoint cycle.

      We’ll also specify that we'd like to run the benchmark against the benchmark database we created and populated earlier.

      Run the following complete pgbench command:

      • pgbench -h your_db_endpoint -p 25060 -U doadmin -c 50 -j 2 -P 60 -T 600 benchmark

      Hit ENTER and then type in the password for the doadmin user to begin running the test. You should see output similar to the following (results will depend on the specs of your PostgreSQL cluster):

      Output

starting vacuum...end.
progress: 60.0 s, 157.4 tps, lat 282.988 ms stddev 40.261
progress: 120.0 s, 176.2 tps, lat 283.726 ms stddev 38.722
progress: 180.0 s, 167.4 tps, lat 298.663 ms stddev 238.124
progress: 240.0 s, 178.9 tps, lat 279.564 ms stddev 43.619
progress: 300.0 s, 178.5 tps, lat 280.016 ms stddev 43.235
progress: 360.0 s, 178.8 tps, lat 279.737 ms stddev 43.307
progress: 420.0 s, 179.3 tps, lat 278.837 ms stddev 43.783
progress: 480.0 s, 178.5 tps, lat 280.203 ms stddev 43.921
progress: 540.0 s, 180.0 tps, lat 277.816 ms stddev 43.742
progress: 600.0 s, 178.5 tps, lat 280.044 ms stddev 43.705
transaction type: <builtin: TPC-B (sort of)>
scaling factor: 150
query mode: simple
number of clients: 50
number of threads: 2
duration: 600 s
number of transactions actually processed: 105256
latency average = 282.039 ms
latency stddev = 84.244 ms
tps = 175.329321 (including connections establishing)
tps = 175.404174 (excluding connections establishing)

      Here, we observed that over a 10 minute run with 50 concurrent sessions, we processed 105,256 transactions with a throughput of roughly 175 transactions per second.

      Now, let's run the same test, this time using 150 concurrent clients, a value that is higher than max_connections for this database, to synthetically simulate a mass influx of client connections:

      • pgbench -h your_db_endpoint -p 25060 -U doadmin -c 150 -j 2 -P 60 -T 600 benchmark

      You should see output similar to the following:

      Output

starting vacuum...end.
connection to database "pgbench" failed:
FATAL:  remaining connection slots are reserved for non-replication superuser connections
progress: 60.0 s, 182.6 tps, lat 280.069 ms stddev 42.009
progress: 120.0 s, 253.8 tps, lat 295.612 ms stddev 237.448
progress: 180.0 s, 271.3 tps, lat 276.411 ms stddev 40.643
progress: 240.0 s, 273.0 tps, lat 274.653 ms stddev 40.942
progress: 300.0 s, 272.8 tps, lat 274.977 ms stddev 41.660
progress: 360.0 s, 250.0 tps, lat 300.033 ms stddev 282.712
progress: 420.0 s, 272.1 tps, lat 275.614 ms stddev 42.901
progress: 480.0 s, 261.1 tps, lat 287.226 ms stddev 112.499
progress: 540.0 s, 272.5 tps, lat 275.309 ms stddev 41.740
progress: 600.0 s, 271.2 tps, lat 276.585 ms stddev 41.221
transaction type: <builtin: TPC-B (sort of)>
scaling factor: 150
query mode: simple
number of clients: 150
number of threads: 2
duration: 600 s
number of transactions actually processed: 154892
latency average = 281.421 ms
latency stddev = 125.929 ms
tps = 257.994941 (including connections establishing)
tps = 258.049251 (excluding connections establishing)

      Note the FATAL error, indicating that pgbench hit the 100 connection limit threshold set by max_connections, resulting in a refused connection. The test was still able to complete, with a TPS of roughly 257.

      At this point we can investigate how a connection pool could potentially improve our database's throughput.

      Step 3 — Creating and Testing a Connection Pool

      In this step we'll create a connection pool and rerun the previous pgbench test to see if we can improve our database's throughput.

      In general, the max_connections setting and connection pool parameters are tuned in tandem to max out the database's load. However, because max_connections is abstracted away from the user in DigitalOcean Managed Databases, our main levers here are the connection pool Mode and Size settings.

      To begin, let's create a connection pool in Transaction mode that keeps open all the available backend connections.

      Navigate to Databases in the Control Panel, and then click into your PostgreSQL cluster. From here, click into Connection Pools. Then, click on Create a Connection Pool.

      In the configuration window that appears, fill in the following values:

      Connection Pool Configuration Values

      Here we name our connection pool test-pool, and use it with the benchmark database. Our database user is doadmin and we set the connection pool to Transaction mode. Recall from earlier that for a managed database cluster with 4GB of RAM, there are 97 available database connections. Accordingly, configure the pool to keep open 97 database connections.

      When you're done, hit Create Pool.

      You should now see this pool in the Control Panel:

      Connection Pool in Control Panel

Grab its URI by clicking Connection Details. It should look something like the following:

      postgres://doadmin:password@pool_endpoint:pool_port/test-pool?sslmode=require
      

      You should notice a different port here, and potentially a different endpoint and database name, corresponding to the pool name test-pool.
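If you'd like to sanity-check the pool before benchmarking against it, you can connect to it with psql just as you connected to the database itself, substituting the endpoint, port, and password from your own pool's Connection Details (the values below are placeholders):

• psql postgresql://doadmin:your_password@pool_endpoint:pool_port/test-pool?sslmode=require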

      Now that we've created the test-pool connection pool, we can rerun the pgbench test we ran above.

      Rerun pgbench

      From your client machine, run the following pgbench command (with 150 concurrent clients), making sure to substitute the highlighted values with those in your connection pool URI:

      • pgbench -h pool_endpoint -p pool_port -U doadmin -c 150 -j 2 -P 60 -T 600 test-pool

      Here we once again use 150 concurrent clients, run the test across 2 threads, print progress every 60 seconds, and run the test for 600 seconds. We set the database name to test-pool, the name of the connection pool.

      Once the test completes, you should see output similar to the following (note that these results will vary depending on the specs of your database node):

      Output

starting vacuum...end.
progress: 60.0 s, 240.0 tps, lat 425.251 ms stddev 59.773
progress: 120.0 s, 350.0 tps, lat 428.647 ms stddev 57.084
progress: 180.0 s, 340.3 tps, lat 440.680 ms stddev 313.631
progress: 240.0 s, 364.9 tps, lat 411.083 ms stddev 61.106
progress: 300.0 s, 366.5 tps, lat 409.367 ms stddev 60.165
progress: 360.0 s, 362.5 tps, lat 413.750 ms stddev 59.005
progress: 420.0 s, 359.5 tps, lat 417.292 ms stddev 60.395
progress: 480.0 s, 363.8 tps, lat 412.130 ms stddev 60.361
progress: 540.0 s, 351.6 tps, lat 426.661 ms stddev 62.960
progress: 600.0 s, 344.5 tps, lat 435.516 ms stddev 65.182
transaction type: <builtin: TPC-B (sort of)>
scaling factor: 150
query mode: simple
number of clients: 150
number of threads: 2
duration: 600 s
number of transactions actually processed: 206768
latency average = 421.719 ms
latency stddev = 114.676 ms
tps = 344.240797 (including connections establishing)
tps = 344.385646 (excluding connections establishing)

      Notice here that we were able to increase our database's throughput from 257 TPS to 344 TPS with 150 concurrent connections (an increase of 33%), and did not run up against the max_connections limit we previously hit without a connection pool. By placing a connection pool in front of the database, we can avoid dropped connections and significantly increase database throughput in an environment with a large number of simultaneous connections.

      If you run this same test, but with a -c value of 50 (specifying a smaller number of clients), the gains from using a connection pool become much less evident:

      Output

starting vacuum...end.
progress: 60.0 s, 154.0 tps, lat 290.592 ms stddev 35.530
progress: 120.0 s, 162.7 tps, lat 307.168 ms stddev 241.003
progress: 180.0 s, 172.0 tps, lat 290.678 ms stddev 36.225
progress: 240.0 s, 172.4 tps, lat 290.169 ms stddev 37.603
progress: 300.0 s, 177.8 tps, lat 281.214 ms stddev 35.365
progress: 360.0 s, 177.7 tps, lat 281.402 ms stddev 35.227
progress: 420.0 s, 174.5 tps, lat 286.404 ms stddev 34.797
progress: 480.0 s, 176.1 tps, lat 284.107 ms stddev 36.540
progress: 540.0 s, 173.1 tps, lat 288.771 ms stddev 38.059
progress: 600.0 s, 174.5 tps, lat 286.508 ms stddev 59.941
transaction type: <builtin: TPC-B (sort of)>
scaling factor: 150
query mode: simple
number of clients: 50
number of threads: 2
duration: 600 s
number of transactions actually processed: 102938
latency average = 288.509 ms
latency stddev = 83.503 ms
tps = 171.482966 (including connections establishing)
tps = 171.553434 (excluding connections establishing)

      Here we see that we were not able to increase throughput by using a connection pool. Our throughput went down to 171 TPS from 175 TPS.

      Although in this guide we use pgbench with its built-in benchmark data set, the best test for determining whether or not to use a connection pool is a benchmark load that accurately represents production load on your database, against production data. Creating custom benchmarking scripts and data is beyond the scope of this guide, but to learn more, consult the official pgbench documentation.
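As a starting point for that kind of testing, pgbench's -f flag lets you swap in SQL of your own in place of the built-in transaction. For example, if you saved a representative set of queries to a file called custom.sql (a hypothetical file name used here for illustration), you could run the same style of test against it:

• pgbench -h your_db_endpoint -p 25060 -U doadmin -c 50 -j 2 -P 60 -T 600 -f custom.sql benchmark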

      Note: The pool size setting is highly workload-specific. In this guide, we configured the connection pool to use all the available backend database connections. This was because throughout our benchmark, the database rarely reached full utilization (you can monitor database load from the Metrics tab in the Cloud Control Panel). Depending on your database's load, this may not be the optimal setting. If you notice that your database is constantly fully saturated, shrinking the connection pool may increase throughput and improve performance by queuing additional requests instead of trying to execute them all at the same time on an already loaded server.

      Conclusion

      DigitalOcean Managed Databases connection pooling is a powerful feature that can help you quickly squeeze extra performance out of your database. Along with other techniques like replication, caching, and sharding, connection pooling can help you scale your database layer to process an even greater volume of requests.

      In this guide we focused on a simplistic and synthetic testing scenario using PostgreSQL's built-in pgbench benchmarking tool and its default benchmark test. In any production scenario, you should run benchmarks against actual production data while simulating production load. This will allow you to tune your database for your particular usage pattern.

      Along with pgbench, other tools exist to benchmark and load your database. One such tool developed by Percona is sysbench-tpcc. Another is Apache's JMeter, which can load test databases as well as web applications.

      To learn more about DigitalOcean Managed Databases, consult the Managed Databases product documentation. To learn more about sharding, another useful scaling technique, consult Understanding Database Sharding.


      An Introduction to Queries in PostgreSQL


      Introduction

Databases are a key component of many websites and applications, and are at the core of how data is stored and exchanged across the internet. One of the most important aspects of database management is the practice of retrieving data from a database, whether it’s on an ad hoc basis or part of a process that’s been coded into an application. There are several ways to retrieve information from a database, but one of the most commonly used methods is to submit queries through the command line.

      In relational database management systems, a query is any command used to retrieve data from a table. In Structured Query Language (SQL), queries are almost always made using the SELECT statement.

      In this guide, we will discuss the basic syntax of SQL queries as well as some of the more commonly-employed functions and operators. We will also practice making SQL queries using some sample data in a PostgreSQL database.

      PostgreSQL, often shortened to “Postgres,” is a relational database management system with an object-oriented approach, meaning that information can be represented as objects or classes in PostgreSQL schemas. PostgreSQL aligns closely with standard SQL, although it also includes some features not found in other relational database systems.

      Prerequisites

      In general, the commands and concepts presented in this guide can be used on any Linux-based operating system running any SQL database software. However, it was written specifically with an Ubuntu 18.04 server running PostgreSQL in mind. To set this up, you will need the following:

      With this setup in place, we can begin the tutorial.

      Creating a Sample Database

      Before we can begin making queries in SQL, we will first create a database and a couple tables, then populate these tables with some sample data. This will allow you to gain some hands-on experience when you begin making queries later on.

      For the sample database we’ll use throughout this guide, imagine the following scenario:

      You and several of your friends all celebrate your birthdays with one another. On each occasion, the members of the group head to the local bowling alley, participate in a friendly tournament, and then everyone heads to your place where you prepare the birthday-person’s favorite meal.

      Now that this tradition has been going on for a while, you’ve decided to begin tracking the records from these tournaments. Also, to make planning dinners easier, you decide to create a record of your friends’ birthdays and their favorite entrees, sides, and desserts. Rather than keep this information in a physical ledger, you decide to exercise your database skills by recording it in a PostgreSQL database.

To begin, open up a PostgreSQL prompt as your postgres superuser:

• sudo -u postgres psql

Note: If you followed all the steps of the prerequisite tutorial on Installing PostgreSQL on Ubuntu 18.04, you may have configured a new role for your PostgreSQL installation. In this case, you can connect to the Postgres prompt with the following command, substituting sammy with your own username:

• sudo -u sammy psql

      Next, create the database by running:

      • CREATE DATABASE birthdays;

Then select this database by typing:

• \c birthdays

      Next, create two tables within this database. We'll use the first table to track your friends' records at the bowling alley. The following command will create a table called tourneys with columns for the name of each of your friends, the number of tournaments they've won (wins), their all-time best score, and what size bowling shoe they wear (size):

      • CREATE TABLE tourneys (
      • name varchar(30),
      • wins real,
      • best real,
      • size real
      • );

Once you run the CREATE TABLE command and define its columns, you’ll receive the following output:

      Output

      CREATE TABLE

      Populate the tourneys table with some sample data:

      • INSERT INTO tourneys (name, wins, best, size)
      • VALUES ('Dolly', '7', '245', '8.5'),
      • ('Etta', '4', '283', '9'),
      • ('Irma', '9', '266', '7'),
      • ('Barbara', '2', '197', '7.5'),
      • ('Gladys', '13', '273', '8');

      You’ll receive the following output:

      Output

      INSERT 0 5

      Following this, create another table within the same database which we'll use to store information about your friends' favorite birthday meals. The following command creates a table named dinners with columns for the name of each of your friends, their birthdate, their favorite entree, their preferred side dish, and their favorite dessert:

      • CREATE TABLE dinners (
      • name varchar(30),
      • birthdate date,
      • entree varchar(30),
      • side varchar(30),
      • dessert varchar(30)
      • );

      Similarly for this table, you’ll receive feedback verifying that the table was created:

      Output

      CREATE TABLE

      Populate this table with some sample data as well:

      • INSERT INTO dinners (name, birthdate, entree, side, dessert)
      • VALUES ('Dolly', '1946-01-19', 'steak', 'salad', 'cake'),
      • ('Etta', '1938-01-25', 'chicken', 'fries', 'ice cream'),
      • ('Irma', '1941-02-18', 'tofu', 'fries', 'cake'),
      • ('Barbara', '1948-12-25', 'tofu', 'salad', 'ice cream'),
      • ('Gladys', '1944-05-28', 'steak', 'fries', 'ice cream');

      Output

      INSERT 0 5
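At this point, both tables have been created and populated. If you'd like to confirm this, you can list the tables in the current database with psql's \dt meta-command, which should show both tourneys and dinners:

• \dt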

With that, you're done setting up your database. Next, we'll go over the basic command structure of SELECT queries.

      Understanding SELECT Statements

      As mentioned in the introduction, SQL queries almost always begin with the SELECT statement. SELECT is used in queries to specify which columns from a table should be returned in the result-set. Queries also almost always include FROM, which is used to specify which table the statement will query.

      Generally, SQL queries follow this syntax:

      • SELECT column_to_select FROM table_to_select WHERE certain_conditions_apply;

      By way of example, the following statement will return the entire name column from the dinners table:

      • SELECT name FROM dinners;

      Output

  name
---------
 Dolly
 Etta
 Irma
 Barbara
 Gladys
(5 rows)

      You can select multiple columns from the same table by separating their names with a comma, like this:

      • SELECT name, birthdate FROM dinners;

      Output

  name   | birthdate
---------+------------
 Dolly   | 1946-01-19
 Etta    | 1938-01-25
 Irma    | 1941-02-18
 Barbara | 1948-12-25
 Gladys  | 1944-05-28
(5 rows)

Instead of naming a specific column or set of columns, you can follow the SELECT operator with an asterisk (*) which serves as a placeholder representing all the columns in a table. The following command returns every column from the tourneys table:

• SELECT * FROM tourneys;

      Output

  name   | wins | best | size
---------+------+------+------
 Dolly   |    7 |  245 |  8.5
 Etta    |    4 |  283 |    9
 Irma    |    9 |  266 |    7
 Barbara |    2 |  197 |  7.5
 Gladys  |   13 |  273 |    8
(5 rows)

      WHERE is used in queries to filter records that meet a specified condition, and any rows that do not meet that condition are eliminated from the result. A WHERE clause typically follows this syntax:

      • . . . WHERE column_name comparison_operator value

      The comparison operator in a WHERE clause defines how the specified column should be compared against the value. Here are some common SQL comparison operators:

      Operator What it does
      = tests for equality
      != tests for inequality
      < tests for less-than
      > tests for greater-than
      <= tests for less-than or equal-to
      >= tests for greater-than or equal-to
      BETWEEN tests whether a value lies within a given range
      IN tests whether a row's value is contained in a set of specified values
      EXISTS tests whether rows exist, given the specified conditions
      LIKE tests whether a value matches a specified string
      IS NULL tests for NULL values
      IS NOT NULL tests for all values other than NULL

      For example, if you wanted to find Irma's shoe size, you could use the following query:

      • SELECT size FROM tourneys WHERE name = 'Irma';

      Output

 size
------
    7
(1 row)
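The other comparison operators from the table above work the same way. As one more illustrative example (a query of our own against the same sample data), BETWEEN finds every friend whose all-time best score falls within a given range:

• SELECT name, best FROM tourneys WHERE best BETWEEN 250 AND 300;

Based on the sample data, this would return Etta, Irma, and Gladys.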

      SQL allows the use of wildcard characters, and these are especially handy when used in WHERE clauses. Percentage signs (%) represent zero or more unknown characters, and underscores (_) represent a single unknown character. These are useful if you're trying to find a specific entry in a table, but aren't sure of what that entry is exactly. To illustrate, let's say that you've forgotten the favorite entree of a few of your friends, but you're certain this particular entree starts with a "t." You could find its name by running the following query:

      • SELECT entree FROM dinners WHERE entree LIKE 't%';

      Output

 entree
--------
 tofu
 tofu
(2 rows)

      Based on the output above, we see that the entree we have forgotten is tofu.
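The underscore wildcard works the same way but stands in for exactly one character. For instance, if you could only remember that one friend's name ends in "tta", a query like the following (an illustrative example against the same sample data) would track it down:

• SELECT name FROM dinners WHERE name LIKE '_tta';

This would return Etta.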

      There may be times when you're working with databases that have columns or tables with relatively long or difficult-to-read names. In these cases, you can make these names more readable by creating an alias with the AS keyword. Aliases created with AS are temporary, and only exist for the duration of the query for which they're created:

      • SELECT name AS n, birthdate AS b, dessert AS d FROM dinners;

      Output

    n    |     b      |     d
---------+------------+-----------
 Dolly   | 1946-01-19 | cake
 Etta    | 1938-01-25 | ice cream
 Irma    | 1941-02-18 | cake
 Barbara | 1948-12-25 | ice cream
 Gladys  | 1944-05-28 | ice cream
(5 rows)

      Here, we have told SQL to display the name column as n, the birthdate column as b, and the dessert column as d.

      The examples we've gone through up to this point include some of the more frequently-used keywords and clauses in SQL queries. These are useful for basic queries, but they aren't helpful if you're trying to perform a calculation or derive a scalar value (a single value, as opposed to a set of multiple different values) based on your data. This is where aggregate functions come into play.

      Aggregate Functions

      Oftentimes, when working with data, you don't necessarily want to see the data itself. Rather, you want information about the data. The SQL syntax includes a number of functions that allow you to interpret or run calculations on your data just by issuing a SELECT query. These are known as aggregate functions.

The COUNT function counts and returns the number of rows that match certain criteria. For example, if you'd like to know how many of your friends prefer tofu for their birthday entree, you could issue this query:

      • SELECT COUNT(entree) FROM dinners WHERE entree = 'tofu';

      Output

 count
-------
     2
(1 row)

      The AVG function returns the average (mean) value of a column. Using our example table, you could find the average best score amongst your friends with this query:

      • SELECT AVG(best) FROM tourneys;

      Output

  avg
-------
 252.8
(1 row)

SUM is used to find the total sum of a given column. For instance, if you'd like to see how many games you and your friends have won over the years, you could run this query:

      • SELECT SUM(wins) FROM tourneys;

      Output

 sum
-----
  35
(1 row)

      Note that the AVG and SUM functions will only work correctly when used with numeric data. If you try to use them on non-numerical data, it will result in either an error or just 0, depending on which RDBMS you're using:

      • SELECT SUM(entree) FROM dinners;

      Output

ERROR:  function sum(character varying) does not exist
LINE 1: select sum(entree) from dinners;
               ^
HINT:  No function matches the given name and argument types. You might need to add explicit type casts.

      MIN is used to find the smallest value within a specified column. You could use this query to see what the worst overall bowling record is so far (in terms of number of wins):

      • SELECT MIN(wins) FROM tourneys;

      Output

 min
-----
   2
(1 row)

      Similarly, MAX is used to find the largest numeric value in a given column. The following query will show the best overall bowling record:

      • SELECT MAX(wins) FROM tourneys;

      Output

 max
-----
  13
(1 row)

      Unlike SUM and AVG, the MIN and MAX functions can be used for both numeric and alphabetic data types. When run on a column containing string values, the MIN function will show the first value alphabetically:

      • SELECT MIN(name) FROM dinners;

      Output

   min
---------
 Barbara
(1 row)

      Likewise, when run on a column containing string values, the MAX function will show the last value alphabetically:

      • SELECT MAX(name) FROM dinners;

      Output

 max
------
 Irma
(1 row)

      Aggregate functions have many uses beyond what was described in this section. They're particularly useful when used with the GROUP BY clause, which is covered in the next section along with several other query clauses that affect how result-sets are sorted.

      Manipulating Query Outputs

      In addition to the FROM and WHERE clauses, there are several other clauses which are used to manipulate the results of a SELECT query. In this section, we will explain and provide examples for some of the more commonly-used query clauses.

      One of the most frequently-used query clauses, aside from FROM and WHERE, is the GROUP BY clause. It's typically used when you're performing an aggregate function on one column, but in relation to matching values in another.

      For example, let's say you wanted to know how many of your friends prefer each of the three entrees you make. You could find this info with the following query:

      • SELECT COUNT(name), entree FROM dinners GROUP BY entree;

      Output

 count | entree
-------+---------
     1 | chicken
     2 | steak
     2 | tofu
(3 rows)

      The ORDER BY clause is used to sort query results. By default, numeric values are sorted in ascending order, and text values are sorted in alphabetical order. To illustrate, the following query lists the name and birthdate columns, but sorts the results by birthdate:

      • SELECT name, birthdate FROM dinners ORDER BY birthdate;

      Output

  name   | birthdate
---------+------------
 Etta    | 1938-01-25
 Irma    | 1941-02-18
 Gladys  | 1944-05-28
 Dolly   | 1946-01-19
 Barbara | 1948-12-25
(5 rows)

      Notice that the default behavior of ORDER BY is to sort the result-set in ascending order. To reverse this and have the result-set sorted in descending order, close the query with DESC:

      • SELECT name, birthdate FROM dinners ORDER BY birthdate DESC;

      Output

  name   | birthdate
---------+------------
 Barbara | 1948-12-25
 Dolly   | 1946-01-19
 Gladys  | 1944-05-28
 Irma    | 1941-02-18
 Etta    | 1938-01-25
(5 rows)

      As mentioned previously, the WHERE clause is used to filter results based on specific conditions. However, if you use the WHERE clause with an aggregate function, it will return an error, as is the case with the following attempt to find which sides are the favorite of at least three of your friends:

      • SELECT COUNT(name), side FROM dinners WHERE COUNT(name) >= 3;

      Output

ERROR:  aggregate functions are not allowed in WHERE
LINE 1: SELECT COUNT(name), side FROM dinners WHERE COUNT(name) >= 3...

The HAVING clause was added to SQL to provide functionality similar to that of the WHERE clause while also being compatible with aggregate functions. It's helpful to think of the difference between these two clauses as being that WHERE applies to individual records, while HAVING applies to groups of records. To this end, any time you issue a HAVING clause, the GROUP BY clause must also be present.

      The following example is another attempt to find which side dishes are the favorite of at least three of your friends, although this one will return a result without error:

      • SELECT COUNT(name), side FROM dinners GROUP BY side HAVING COUNT(name) >= 3;

      Output

 count | side
-------+-------
     3 | fries
(1 row)
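These clauses can also be combined with the ones covered earlier. As one illustrative example (a query of our own, not one from the sections above), pairing GROUP BY with ORDER BY counts how many friends favor each dessert and sorts the result from most to least popular:

• SELECT COUNT(name), dessert FROM dinners GROUP BY dessert ORDER BY COUNT(name) DESC;

With the sample data entered so far, this returns ice cream (3) followed by cake (2).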

      Aggregate functions are useful for summarizing the results of a particular column in a given table. However, there are many cases where it's necessary to query the contents of more than one table. We'll go over a few ways you can do this in the next section.

      Querying Multiple Tables

      More often than not, a database contains multiple tables, each holding different sets of data. SQL provides a few different ways to run a single query on multiple tables.

The JOIN clause can be used to combine rows from two or more tables in a query result. It does this by finding a related column between the tables and sorting the results appropriately in the output.

      SELECT statements that include a JOIN clause generally follow this syntax:

      • SELECT table1.column1, table2.column2
      • FROM table1
      • JOIN table2 ON table1.related_column=table2.related_column;

      Note that because JOIN clauses compare the contents of more than one table, the previous example specifies which table to select each column from by preceding the name of the column with the name of the table and a period. You can specify which table a column should be selected from like this for any query, although it's not necessary when selecting from a single table, as we've done in the previous sections. Let's walk through an example using our sample data.

Imagine that you wanted to buy each of your friends a pair of bowling shoes as a birthday gift. Because the information about your friends' birthdates and shoe sizes is held in separate tables, you could query both tables separately and then compare the results from each. With a JOIN clause, though, you can find all the information you want with a single query:

      • SELECT tourneys.name, tourneys.size, dinners.birthdate
      • FROM tourneys
      • JOIN dinners ON tourneys.name=dinners.name;

      Output

  name   | size | birthdate
---------+------+------------
 Dolly   |  8.5 | 1946-01-19
 Etta    |    9 | 1938-01-25
 Irma    |    7 | 1941-02-18
 Barbara |  7.5 | 1948-12-25
 Gladys  |    8 | 1944-05-28
(5 rows)
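As an aside, the AS keyword you used earlier to alias columns also works on table names, which can make longer JOIN queries easier to read. The following rewrite of the same query with table aliases returns an identical result set; it's shown here only as an illustration and isn't part of the original walkthrough:

• SELECT t.name, t.size, d.birthdate
• FROM tourneys AS t
• JOIN dinners AS d ON t.name = d.name;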

      The JOIN clause used in this example, without any other arguments, is an inner JOIN clause. This means that it selects all the records that have matching values in both tables and prints them to the results set, while any records that aren't matched are excluded. To illustrate this idea, let's add a new row to each table that doesn't have a corresponding entry in the other:

      • INSERT INTO tourneys (name, wins, best, size)
      • VALUES ('Bettye', '0', '193', '9');
      • INSERT INTO dinners (name, birthdate, entree, side, dessert)
      • VALUES ('Lesley', '1946-05-02', 'steak', 'salad', 'ice cream');

      Then, re-run the previous SELECT statement with the JOIN clause:

      • SELECT tourneys.name, tourneys.size, dinners.birthdate
      • FROM tourneys
      • JOIN dinners ON tourneys.name=dinners.name;

      Output

  name   | size | birthdate
---------+------+------------
 Dolly   |  8.5 | 1946-01-19
 Etta    |    9 | 1938-01-25
 Irma    |    7 | 1941-02-18
 Barbara |  7.5 | 1948-12-25
 Gladys  |    8 | 1944-05-28
(5 rows)

      Notice that, because the tourneys table has no entry for Lesley and the dinners table has no entry for Bettye, those records are absent from this output.

      It is possible, though, to return all the records from one of the tables using an outer JOIN clause. Outer JOIN clauses are written as either LEFT JOIN, RIGHT JOIN, or FULL JOIN.

      A LEFT JOIN clause returns all the records from the “left” table and only the matching records from the right table. In the context of outer joins, the left table is the one referenced by the FROM clause, and the right table is any other table referenced after the JOIN statement.

      Run the previous query again, but this time use a LEFT JOIN clause:

      • SELECT tourneys.name, tourneys.size, dinners.birthdate
      • FROM tourneys
      • LEFT JOIN dinners ON tourneys.name=dinners.name;

      This command will return every record from the left table (in this case, tourneys) even if it doesn't have a corresponding record in the right table. Any time there isn't a matching record from the right table, it's returned as a blank value or NULL, depending on your RDBMS:

      Output

  name   | size | birthdate
---------+------+------------
 Dolly   |  8.5 | 1946-01-19
 Etta    |    9 | 1938-01-25
 Irma    |    7 | 1941-02-18
 Barbara |  7.5 | 1948-12-25
 Gladys  |    8 | 1944-05-28
 Bettye  |    9 |
(6 rows)

      Now run the query again, this time with a RIGHT JOIN clause:

      • SELECT tourneys.name, tourneys.size, dinners.birthdate
      • FROM tourneys
      • RIGHT JOIN dinners ON tourneys.name=dinners.name;

      This will return all the records from the right table (dinners). Because Lesley's birthdate is recorded in the right table, but there is no corresponding row for her in the left table, the name and size columns will return as blank values in that row:

      Output

  name   | size | birthdate
---------+------+------------
 Dolly   |  8.5 | 1946-01-19
 Etta    |    9 | 1938-01-25
 Irma    |    7 | 1941-02-18
 Barbara |  7.5 | 1948-12-25
 Gladys  |    8 | 1944-05-28
         |      | 1946-05-02
(6 rows)

      Note that left and right joins can be written as LEFT OUTER JOIN or RIGHT OUTER JOIN, although the OUTER part of the clause is implied. Likewise, specifying INNER JOIN will produce the same result as just writing JOIN.

      There is a fourth join clause called FULL JOIN available for some RDBMS distributions, including PostgreSQL. A FULL JOIN will return all the records from each table, including any null values:

      • SELECT tourneys.name, tourneys.size, dinners.birthdate
      • FROM tourneys
      • FULL JOIN dinners ON tourneys.name=dinners.name;

      Output

  name   | size | birthdate
---------+------+------------
 Dolly   |  8.5 | 1946-01-19
 Etta    |    9 | 1938-01-25
 Irma    |    7 | 1941-02-18
 Barbara |  7.5 | 1948-12-25
 Gladys  |    8 | 1944-05-28
 Bettye  |    9 |
         |      | 1946-05-02
(7 rows)

      Note: As of this writing, the FULL JOIN clause is not supported by either MySQL or MariaDB.

      As an alternative to using FULL JOIN to query all the records from multiple tables, you can use the UNION clause.

      The UNION operator works slightly differently than a JOIN clause: instead of printing results from multiple tables as unique columns using a single SELECT statement, UNION combines the results of two SELECT statements into a single column.

      To illustrate, run the following query:

      • SELECT name FROM tourneys UNION SELECT name FROM dinners;

      This query will remove any duplicate entries, which is the default behavior of the UNION operator:

      Output

  name
---------
 Irma
 Etta
 Bettye
 Gladys
 Barbara
 Lesley
 Dolly
(7 rows)

      To return all entries (including duplicates) use the UNION ALL operator:

      • SELECT name FROM tourneys UNION ALL SELECT name FROM dinners;

      Output

  name
---------
 Dolly
 Etta
 Irma
 Barbara
 Gladys
 Bettye
 Dolly
 Etta
 Irma
 Barbara
 Gladys
 Lesley
(12 rows)

      The names and number of the columns in the results table reflect the name and number of columns queried by the first SELECT statement. Note that when using UNION to query multiple columns from more than one table, each SELECT statement must query the same number of columns, the respective columns must have similar data types, and the columns in each SELECT statement must be in the same order. The following example shows what might result if you use a UNION clause on two SELECT statements that query a different number of columns:

      • SELECT name FROM dinners UNION SELECT name, wins FROM tourneys;

      Output

ERROR:  each UNION query must have the same number of columns
LINE 1: SELECT name FROM dinners UNION SELECT name, wins FROM tourne...
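For contrast, here is a query that does satisfy those requirements, since both SELECT statements return a single name column of the same type. This is an illustrative example of our own rather than one from the original data walkthrough:

• SELECT name FROM tourneys WHERE wins > 5 UNION SELECT name FROM dinners WHERE dessert = 'cake';

With the sample data as it stands at this point, this returns Dolly, Irma, and Gladys; the duplicate rows for Dolly and Irma are collapsed because UNION removes duplicates.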

      Another way to query multiple tables is through the use of subqueries. Subqueries (also known as inner or nested queries) are queries enclosed within another query. These are useful in cases where you're trying to filter the results of a query against the result of a separate aggregate function.

      To illustrate this idea, say you want to know which of your friends have won more matches than Barbara. Rather than querying how many matches Barbara has won then running another query to see who has won more games than that, you can calculate both with a single query:

      • SELECT name, wins FROM tourneys
      • WHERE wins > (
      • SELECT wins FROM tourneys WHERE name = 'Barbara'
      • );

      Output

  name  | wins
--------+------
 Dolly  |    7
 Etta   |    4
 Irma   |    9
 Gladys |   13
(4 rows)

      The subquery in this statement was run only once; it only needed to find the value from the wins column in the same row as Barbara in the name column, and the data returned by the subquery and outer query are independent of one another. There are cases, though, where the outer query must first read every row in a table and compare those values against the data returned by the subquery in order to return the desired data. In this case, the subquery is referred to as a correlated subquery.

      The following statement is an example of a correlated subquery. This query seeks to find which of your friends have won more games than is the average for those with the same shoe size:

      • SELECT name, size FROM tourneys AS t
      • WHERE wins > (
      • SELECT AVG(wins) FROM tourneys WHERE size = t.size
      • );

      In order for the query to complete, it must first collect the name and size columns from the outer query. Then, it compares each row from that result set against the results of the inner query, which determines the average number of wins for individuals with identical shoe sizes. Because you only have two friends that have the same shoe size, there can only be one row in the result-set:

      Output

 name | size
------+------
 Etta |    9
(1 row)

      As mentioned earlier, subqueries can be used to query results from multiple tables. To illustrate this with one final example, say you wanted to throw a surprise dinner for the group's all-time best bowler. You could find which of your friends has the best bowling record and return their favorite meal with the following query:

      • SELECT name, entree, side, dessert
      • FROM dinners
      • WHERE name = (SELECT name FROM tourneys
      • WHERE wins = (SELECT MAX(wins) FROM tourneys));

      Output

  name  | entree | side  |  dessert
--------+--------+-------+-----------
 Gladys | steak  | fries | ice cream
(1 row)

      Notice that this statement not only includes a subquery, but also contains a subquery within that subquery.

      Conclusion

      Issuing queries is one of the most commonly-performed tasks within the realm of database management. There are a number of database administration tools, such as phpMyAdmin or pgAdmin, that allow you to perform queries and visualize the results, but issuing SELECT statements from the command line is still a widely-practiced workflow that can also provide you with greater control.

If you're new to working with SQL, we encourage you to use our SQL Cheat Sheet as a reference and to review the official PostgreSQL documentation. Additionally, if you'd like to learn more about SQL and relational databases, the following tutorials may be of interest to you:




      How To Move a PostgreSQL Data Directory to a New Location on Ubuntu 18.04


      Introduction

      Databases grow over time, sometimes outgrowing the space on their original file system. When they’re located on the same partition as the rest of the operating system, this can also potentially lead to I/O contention.

      RAID, network block storage, and other devices can offer redundancy and improve scalability, along with other desirable features. Whether you’re adding more space, evaluating ways to optimize performance, or looking to take advantage of other storage features, this tutorial will guide you through relocating PostgreSQL’s data directory.

      Prerequisites

      To complete this guide, you will need:

      In this example, we’re moving the data to a block storage device mounted at /mnt/volume_nyc1_01. If you are using Block Storage on DigitalOcean, this guide can help you mount your volume before continuing with this tutorial.

      Regardless of what underlying storage you use, though, the following steps can help you move the data directory to a new location.

      Step 1 — Moving the PostgreSQL Data Directory

Before we get started with moving PostgreSQL’s data directory, let’s verify the current location by starting an interactive PostgreSQL session. In the following command, psql is the command to enter the interactive monitor and -u postgres tells sudo to execute psql as the system’s postgres user:

• sudo -u postgres psql

Once you have the PostgreSQL prompt opened up, use the following command to show the current data directory:

• SHOW data_directory;

      Output

       data_directory
------------------------------
 /var/lib/postgresql/10/main
(1 row)

This output confirms that PostgreSQL is configured to use the default data directory, /var/lib/postgresql/10/main, so that’s the directory we need to move. Once you've confirmed the directory on your system, type \q and press ENTER to close the PostgreSQL prompt.

      To ensure the integrity of the data, stop PostgreSQL before you actually make changes to the data directory:

      • sudo systemctl stop postgresql

      systemctl doesn't display the outcome of all service management commands. To verify that you’ve successfully stopped the service, use the following command:

      • sudo systemctl status postgresql

      The final line of the output should tell you that PostgreSQL has been stopped:

      Output

. . .
Jul 12 15:22:44 ubuntu-512mb-nyc1-01 systemd[1]: Stopped PostgreSQL RDBMS.

      Now that the PostgreSQL server is shut down, we’ll copy the existing database directory to the new location with rsync. Using the -a flag preserves the permissions and other directory properties, while -v provides verbose output so you can follow the progress. We’re going to start the rsync from the postgresql directory in order to mimic the original directory structure in the new location. By creating that postgresql directory within the mount-point directory and retaining ownership by the PostgreSQL user, we can avoid permissions problems for future upgrades.

      Note: Be sure there is no trailing slash on the directory, which may be added if you use tab completion. If you do include a trailing slash, rsync will dump the contents of the directory into the mount point instead of copying over the directory itself.

The version directory, 10, isn’t strictly necessary since we’ll define the location explicitly in the postgresql.conf file in the next step, but following the project convention certainly won’t hurt, especially if there’s a need in the future to run multiple versions of PostgreSQL:

      • sudo rsync -av /var/lib/postgresql /mnt/volume_nyc1_01

      Once the copy is complete, we'll rename the current folder with a .bak extension and keep it until we’ve confirmed that the move was successful. This will help to avoid confusion that could arise from having similarly-named directories in both the new and the old location:

      • sudo mv /var/lib/postgresql/10/main /var/lib/postgresql/10/main.bak

      Now we’re ready to configure PostgreSQL to access the data directory in its new location.

      Step 2 — Pointing to the New Data Location

      By default, the data_directory is set to /var/lib/postgresql/10/main in the /etc/postgresql/10/main/postgresql.conf file. Edit this file to reflect the new data directory:

      • sudo nano /etc/postgresql/10/main/postgresql.conf

      Find the line that begins with data_directory and change the path which follows to reflect the new location. In the context of this tutorial, the updated directive will look like this:

      /etc/postgresql/10/main/postgresql.conf

      . . .
      data_directory = '/mnt/volume_nyc1_01/postgresql/10/main'
      . . .
      

      Save and close the file by pressing CTRL + X, Y, then ENTER. This is all you need to do to configure PostgreSQL to use the new data directory location. All that’s left at this point is to start the PostgreSQL service again and check that it is indeed pointing to the correct data directory.

      Step 3 — Restarting PostgreSQL

After changing the data_directory directive in the postgresql.conf file, go ahead and start the PostgreSQL server using systemctl:

      • sudo systemctl start postgresql

      To confirm that the PostgreSQL server started successfully, check its status by again using systemctl:

      • sudo systemctl status postgresql

      If the service started correctly, you will see the following line at the end of this command’s output:

      Output

. . .
Jul 12 15:45:01 ubuntu-512mb-nyc1-01 systemd[1]: Started PostgreSQL RDBMS.
. . .

Lastly, to make sure that the new data directory is indeed in use, open the PostgreSQL command prompt:

• sudo -u postgres psql

Check the value for the data directory again:

• SHOW data_directory;

      Output

             data_directory
-----------------------------------------
 /mnt/volume_nyc1_01/postgresql/10/main
(1 row)

This confirms that PostgreSQL is using the new data directory location. Following this, take a moment to ensure that you’re able to access your database as well as interact with the data within.
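One quick way to do this is to run a simple query from the command line. Listing the databases on the server, for example, is enough to confirm that PostgreSQL can read from the relocated directory; any query against your own data works just as well (the query below is only an illustrative spot check, not a required step):

• sudo -u postgres psql -c "SELECT datname FROM pg_database;"

Once you’ve verified the integrity of any existing data, you can remove the backup data directory: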

      • sudo rm -Rf /var/lib/postgresql/10/main.bak

      With that, you have successfully moved your PostgreSQL data directory to a new location.

Conclusion

      If you’ve followed along, your database should be running with its data directory in the new location and you’ve completed an important step toward being able to scale your storage. You might also want to take a look at 5 Common Server Setups For Your Web Application for ideas on how to create a server infrastructure to help you scale and optimize web applications.


