      December 2020

      How To Use Comparison and IS NULL Operators in SQL


      Introduction

      In certain Structured Query Language (SQL) statements, WHERE clauses can be used to limit what rows the given operation will affect. They do this by defining specific criteria that each row must meet for it to be impacted, known as a search condition. Search conditions are made up of one or more predicates, which are special expressions that evaluate to either “true,” “false,” or “unknown,” and operations only affect those rows for which every predicate in the WHERE clause evaluates to “true.”

      SQL allows users to write search conditions that include a variety of different types of predicates, each of which use a specific operator to evaluate rows. This guide will outline two types of predicates and the operators they use: comparison operators and the IS NULL operator.

      Although this guide will exclusively use SELECT statements in its examples, the concepts explained here can be used in a number of SQL operations. In particular, WHERE clauses and their search conditions are critical components of UPDATE and DELETE operations.

      Prerequisites

      In order to follow this guide, you will need a computer running some type of relational database management system (RDBMS) that uses SQL. The instructions and examples in this guide were validated using the following environment:

      • A server running Ubuntu 20.04, with a non-root user with administrative privileges and a firewall configured with UFW, as described in our initial server setup guide for Ubuntu 20.04.
      • MySQL installed and secured on the server, as outlined in How To Install MySQL on Ubuntu 20.04. This guide was verified with a newly-created user, as described in Step 3.
      • You’ll also need a database with some tables loaded with sample data which you can use to practice using the comparison and IS NULL operators. We encourage you to go through the following Connecting to MySQL and Setting up a Sample Database section for details on how to connect to a MySQL server and create the testing database used in examples throughout this guide.

      Note: Many RDBMSs use their own unique implementations of SQL. Although the commands outlined in this tutorial will work on most RDBMSs, including PostgreSQL and SQLite, the exact syntax or output may differ if you test them on a system other than MySQL.

      Connecting to MySQL and Setting up a Sample Database

      If your SQL database system runs on a remote server, SSH into your server from your local machine:
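
      • ssh sammy@your_server_ip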

      Then open up the database server prompt. If you’re using MySQL, do so by running the following command, making sure to replace sammy with the name of your MySQL user account:
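
      • mysql -u sammy -p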

      From the prompt, create a database named comparison_null_db:

      • CREATE DATABASE comparison_null_db;

      If the database was created successfully, you’ll receive output like this:

      Output

      Query OK, 1 row affected (0.01 sec)

      To select the comparison_null_db database, run the following USE statement:
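
      • USE comparison_null_db;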

      Output

      Database changed

      After selecting comparison_null_db, create a table within it.

      To follow along with the examples used in this guide, imagine that you and a group of your friends all decide to become more physically active and take up running as exercise. To this end, your friends all set a personal goal for how many miles they want to run in the next month. You decide to track your friends’ mileage goals, as well as the number of miles they actually ran, in an SQL table that has the following three columns:

      • name: each of your friends’ names, expressed using the varchar data type with a maximum of 15 characters
      • goal: each friend’s goal for how many miles they hoped to run over the past month, expressed as an integer using the int data type
      • result: the number of miles each friend ultimately ran over the course of the month, again expressed as an int

      Run the following CREATE TABLE statement to create a table named running_goals that has these three columns:

      • CREATE TABLE running_goals (
      • name varchar(15),
      • goal int,
      • result int
      • );

      Output

      Query OK, 0 rows affected (0.012 sec)

      Then load the running_goals table with some sample data. Run the following INSERT INTO operation to add seven rows of data representing seven of your friends, their running goals, and their results:

      • INSERT INTO running_goals
      • VALUES
      • ('Michelle', 55, 48),
      • ('Jerry', 25, NULL),
      • ('Milton', 45, 52),
      • ('Bridget', 40, NULL),
      • ('Wanda', 30, 38),
      • ('Stewart', 35, NULL),
      • ('Leslie', 40, 44);

      Output

      Query OK, 7 rows affected (0.004 sec)
      Records: 7  Duplicates: 0  Warnings: 0

      Note that three of these rows’ result values are NULL. For the purposes of this example, assume that these friends just haven’t reported the number of miles they ran over the past month so their result values were entered as NULL.

      With that, you’re ready to follow the rest of the guide and begin learning how to use the comparison and IS NULL operators in SQL.

      Understanding WHERE Clause Predicates

      In any SQL operation that reads data from an existing table, you can follow the FROM clause with a WHERE clause to limit what data the operation will affect. WHERE clauses do this by defining a search condition; any row that doesn’t meet the search condition is excluded from the operation, but any row that does is included.

      A search condition is made up of one or more predicates, or expressions that can evaluate one or more value expressions and return a result of either “true,” “false,” or “unknown.” In SQL, a value expression — also sometimes referred to as a scalar expression — is any expression that will return a single value. A value expression can be a literal value, like a string or numeric value, a mathematical expression, or a column name. Note that it’s almost always the case that at least one value expression in a WHERE clause predicate is the name of a column in the table referenced in the operation’s FROM clause.

      When running SQL queries that contain a WHERE clause, the DBMS will apply the search condition to every row in the logical table defined by the FROM clause. It will then return only the rows for which every predicate in the search condition evaluates to “true.”

      The SQL standard defines 18 types of predicates, although not every RDBMS includes each of them in its implementation of SQL. Here are five of the most commonly used predicate types, as well as a brief explanation of each one and the operators they use:

      Comparison: Comparison predicates compare one value expression with another; in queries, it’s almost always the case that at least one of these value expressions is the name of a column. The six comparison operators are:

      • =: tests whether the two values are equivalent
      • <>: tests whether two values are not equivalent
      • <: tests whether the first value is less than the second
      • >: tests whether the first value is greater than the second
      • <=: tests whether the first value is less than or equal to the second
      • >=: tests whether the first value is greater than or equal to the second

      Null: Predicates that use the IS NULL operator test whether values in a given column are Null
      Range: Range predicates use the BETWEEN operator to test whether one value expression falls between two others
      Membership: This type of predicate uses the IN operator to test whether a value is a member of a given set
      Pattern Match: Pattern matching predicates use the LIKE operator to test whether a value matches a string pattern

      As mentioned in the introduction, this guide focuses on how to use SQL’s comparison and IS NULL operators to filter data. If you’d like to learn how to use the BETWEEN or IN operators with range and membership predicates, respectively, we encourage you to check out this guide on How To Use the BETWEEN and IN Operators in SQL. Alternatively, if you’d like to learn how to use the LIKE operator to filter data based on a string pattern containing wildcard characters, follow our guide on How To Use Wildcards in SQL. Lastly, if you’d like to learn more about WHERE clauses generally, you may be interested in our tutorial on How To Use WHERE Clauses in SQL.

      Comparison Predicates

      WHERE clause comparison predicates use one of six comparison operators to compare one value expression with another. They typically follow a syntax like this:

      • SELECT column_list
      • FROM table_name
      • WHERE column_name OPERATOR value_expression;

      Following the WHERE keyword is a value expression which, in most SQL operations, is the name of a column. Providing a column name as a value expression in a search condition tells the RDBMS to use each row’s value from that column as the value expression for that row’s iteration of the search condition. Because the database system applies search conditions to each row in sequence, the comparison operator will then include or filter out a row based on whether the search condition is true for its value from the specified column.

      To illustrate, run the following query which will return values from the running_goals table’s name and goal columns. Note how the WHERE clause uses a comparison predicate that will cause the query to only return rows whose goal value is equal to 40:

      • SELECT name, goal
      • FROM running_goals
      • WHERE goal = 40;

      Only two of your friends’ goals were to run exactly 40 miles over the past month, so the query returns just those two rows:

      Output

      +---------+------+
      | name    | goal |
      +---------+------+
      | Bridget |   40 |
      | Leslie  |   40 |
      +---------+------+
      2 rows in set (0.00 sec)

      To illustrate how the other comparison operators work, run the following queries which are identical to the previous example except that each uses a different comparison operator.

      The <> operator tests whether two values are not equivalent, so this query returns every row whose goal value is not equal to 40:

      • SELECT name, goal
      • FROM running_goals
      • WHERE goal <> 40;

      Output

      +----------+------+
      | name     | goal |
      +----------+------+
      | Michelle |   55 |
      | Jerry    |   25 |
      | Milton   |   45 |
      | Wanda    |   30 |
      | Stewart  |   35 |
      +----------+------+
      5 rows in set (0.00 sec)

      The < operator tests whether the first value expression is less than the second:

      • SELECT name, goal
      • FROM running_goals
      • WHERE goal < 40;

      Output

      +---------+------+
      | name    | goal |
      +---------+------+
      | Jerry   |   25 |
      | Wanda   |   30 |
      | Stewart |   35 |
      +---------+------+
      3 rows in set (0.00 sec)

      The > operator tests whether the first value expression is greater than the second:

      • SELECT name, goal
      • FROM running_goals
      • WHERE goal > 40;

      Output

      +----------+------+
      | name     | goal |
      +----------+------+
      | Michelle |   55 |
      | Milton   |   45 |
      +----------+------+
      2 rows in set (0.00 sec)

      The <= operator tests whether the first value is less than or equal to the second:

      • SELECT name, goal
      • FROM running_goals
      • WHERE goal <= 40;

      Output

      +---------+------+
      | name    | goal |
      +---------+------+
      | Jerry   |   25 |
      | Bridget |   40 |
      | Wanda   |   30 |
      | Stewart |   35 |
      | Leslie  |   40 |
      +---------+------+
      5 rows in set (0.00 sec)

      The >= operator tests whether the first value is greater than or equal to the second:

      • SELECT name, goal
      • FROM running_goals
      • WHERE goal >= 40;

      Output

      +----------+------+
      | name     | goal |
      +----------+------+
      | Michelle |   55 |
      | Milton   |   45 |
      | Bridget  |   40 |
      | Leslie   |   40 |
      +----------+------+
      4 rows in set (0.00 sec)

      The equivalence (=) and inequivalence (<>) operators work with string values as you might expect. The following query returns the name value of every row whose name is equal to 'Leslie':

      • SELECT name
      • FROM running_goals
      • WHERE name="Leslie";

      Because there’s only one friend in the table named “Leslie,” the query just returns that row:

      Output

      +--------+
      | name   |
      +--------+
      | Leslie |
      +--------+
      1 row in set (0.00 sec)

      When comparing string values, the <, >, <=, and >= operators all evaluate how the strings relate alphabetically. Put differently, if you write a predicate that tests whether one string is “less than” another, you’re testing whether the first string comes before the second alphabetically. Likewise, if your predicate tests whether one string is “greater than” another, you’re testing whether the first string comes after the second one alphabetically.

      To illustrate, run the following query. This will return the name value of every row whose name value is “less than” the letter 'M'. In other words, the search condition will evaluate to “true” for every row whose name value comes before M alphabetically:

      • SELECT name
      • FROM running_goals
      • WHERE name < 'M';

      Output

      +---------+
      | name    |
      +---------+
      | Jerry   |
      | Bridget |
      | Leslie  |
      +---------+
      3 rows in set (0.00 sec)

      Notice that this result set doesn’t include Michelle or Milton. This is because, alphabetically, the single letter “M” comes before any string that starts with the letter “M” and has more than one letter, so these two friends are excluded from this result set.
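
      If you instead wanted to include every friend whose name starts with a letter up to and including “M”, one approach would be to compare the name values against the next letter of the alphabet instead:

      • SELECT name
      • FROM running_goals
      • WHERE name < 'N';

      Because every string beginning with “M” comes before the single letter “N” alphabetically, a query like this would return Michelle and Milton along with the three friends from the previous result set.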

      Null Predicates

      In SQL, NULL is a reserved keyword used to represent missing or unknown values. Null is a state, rather than an actual value; it does not represent zero or an empty string.
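
      Because Null represents an unknown state, you cannot test for it with the = comparison operator: any comparison with NULL evaluates to “unknown,” never “true.” For example, a query like the following would return an empty result set, even though three rows hold NULL in their result column:

      • SELECT name, result
      • FROM running_goals
      • WHERE result = NULL;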

      You can use the IS NULL operator to test whether a given value expression is Null:

      • . . .
      • WHERE column_name IS NULL
      • . . .

      With this type of predicate, the database system will look at every row’s value from the specified column and evaluate whether or not each one is Null. If values in the column are indeed Null, the search condition will evaluate to “true” for those rows and they will be included in the result set.

      To illustrate, run the following query which returns the name and result columns:

      • SELECT name, result
      • FROM running_goals
      • WHERE result IS NULL;

      The search condition in this query’s WHERE clause tests whether each row’s result value is Null. If so, the predicate evaluates to “true” and the row is included in the result set:

      Output

      +---------+--------+
      | name    | result |
      +---------+--------+
      | Jerry   |   NULL |
      | Bridget |   NULL |
      | Stewart |   NULL |
      +---------+--------+
      3 rows in set (0.00 sec)

      Because three of your friends haven’t yet reported the number of miles they ultimately ran over the past month, those values were recorded as NULL when you loaded the table with data. Consequently, the search condition in the query evaluates to “true” for these three rows, so they’re the only ones included in the result set.
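
      SQL also provides the IS NOT NULL operator, which reverses this test. For example, a query like the following would return the opposite result set, listing only the four friends who have reported their mileage:

      • SELECT name, result
      • FROM running_goals
      • WHERE result IS NOT NULL;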

      Conclusion

      By following this guide, you learned how to use SQL’s comparison and IS NULL operators in WHERE clauses to limit the rows that an operation will affect. While the commands shown here should work on most relational databases, be aware that every SQL database uses its own unique implementation of the SQL standard. You should consult your DBMS’s official documentation for a more complete description of each command and their full sets of options.

      If you’d like to learn more about working with SQL, we encourage you to check out the other tutorials in this series on How To Use SQL.




      How To Use the BETWEEN and IN Operators in SQL


      Introduction

      In certain Structured Query Language (SQL) statements, WHERE clauses can be used to limit what rows the given operation will affect. They do this by defining specific criteria that each row must meet for it to be impacted, known as a search condition. Search conditions are made up of one or more predicates, or special expressions that evaluate to either “true,” “false,” or “unknown,” and operations only affect those rows for which every predicate in the WHERE clause evaluates to “true.”

      SQL allows users to retrieve granular result sets by providing a variety of different types of predicates, each of which use a specific operator to evaluate rows. This guide will outline two types of predicates: range predicates which use the BETWEEN operator, and set membership predicates which use the IN operator.

      Although this guide will exclusively use SELECT statements in its examples, the concepts explained here can be used in a number of SQL operations. In particular, WHERE clauses and their search conditions are critical components of UPDATE and DELETE operations.

      Prerequisites

      To follow this guide, you will need a computer running some type of relational database management system (RDBMS) that uses SQL. The instructions and examples in this guide were validated using the following environment:

      • A server running Ubuntu 20.04, with a non-root user with administrative privileges and a firewall configured with UFW, as described in our initial server setup guide for Ubuntu 20.04.
      • MySQL installed and secured on the server, as outlined in How To Install MySQL on Ubuntu 20.04. This guide was verified with a newly-created user, as described in Step 3.
      • You’ll also need a database with some tables loaded with sample data which you can use to practice using different WHERE clause predicates. We encourage you to go through the following Connecting to MySQL and Setting up a Sample Database section for details on how to connect to a MySQL server and create the testing database used in examples throughout this guide.

      Note: Many RDBMSs use their own unique implementations of SQL. Although the commands outlined in this tutorial will work on most RDBMSs, including PostgreSQL and SQLite, the exact syntax or output may differ if you test them on a system other than MySQL.

      Connecting to MySQL and Setting up a Sample Database

      If your SQL database system runs on a remote server, SSH into your server from your local machine:
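
      • ssh sammy@your_server_ip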

      Then open up the database server prompt. If you’re using MySQL, do so by running the following command, making sure to replace sammy with the name of your MySQL user account:
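
      • mysql -u sammy -p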

      From the prompt, create a database named between_in_db:

      • CREATE DATABASE between_in_db;

      If the database was created successfully, you’ll receive output like this:

      Output

      Query OK, 1 row affected (0.01 sec)

      To select the between_in_db database, run the following USE statement:
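
      • USE between_in_db;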

      Output

      Database changed

      After selecting between_in_db, create a table within it.

      To follow along with the examples used in this guide, imagine that you manage a company’s sales team. This company only sells three products: widgets, doodads, and gizmos. You begin tracking the number of units of each product each member of your team has sold in an SQL database. You decide that this database will have one table with four columns:

      • name: the names of each member of your sales team, expressed using the varchar data type with a maximum of 20 characters
      • widgets: the total number of widgets each salesperson has sold, expressed with the int data type
      • doodads: the number of doodads each salesperson has sold, also expressed as an int
      • gizmos: the number of gizmos each salesperson has sold, again expressed as an int

      Run the following CREATE TABLE statement to create a table named sales that has these four columns:

      • CREATE TABLE sales (
      • name varchar(20),
      • widgets int,
      • doodads int,
      • gizmos int
      • );

      Output

      Query OK, 0 rows affected (0.01 sec)

      Then load the sales table with some sample data. Run the following INSERT INTO operation to add seven rows of data representing the team’s salespeople and the number of each product that they’ve sold:

      • INSERT INTO sales
      • VALUES
      • ('Tyler', 12, 22, 18),
      • ('Blair', 19, 8, 13),
      • ('Lynn', 7, 29, 3),
      • ('Boris', 16, 16, 15),
      • ('Lisa', 17, 2, 31),
      • ('Maya', 5, 9, 7),
      • ('Henry', 14, 2, 0);

      With that, you’re ready to follow the rest of the guide and begin learning how to use the BETWEEN and IN operators to filter data.

      Understanding WHERE Clause Predicates

      In any SQL operation that reads data from an existing table, you can follow the FROM clause with a WHERE clause to limit what data the operation will affect. WHERE clauses do this by defining a search condition; any row that doesn’t meet the search condition is excluded from the operation, but any row that does is included.

      A search condition is made up of one or more predicates, or expressions that can evaluate one or more value expressions and return a result of either “true,” “false,” or “unknown.” In SQL, a value expression — also sometimes referred to as a scalar expression — is any expression that will return a single value. A value expression can be a literal value, like a string or numeric value, a mathematical expression, or a column name. Note that it’s almost always the case that at least one value expression in a WHERE clause predicate is the name of a column in the table referenced in the operation’s FROM clause.

      When running SQL queries that contain a WHERE clause, the DBMS will apply the search condition to every row in the logical table defined by the FROM clause. It will then return only the rows for which every predicate in the search condition evaluates to “true.”

      The SQL standard defines 18 types of predicates, although not every RDBMS includes each of them in its implementation of SQL. Here are five of the most commonly used predicate types, as well as a brief explanation of each one and the operators they use:

      Comparison: Comparison predicates compare one value expression with another; in queries, it’s almost always the case that at least one of these value expressions is the name of a column. The six comparison operators are:

      • =: tests whether the two values are equivalent
      • <>: tests whether two values are not equivalent
      • <: tests whether the first value is less than the second
      • >: tests whether the first value is greater than the second
      • <=: tests whether the first value is less than or equal to the second
      • >=: tests whether the first value is greater than or equal to the second

      Null: Predicates that use the IS NULL operator test whether values in a given column are Null
      Range: Range predicates use the BETWEEN operator to test whether one value expression falls between two others
      Membership: This type of predicate uses the IN operator to test whether a value is a member of a given set
      Pattern Match: Pattern matching predicates use the LIKE operator to test whether a value matches a string pattern

      As mentioned in the introduction, this guide focuses on outlining how to use SQL’s BETWEEN and IN operators to filter data. If you’d like to learn how to use the comparison or IS NULL operators, we encourage you to check out this guide on How To Use Comparison and IS NULL Operators in SQL. Alternatively, if you’d like to learn how to use the LIKE operator to filter data based on a string pattern containing wildcard characters, follow our guide on How To Use Wildcards in SQL. Lastly, if you’d like to learn more about WHERE clauses generally, you may be interested in our tutorial on How To Use WHERE Clauses in SQL.

      Range Predicates

      Range predicates use the BETWEEN operator to test whether one value expression falls between two other value expressions. A WHERE clause that includes a range predicate in its search condition will follow this general syntax:

      • SELECT column_list
      • FROM table_name
      • WHERE column_name BETWEEN value_expression1 AND value_expression2;

      Following the WHERE keyword is a value expression which, in most SQL operations, is the name of a column. Because the database system applies search conditions to each row in sequence, providing a column name as a value expression in a search condition tells the RDBMS to use each row’s value from that column as the value expression for that row’s iteration of the search condition.

      After the column name comes the BETWEEN operator and two more value expressions separated by AND. The search condition will resolve to “true” for any rows whose value from the specified column is greater than or equal to the first of the two values separated by AND, but less than or equal to the second.

      To illustrate how range predicates work, run the following query. This will return the name and widgets columns of any rows whose widgets value is between 14 and 19, inclusive:

      • SELECT name, widgets
      • FROM sales
      • WHERE widgets BETWEEN 14 AND 19;

      Output

      +-------+---------+
      | name  | widgets |
      +-------+---------+
      | Blair |      19 |
      | Boris |      16 |
      | Lisa  |      17 |
      | Henry |      14 |
      +-------+---------+
      4 rows in set (0.00 sec)
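
      Note that a range predicate behaves like a pair of inclusive comparison predicates joined with AND. For instance, the previous query could also be written as follows and would return the same four rows:

      • SELECT name, widgets
      • FROM sales
      • WHERE widgets >= 14 AND widgets <= 19;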

      Keep in mind that the range you define after the BETWEEN operator can consist of any pair of value expressions, including column names.

      The following query returns every column from the sales table. Rather than listing out every column to return, it instead follows the SELECT keyword with an asterisk (*); this is SQL shorthand for “every column.” This query’s WHERE clause limits it to returning only rows whose gizmos value is greater than or equal to their doodads value and less than or equal to their widgets value:

      • SELECT *
      • FROM sales
      • WHERE gizmos BETWEEN doodads AND widgets;

      Only one member of the sales team has a gizmos value that falls between their doodads and widgets values, so only that row appears in the result set:

      Output

      +-------+---------+---------+--------+
      | name  | widgets | doodads | gizmos |
      +-------+---------+---------+--------+
      | Blair |      19 |       8 |     13 |
      +-------+---------+---------+--------+
      1 row in set (0.00 sec)

      Be aware of the order in which you list the value expressions that define the range: the first value after the BETWEEN operator is always the lower end of the range and the second is always the upper end. The following query is identical to the previous one, except that it flips the order of the columns defining each end of the range:

      • SELECT *
      • FROM sales
      • WHERE gizmos BETWEEN widgets AND doodads;

      This time, the query returns the two rows where the gizmos value is greater than or equal to the row’s widgets value but less than or equal to its doodads value. As this output indicates, changing the order like this will return a completely different result set:

      Output

      +-------+---------+---------+--------+
      | name  | widgets | doodads | gizmos |
      +-------+---------+---------+--------+
      | Tyler |      12 |      22 |     18 |
      | Maya  |       5 |       9 |      7 |
      +-------+---------+---------+--------+
      2 rows in set (0.00 sec)

      Like the <, >, <=, and >= comparison operators, the BETWEEN operator evaluates string values alphabetically: when used on a column holding strings, it determines whether those values fall between the two given strings in alphabetical order.

      To illustrate, run the following query which returns the name values from any row in the sales table whose name value is between the letters A and M, alphabetically.

      This example uses two string literals as the value expressions that make up either end of the range. Note that these literal values must be wrapped in single or double quotes; otherwise, the DBMS will look for columns named A and M and the query will fail:

      • SELECT name
      • FROM sales
      • WHERE name BETWEEN 'A' AND 'M';

      Output

      +-------+
      | name  |
      +-------+
      | Blair |
      | Lynn  |
      | Boris |
      | Lisa  |
      | Henry |
      +-------+
      5 rows in set (0.00 sec)

      Notice that this result set doesn’t include Maya even though the range provided in the search condition is from A to M. This is because, alphabetically, the letter “M” comes before any string that starts with the letter “M” and has more than one letter, so Maya is excluded from this result set along with any other salespeople whose names do not lie within the given range.

      Membership Predicates

      Membership predicates allow you to filter query results based on whether a value is a member of a specified set of data. In WHERE clauses, they generally follow this syntax:

      • . . .
      • WHERE column_name IN (set_of_data)
      • . . .

      After the WHERE keyword comes a value expression; again, this first value expression is usually the name of a column. Following that is the IN operator, itself followed by a set of data. You can explicitly define this set by listing any number of valid value expressions separated by commas, including literals or column names, or mathematical expressions involving either of these.

      To illustrate, run the following query. This will return the name and doodads columns for every row whose doodads value is a member of the set defined after the IN operator:

      • SELECT name, doodads
      • FROM sales
      • WHERE doodads IN (1, 2, 11, 12, 21, 22);

      Only three members of the sales team have doodads values equal to any of the values in this set, so only those rows get returned:

      Output

      +-------+---------+
      | name  | doodads |
      +-------+---------+
      | Tyler |      22 |
      | Lisa  |       2 |
      | Henry |       2 |
      +-------+---------+
      3 rows in set (0.00 sec)

      Instead of writing out each member of a set yourself, you can derive a set by following the IN operator with a subquery. A subquery — also known as a nested or inner query — is a SELECT statement embedded within one of the clauses of another SELECT statement. A subquery can retrieve information from any table in the same database as the table defined in the FROM clause of the “outer” operation.

      Note: When writing a subquery to define a set as part of a membership predicate, make sure that the subquery returns only a single column. Database management systems generally disallow subqueries that return multiple columns in a membership predicate, as it wouldn’t be clear to the database system which column it should evaluate as the set.

      As an example of using a subquery to define a set in a membership predicate, run the following statement to create a table named example_set_table that only has one column. This column will be named prime_numbers and will hold values of the int data type:

      • CREATE TABLE example_set_table (
      • prime_numbers int
      • );

      Then load this table with some sample data. In keeping with the name of the table’s sole column, the following INSERT statement will load ten rows of data into the table, with each holding one of the first ten prime numbers:

      • INSERT INTO example_set_table
      • VALUES
      • (2),
      • (3),
      • (5),
      • (7),
      • (11),
      • (13),
      • (17),
      • (19),
      • (23),
      • (29);

      Then run the following query. This returns values from the name and widgets columns from the sales table, and its WHERE clause tests whether each value in the widgets column is in the set derived by the subquery SELECT prime_numbers FROM example_set_table:

      • SELECT name, widgets
      • FROM sales
      • WHERE widgets IN (SELECT prime_numbers FROM example_set_table);

      Output

      +-------+---------+
      | name  | widgets |
      +-------+---------+
      | Blair |      19 |
      | Lynn  |       7 |
      | Lisa  |      17 |
      | Maya  |       5 |
      +-------+---------+
      4 rows in set (0.00 sec)

      Because only four salespeople have sold a number of widgets equal to any of the prime numbers stored in the example_set_table, this query only returns those four rows.
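
      You can also negate a membership predicate by placing NOT before the IN operator. For instance, a query like the following would return the remaining three rows (Tyler, Boris, and Henry), since their widgets values don’t appear anywhere in example_set_table:

      • SELECT name, widgets
      • FROM sales
      • WHERE widgets NOT IN (SELECT prime_numbers FROM example_set_table);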

      Conclusion

      By following this guide, you learned how to use SQL’s BETWEEN operator to test whether values in a column fall within a given range. You also learned how to use the IN operator to test whether values in a column are members of a set.

      While the commands shown here should work on most relational databases, be aware that every SQL database uses its own unique implementation of the language. You should consult your DBMS’s official documentation for a more complete description of each command and their full sets of options.

      If you’d like to learn more about working with SQL, we encourage you to check out the other tutorials in this series on How To Use SQL.




      How To Migrate a Docker Compose Workflow for Rails Development to Kubernetes


      Introduction

      When building modern, stateless applications, containerizing your application’s components is the first step in deploying and scaling on distributed platforms. If you have used Docker Compose in development, you will have modernized and containerized your application by:

      • Extracting necessary configuration information from your code.
      • Offloading your application’s state.
      • Packaging your application for repeated use.

      You will also have written service definitions that specify how your container images should run.

      To run your services on a distributed platform like Kubernetes, you will need to translate your Compose service definitions to Kubernetes objects. This will allow you to scale your application with resiliency. One tool that can speed up the translation process to Kubernetes is kompose, a conversion tool that helps developers move Compose workflows to container orchestrators like Kubernetes or OpenShift.

      In this tutorial, you will translate Compose services to Kubernetes objects using kompose. You will use the object definitions that kompose provides as a starting point and make adjustments to ensure that your setup will use Secrets, Services, and PersistentVolumeClaims in the way that Kubernetes expects. By the end of the tutorial, you will have a single-instance Rails application with a PostgreSQL database running on a Kubernetes cluster. This setup will mirror the functionality of the code described in Containerizing a Ruby on Rails Application for Development with Docker Compose and will be a good starting point to build out a production-ready solution that will scale with your needs.

      Prerequisites

      Step 1 — Installing kompose

      To begin using kompose, navigate to the project’s GitHub Releases page, and copy the link to the current release (version 1.22.0 as of this writing). Paste this link into the following curl command to download the latest version of kompose:

      • curl -L https://github.com/kubernetes/kompose/releases/download/v1.22.0/kompose-linux-amd64 -o kompose

      For details about installing on non-Linux systems, please refer to the installation instructions.

      Make the binary executable:
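
      • chmod +x kompose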

      Move it to your PATH:

      • sudo mv ./kompose /usr/local/bin/kompose

      To verify that it has been installed properly, you can do a version check:
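
      • kompose version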

      If the installation was successful, you will see output like the following:

      Output

      1.22.0 (955b78124)

      With kompose installed and ready to use, you can now clone the Rails project code that you will be translating to Kubernetes.

      Step 2 — Cloning and Packaging the Application

      To use our application with Kubernetes, we will need to clone the project code and package the application so that the kubelet service can pull the image.

      Our first step will be to clone the rails-sidekiq repository from the DigitalOcean Community GitHub account. This repository includes the code from the setup described in Containerizing a Ruby on Rails Application for Development with Docker Compose, which uses a demo Rails application to demonstrate how to set up a development environment using Docker Compose. You can find more information about the application itself in the series Rails on Containers.

      Clone the repository into a directory called rails_project:

      • git clone https://github.com/do-community/rails-sidekiq.git rails_project

      Navigate to the rails_project directory:
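
      • cd rails_project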

      Now check out the code for this tutorial from the compose-workflow branch:

      • git checkout compose-workflow

      Output

      Branch 'compose-workflow' set up to track remote branch 'compose-workflow' from 'origin'.
      Switched to a new branch 'compose-workflow'

      The rails_project directory contains files and directories for a shark information application that works with user input. It has been modernized to work with containers: sensitive and specific configuration information has been removed from the application code and refactored to be injected at runtime, and the application’s state has been offloaded to a PostgreSQL database.

      For more information about designing modern, stateless applications, please see Architecting Applications for Kubernetes and Modernizing Applications for Kubernetes.

      The project directory includes a Dockerfile with instructions for building the application image. Let’s build the image now so that you can push it to your Docker Hub account and use it in your Kubernetes setup.

      Using the docker build command, build the image with the -t flag, which allows you to tag it with a memorable name. In this case, tag the image with your Docker Hub username and name it rails-kubernetes or a name of your own choosing:

      • docker build -t your_dockerhub_user/rails-kubernetes .

      The . in the command specifies that the build context is the current directory.

      It will take a minute or two to build the image. Once it is complete, check your images:
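
      • docker images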

      You will see the following output:

      Output

      REPOSITORY                             TAG      IMAGE ID       CREATED       SIZE
      your_dockerhub_user/rails-kubernetes   latest   24f7e88b6ef2   2 days ago    606MB
      alpine                                 latest   d6e46aa2470d   6 weeks ago   5.57MB

      Next, log in to the Docker Hub account you created in the prerequisites:

      • docker login -u your_dockerhub_user

      When prompted, enter your Docker Hub account password. Logging in this way will create a ~/.docker/config.json file in your user’s home directory with your Docker Hub credentials.

      Push the application image to Docker Hub with the docker push command. Remember to replace your_dockerhub_user with your own Docker Hub username:

      • docker push your_dockerhub_user/rails-kubernetes

      You now have an application image that you can pull to run your application with Kubernetes. The next step will be to translate your application service definitions to Kubernetes objects.

      Step 3 — Translating Compose Services to Kubernetes Objects with kompose

      Our Docker Compose file, here called docker-compose.yml, lays out the definitions that will run our services with Compose. A service in Compose is a running container, and service definitions contain information about how each container image will run. In this step, we will translate these definitions to Kubernetes objects by using kompose to create yaml files. These files will contain specs for the Kubernetes objects that describe their desired state.

      We will use these files to create different types of objects: Services, which will ensure that the Pods running our containers remain accessible; Deployments, which will contain information about the desired state of our Pods; a PersistentVolumeClaim to provision storage for our database data; a ConfigMap for environment variables injected at runtime; and a Secret for our application’s database user and password. Some of these definitions will be in the files kompose will create for us, and others we will need to create ourselves.

      First, we will need to modify some of the definitions in our docker-compose.yml file to work with Kubernetes. We will include a reference to our newly-built application image in our app service definition and remove the bind mounts, volumes, and additional commands that we used to run the application container in development with Compose. Additionally, we’ll redefine both containers’ restart policies to be in line with the behavior Kubernetes expects.

      If you have followed the steps in this tutorial and checked out the compose-workflow branch with git, then you should have a docker-compose.yml file in your working directory.

      If you don’t have a docker-compose.yml then be sure to visit the previous tutorial in this series, Containerizing a Ruby on Rails Application for Development with Docker Compose, and paste the contents from the linked section into a new docker-compose.yml file.

      Open the file with nano or your favorite editor:
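
      • nano docker-compose.yml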

      The current definition for the app application service looks like this:

      ~/rails_project/docker-compose.yml

      . . .
      services:
        app:
          build:
            context: .
            dockerfile: Dockerfile
          depends_on:
            - database
            - redis
          ports:
            - "3000:3000"
          volumes:
            - .:/app
            - gem_cache:/usr/local/bundle/gems
            - node_modules:/app/node_modules
          env_file: .env
          environment:
            RAILS_ENV: development
      . . .
      

      Make the following edits to your service definition:

      • Replace the build: line with image: your_dockerhub_user/rails-kubernetes
      • Remove the following context: ., and dockerfile: Dockerfile lines.
      • Remove the volumes list.

      The finished service definition will now look like this:

      ~/rails_project/docker-compose.yml

      . . .
      services:
        app:
          image: your_dockerhub_user/rails-kubernetes
          depends_on:
            - database
            - redis
          ports:
            - "3000:3000"
          env_file: .env
          environment:
            RAILS_ENV: development
      . . .
      

      Next, scroll down to the database service definition and make the following edits:

      • Remove the - ./init.sql:/docker-entrypoint-initdb.d/init.sql volume line. Instead of using values from the local SQL file, we will pass the values for our POSTGRES_USER and POSTGRES_PASSWORD to the database container using the Secret we will create in Step 4.
      • Add a ports: section that will make PostgreSQL available inside your Kubernetes cluster on port 5432.
      • Add an environment: section with a PGDATA variable that points to a directory inside /var/lib/postgresql/data. This setting is required when PostgreSQL is configured to use block storage, since the database engine expects to find its data files in a sub-directory.

      The database service definition should look like this when you are finished editing it:

      ~/rails_project/docker-compose.yml

      . . .
        database:
          image: postgres:12.1
          volumes:
            - db_data:/var/lib/postgresql/data
          ports:
            - "5432:5432"
          environment:
            PGDATA: /var/lib/postgresql/data/pgdata
      . . .
      

      Next, edit the redis service definition to expose its default TCP port by adding a ports: section with the default 6379 port. Adding the ports: section will make Redis available inside your Kubernetes cluster. Your edited redis service should resemble the following:

      ~/rails_project/docker-compose.yml

      . . .
        redis:
          image: redis:5.0.7
          ports:
            - "6379:6379"
      

      After editing the redis section of the file, continue to the sidekiq service definition. Just as with the app service, you’ll need to switch from building a local docker image to pulling from Docker Hub. Make the following edits to your sidekiq service definition:

      • Replace the build: line with image: your_dockerhub_user/rails-kubernetes
      • Remove the following context: ., and dockerfile: Dockerfile lines.
      • Remove the volumes list.

      ~/rails_project/docker-compose.yml

      . . .
        sidekiq:
          image: your_dockerhub_user/rails-kubernetes
          depends_on:
            - app
            - database
            - redis
          env_file: .env
          environment:
              RAILS_ENV: development
          entrypoint: ./entrypoints/sidekiq-entrypoint.sh
      

      Finally, at the bottom of the file, remove the gem_cache and node_modules volumes from the top-level volumes key. The key will now look like this:

      ~/rails_project/docker-compose.yml

      . . .
      volumes:
        db_data:
      

      Save and close the file when you are finished editing.

      For reference, your completed docker-compose.yml file should contain the following:

      ~/rails_project/docker-compose.yml

      version: '3'
      
      services:
        app:
          image: your_dockerhub_user/rails-kubernetes
          depends_on:
              - database
              - redis
          ports:
              - "3000:3000"
          env_file: .env
          environment:
              RAILS_ENV: development
      
        database:
          image: postgres:12.1
          volumes:
              - db_data:/var/lib/postgresql/data
          ports:
              - "5432:5432"
          environment:
              PGDATA: /var/lib/postgresql/data/pgdata
      
        redis:
          image: redis:5.0.7
          ports:
              - "6379:6379"
      
        sidekiq:
          image: your_dockerhub_user/rails-kubernetes
          depends_on:
              - app
              - database
              - redis
          env_file: .env
          environment:
              RAILS_ENV: development
          entrypoint: ./entrypoints/sidekiq-entrypoint.sh
      
      volumes:
        db_data:
      

      Before translating our service definitions, we will need to write the .env file that kompose will use to create the ConfigMap with our non-sensitive information. Please see Step 2 of Containerizing a Ruby on Rails Application for Development with Docker Compose for a longer explanation of this file.

      In that tutorial, we added .env to our .gitignore file to ensure that it would not copy to version control. This means that it did not copy over when we cloned the rails-sidekiq repository in Step 2 of this tutorial. We will therefore need to recreate it now.

      Create the file:
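
      • nano .env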

      kompose will use this file to create a ConfigMap for our application. However, instead of assigning all of the variables from the app service definition in our Compose file, we will only add connection settings for PostgreSQL and Redis. We will assign the database name, username, and password separately when we manually create a Secret object in Step 4.

      Add the following port and database name information to the .env file. Feel free to rename your database if you would like:

      ~/rails_project/.env

      DATABASE_HOST=database
      DATABASE_PORT=5432
      REDIS_HOST=redis
      REDIS_PORT=6379
      

      Save and close the file when you are finished editing.

      You are now ready to create the files with your object specs. kompose offers multiple options for translating your resources. You can:

      • Create yaml files based on the service definitions in your docker-compose.yml file with kompose convert.
      • Create Kubernetes objects directly with kompose up.
      • Create a Helm chart with kompose convert -c.

      For now, we will convert our service definitions to yaml files and then add to and revise the files that kompose creates.

      Convert your service definitions to yaml files with the following command:
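
      • kompose convert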

      After you run this command, kompose will output information about the files it has created:

      Output

      INFO Kubernetes file "app-service.yaml" created
      INFO Kubernetes file "database-service.yaml" created
      INFO Kubernetes file "redis-service.yaml" created
      INFO Kubernetes file "app-deployment.yaml" created
      INFO Kubernetes file "env-configmap.yaml" created
      INFO Kubernetes file "database-deployment.yaml" created
      INFO Kubernetes file "db-data-persistentvolumeclaim.yaml" created
      INFO Kubernetes file "redis-deployment.yaml" created
      INFO Kubernetes file "sidekiq-deployment.yaml" created

      These include yaml files with specs for the Rails application Service, Deployment, and ConfigMap, as well as for the db-data PersistentVolumeClaim and PostgreSQL database Deployment. Also included are a Service and Deployment for Redis, along with a Deployment for Sidekiq.

      To keep these manifests out of the main directory for your Rails project, create a new directory called k8s-manifests and then use the mv command to move the generated files into it:

      • mkdir k8s-manifests
      • mv *.yaml k8s-manifests

      Finally, cd into the k8s-manifests directory. We’ll work from inside this directory from now on to keep things tidy:
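
      • cd k8s-manifests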

      These files are a good starting point, but in order for our application’s functionality to match the setup described in Containerizing a Ruby on Rails Application for Development with Docker Compose we will need to make a few additions and changes to the files that kompose has generated.

      Step 4 — Creating Kubernetes Secrets

      In order for our application to function in the way we expect, we will need to make a few modifications to the files that kompose has created. The first of these changes will be generating a Secret for our database user and password and adding it to our application and database Deployments. Kubernetes offers two ways of working with environment variables: ConfigMaps and Secrets. kompose has already created a ConfigMap with the non-confidential information we included in our .env file, so we will now create a Secret with our confidential information: our database name, username and password.

      The first step in manually creating a Secret will be to convert the data to base64, an encoding scheme that allows you to uniformly transmit data, including binary data.

      First convert the database name to base64 encoded data:

      • echo -n 'your_database_name' | base64

      Note down the encoded value.

      Next convert your database username:

      • echo -n 'your_database_username' | base64

      Again record the value you see in the output.

      Finally, convert your password:

      • echo -n 'your_database_password' | base64

      Take note of the value in the output here as well.
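
      If you’d like to double-check any of these values, you can reverse the operation by piping an encoded string to base64 --decode, replacing your_encoded_value with one of the values you noted down:

      • echo 'your_encoded_value' | base64 --decode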

      Open a file for the Secret:
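
      • nano secret.yaml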

      Note: Kubernetes objects are typically defined using YAML, which strictly forbids tabs and requires two spaces for indentation. If you would like to check the formatting of any of your yaml files, you can use a linter or test the validity of your syntax using kubectl create with the --dry-run and --validate flags:

      • kubectl create -f your_yaml_file.yaml --dry-run --validate=true

      In general, it is a good idea to validate your syntax before creating resources with kubectl.

      Add the following code to the file to create a Secret that will define your DATABASE_NAME, DATABASE_USER and DATABASE_PASSWORD using the encoded values you just created. Be sure to replace the highlighted placeholder values here with your encoded database name, username and password:

      ~/rails_project/k8s-manifests/secret.yaml

      apiVersion: v1
      kind: Secret
      metadata:
        name: database-secret
      data:
        DATABASE_NAME: your_database_name
        DATABASE_PASSWORD: your_encoded_password
        DATABASE_USER: your_encoded_username
      

      We have named the Secret object database-secret, but you are free to name it anything you would like.

      These secrets are used with the Rails application so that it can connect to PostgreSQL. However, the database itself needs to be initialized with these same values. So next, copy the three DATABASE_* lines and paste them at the end of the file. Edit those last three lines, changing the DATABASE prefix for each variable to POSTGRES. Finally, change the POSTGRES_NAME variable to read POSTGRES_DB.

      Your final secret.yaml file should contain the following:

      ~/rails_project/k8s-manifests/secret.yaml

      apiVersion: v1
      kind: Secret
      metadata:
        name: database-secret
      data:
        DATABASE_NAME: your_database_name
        DATABASE_PASSWORD: your_encoded_password
        DATABASE_USER: your_encoded_username
        POSTGRES_DB: your_database_name
        POSTGRES_PASSWORD: your_encoded_password
        POSTGRES_USER: your_encoded_username
      

      Save and close this file when you are finished editing. As you did with your .env file, be sure to add secret.yaml to your .gitignore file to keep it out of version control.

      With secret.yaml written, our next step will be to ensure that our application and database Deployments both use the values that we added to the file. Let’s start by adding references to the Secret to our application Deployment.

      Open the file called app-deployment.yaml:
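
      • nano app-deployment.yaml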

      The file’s container specifications include the following environment variables defined under the env key:

      ~/rails_project/k8s-manifests/app-deployment.yaml

      apiVersion: apps/v1
      kind: Deployment
      . . .
          spec:
            containers:
              - env:
                  - name: DATABASE_HOST
                    valueFrom:
                      configMapKeyRef:
                        key: DATABASE_HOST
                        name: env
                  - name: DATABASE_PORT
                    valueFrom:
                      configMapKeyRef:
                        key: DATABASE_PORT
                        name: env
                  - name: RAILS_ENV
                    value: development
                  - name: REDIS_HOST
                    valueFrom:
                      configMapKeyRef:
                        key: REDIS_HOST
                        name: env
                  - name: REDIS_PORT
                    valueFrom:
                      configMapKeyRef:
                        key: REDIS_PORT
                        name: env
      . . .
      

      We will need to add references to our Secret so that our application will have access to those values. Instead of including a configMapKeyRef key to point to our env ConfigMap, as is the case with the existing values, we’ll include a secretKeyRef key to point to the values in our database-secret secret.

      Add the following Secret references after the - name: REDIS_PORT variable section:

      ~/rails_project/k8s-manifests/app-deployment.yaml

      . . .
          spec:
            containers:
              - env:
              . . .  
                  - name: REDIS_PORT
                    valueFrom:
                      configMapKeyRef:
                        key: REDIS_PORT
                        name: env
                  - name: DATABASE_NAME
                    valueFrom:
                      secretKeyRef:
                        name: database-secret
                        key: DATABASE_NAME
                  - name: DATABASE_PASSWORD
                    valueFrom:
                      secretKeyRef:
                        name: database-secret
                        key: DATABASE_PASSWORD
                  - name: DATABASE_USER
                    valueFrom:
                      secretKeyRef:
                        name: database-secret
                        key: DATABASE_USER
      . . .
      
      

      Save and close the file when you are finished editing. As with your secret.yaml file, be sure to validate your edits using kubectl to ensure there are no issues with spaces, tabs, and indentation:

      • kubectl create -f app-deployment.yaml --dry-run --validate=true

      Output

      deployment.apps/app created (dry run)

      Next, we’ll add the same values to the database-deployment.yaml file.

      Open the file for editing:

      • nano database-deployment.yaml

      In this file, we will add references to our Secret for the following variable keys: POSTGRES_DB, POSTGRES_USER, and POSTGRES_PASSWORD. The postgres image makes these variables available so that you can modify the initialization of your database instance. POSTGRES_DB creates a default database that is available when the container starts, while POSTGRES_USER and POSTGRES_PASSWORD together create a privileged user that can access the created database.

      Using these values means that the user we create has access to all of the administrative and operational privileges of that role in PostgreSQL. When working in production, you will want to create a dedicated application user with appropriately scoped privileges.

      Under the POSTGRES_DB, POSTGRES_USER and POSTGRES_PASSWORD variables, add references to the Secret values:

      ~/rails_project/k8s-manifests/database-deployment.yaml

      apiVersion: apps/v1
      kind: Deployment
      . . .
          spec:
            containers:
              - env:
                  - name: PGDATA
                    value: /var/lib/postgresql/data/pgdata
                  - name: POSTGRES_DB
                    valueFrom:
                      secretKeyRef:
                        name: database-secret
                        key: POSTGRES_DB
                  - name: POSTGRES_PASSWORD
                    valueFrom:
                      secretKeyRef:
                        name: database-secret
                        key: POSTGRES_PASSWORD        
                  - name: POSTGRES_USER
                    valueFrom:
                      secretKeyRef:
                        name: database-secret
                        key: POSTGRES_USER
      . . .
      

      Save and close the file when you are finished editing. Again, be sure to lint your edited file using kubectl with the --dry-run --validate=true arguments.
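
      For instance, from your k8s-manifests directory:

      • kubectl create -f database-deployment.yaml --dry-run --validate=true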

      With your Secret in place, you can move on to creating the database Service and ensuring that your application container only attempts to connect to the database once it is fully set up and initialized.

      Step 5 — Modifying the PersistentVolumeClaim and Exposing the Application Frontend

      Before running our application, we will make two final changes to ensure that our database storage will be provisioned properly and that we can expose our application frontend using a LoadBalancer.

      First, let’s modify the storage resource defined in the PersistentVolumeClaim that kompose created for us. This Claim allows us to dynamically provision storage to manage our application’s state.

      To work with PersistentVolumeClaims, you must have a StorageClass created and configured to provision storage resources. In our case, because we are working with DigitalOcean Kubernetes, our default StorageClass provisioner is set to dobs.csi.digitalocean.com — DigitalOcean Block Storage.

      We can check this by typing:

      • kubectl get storageclass

      If you are working with a DigitalOcean cluster, you will see the following output:

      Output

      NAME                         PROVISIONER                 RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
      do-block-storage (default)   dobs.csi.digitalocean.com   Delete          Immediate           true                   76m

      If you are not working with a DigitalOcean cluster, you will need to create a StorageClass and configure a provisioner of your choice. For details about how to do this, please see the official documentation.
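
      As a minimal sketch, a StorageClass manifest has the following general shape; the your-storage-class name and example.com/your-provisioner value are placeholders that you would replace with a real provisioner for your platform:

      apiVersion: storage.k8s.io/v1
      kind: StorageClass
      metadata:
        name: your-storage-class
      provisioner: example.com/your-provisioner
      reclaimPolicy: Delete
      volumeBindingMode: Immediate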

      When kompose created db-data-persistentvolumeclaim.yaml, it set the storage resource to a size that does not meet the minimum size requirements of our provisioner. We will therefore need to modify our PersistentVolumeClaim to use the minimum viable DigitalOcean Block Storage unit: 1GB. Please feel free to modify this to meet your storage requirements.

      Open db-data-persistentvolumeclaim.yaml:

      • nano db-data-persistentvolumeclaim.yaml

      Replace the storage value with 1Gi:

      ~/rails_project/k8s-manifests/db-data-persistentvolumeclaim.yaml

      apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        creationTimestamp: null
        labels:
          io.kompose.service: db-data
        name: db-data
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 1Gi
      status: {}
      

      Also note that the accessModes value of ReadWriteOnce means that the volume provisioned as a result of this Claim will be mounted as read-write by a single node only. Please see the documentation for more information about different access modes.

      Save and close the file when you are finished.

      Next, open app-service.yaml:

      • nano app-service.yaml

      We are going to expose this Service externally using a DigitalOcean Load Balancer. If you are not using a DigitalOcean cluster, please consult the relevant documentation from your cloud provider for information about their load balancers. Alternatively, you can follow the official Kubernetes documentation on setting up a highly available cluster with kubeadm, but in this case you will not be able to use PersistentVolumeClaims to provision storage.

      Within the Service spec, specify LoadBalancer as the Service type:

      ~/rails_project/k8s-manifests/app-service.yaml

      apiVersion: v1
      kind: Service
      . . .
      spec:
        type: LoadBalancer
        ports:
      . . .
      

      When we create the app Service, a load balancer will be automatically created, providing us with an external IP where we can access our application.

      Save and close the file when you are finished editing.

      With all of our files in place, we are ready to start and test the application.

      Note:
      If you would like to compare your edited Kubernetes manifests against a set of reference files to be certain that your changes match this tutorial, the companion GitHub repository contains a set of tested manifests. You can compare each file individually, or switch your local git branch to the kubernetes-workflow branch.

      If you opt to switch branches, be sure to copy your secret.yaml file into the newly checked-out version, since we added it to .gitignore earlier in the tutorial.
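
      For example, assuming you are working from the project root, you could stash a copy of the file and restore it after switching branches:

      • cp k8s-manifests/secret.yaml /tmp/secret.yaml
      • git checkout kubernetes-workflow
      • cp /tmp/secret.yaml k8s-manifests/secret.yaml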

      Step 6 — Starting and Accessing the Application

      It’s time to create our Kubernetes objects and test that our application is working as expected.

      To create the objects we’ve defined, we’ll use kubectl create with the -f flag, which will allow us to specify the files that kompose created for us, along with the files we wrote. Run the following command to create the Rails application and PostgreSQL database, Redis cache, and Sidekiq Services and Deployments, along with your Secret, ConfigMap, and PersistentVolumeClaim:

      • kubectl create -f app-deployment.yaml,app-service.yaml,database-deployment.yaml,database-service.yaml,db-data-persistentvolumeclaim.yaml,env-configmap.yaml,redis-deployment.yaml,redis-service.yaml,secret.yaml,sidekiq-deployment.yaml

      You will receive the following output, indicating that the objects have been created:

      Output

      deployment.apps/app created
      service/app created
      deployment.apps/database created
      service/database created
      persistentvolumeclaim/db-data created
      configmap/env created
      deployment.apps/redis created
      service/redis created
      secret/database-secret created
      deployment.apps/sidekiq created

      To check that your Pods are running, type:

      • kubectl get pods

      You don’t need to specify a Namespace here, since we created our objects in the default Namespace. If you are working with multiple Namespaces, be sure to include the -n flag, along with the name of your Namespace, when running this command.
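
      For example, to list Pods in a hypothetical Namespace called your_namespace, you would run:

      • kubectl get pods -n your_namespace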

      You will see output similar to the following while your database container is starting (the status will be either Pending or ContainerCreating):

      Output

      NAME                       READY   STATUS    RESTARTS   AGE
      app-854d645fb9-9hv7w       1/1     Running   0          23s
      database-c77d55fbb-bmfm8   0/1     Pending   0          23s
      redis-7d65467b4d-9hcxk     1/1     Running   0          23s
      sidekiq-867f6c9c57-mcwks   1/1     Running   0          23s

      Once the database container is started, you will have output like this:

      Output

      NAME                       READY   STATUS    RESTARTS   AGE
      app-854d645fb9-9hv7w       1/1     Running   0          30s
      database-c77d55fbb-bmfm8   1/1     Running   0          30s
      redis-7d65467b4d-9hcxk     1/1     Running   0          30s
      sidekiq-867f6c9c57-mcwks   1/1     Running   0          30s

      The Running STATUS indicates that your Pods are bound to nodes and that the containers associated with those Pods are running. READY indicates how many containers in a Pod are running. For more information, please consult the documentation on Pod lifecycles.

      Note:
      If you see unexpected phases in the STATUS column, remember that you can troubleshoot your Pods with the following commands:

      • kubectl describe pods your_pod
      • kubectl logs your_pod
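
      For example, to inspect the database Pod from the earlier output, you could run:

      • kubectl describe pods database-c77d55fbb-bmfm8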

      Now that your application is up and running, the final required step is to run Rails’ database migrations. This step will load a schema into the PostgreSQL database for the demo application.

      To run pending migrations, you’ll use kubectl exec to call the rake db:migrate command in the running application Pod.

      First, find the name of the application Pod with the following command:

      • kubectl get pods

      Find the Pod that corresponds to your application, like the app-854d645fb9-9hv7w Pod in the following output:

      Output

      NAME                       READY   STATUS    RESTARTS   AGE
      app-854d645fb9-9hv7w       1/1     Running   0          30s
      database-c77d55fbb-bmfm8   1/1     Running   0          30s
      redis-7d65467b4d-9hcxk     1/1     Running   0          30s
      sidekiq-867f6c9c57-mcwks   1/1     Running   0          30s

      With that pod name noted down, you can now run the kubectl exec command to complete the database migration step.

      Run the migrations with this command:

      • kubectl exec your_app_pod_name -- rake db:migrate

      You should receive output similar to the following, which indicates that the database schema has been loaded:

      Output

      == 20190927142853 CreateSharks: migrating =====================================
      -- create_table(:sharks)
         -> 0.0190s
      == 20190927142853 CreateSharks: migrated (0.0208s) ============================

      == 20190927143639 CreatePosts: migrating ======================================
      -- create_table(:posts)
         -> 0.0398s
      == 20190927143639 CreatePosts: migrated (0.0421s) =============================

      == 20191120132043 CreateEndangereds: migrating ================================
      -- create_table(:endangereds)
         -> 0.8359s
      == 20191120132043 CreateEndangereds: migrated (0.8367s) =======================
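
      Alternatively, you can open an interactive shell in the application Pod and run the migration from inside the container. This assumes that a shell such as sh is available in the application image:

      • kubectl exec -it your_app_pod_name -- sh

      Then, from the prompt inside the container:

      • rake db:migrate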

      With your containers running and data loaded, you can now access the application. To get the IP for the app LoadBalancer, type:

      • kubectl get svc

      You will receive output like the following:

      Output

      NAME         TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
      app          LoadBalancer   10.245.73.142   your_lb_ip    3000:31186/TCP   21m
      database     ClusterIP      10.245.155.87   <none>        5432/TCP         21m
      kubernetes   ClusterIP      10.245.0.1      <none>        443/TCP          21m
      redis        ClusterIP      10.245.119.67   <none>        6379/TCP         21m

      The EXTERNAL-IP associated with the app Service is the IP address where you can access the application. If you see a <pending> status in the EXTERNAL-IP column, this means that your load balancer is still being created.
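
      Rather than re-running the command manually, you can add the -w flag so that kubectl watches the Service list and prints updates as the EXTERNAL-IP field is populated:

      • kubectl get svc -w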

      Once you see an IP in that column, navigate to it in your browser: http://your_lb_ip:3000.

      You should see the following landing page:

      Application Landing Page

      Click on the Get Shark Info button. You will see a page with a button to create a new shark:

      Shark Info Form

      Click it, and when prompted, enter the username and password from earlier in the tutorial series. If you did not change these values, the defaults are sammy and shark, respectively.

      In the form, add a shark of your choosing. To demonstrate, we will add Megalodon Shark to the Shark Name field, and Ancient to the Shark Character field:

      Filled Shark Form

      Click on the Submit button. You will see a page with this shark information displayed back to you:

      Shark Output

      You now have a single instance setup of a Rails application with a PostgreSQL database running on a Kubernetes cluster. You also have a Redis cache and a Sidekiq worker to process data that users submit.

      Conclusion

      The files you have created in this tutorial are a good starting point to build from as you move toward production. As you develop your application, you can work on implementing the following:


