
      How to Install and Configure Laravel with LEMP on Ubuntu 18.04


      Introduction

      Laravel is an open-source PHP framework that provides a set of tools and resources to build modern PHP applications. With a complete ecosystem leveraging its built-in features, Laravel’s popularity has grown rapidly in the past few years, with many developers adopting it as their framework of choice for a streamlined development process.

      In this guide, you’ll install and configure a new Laravel application on an Ubuntu 18.04 server, using Composer to download and manage the framework dependencies. When you’re finished, you’ll have a functional Laravel demo application pulling content from a MySQL database.

      Prerequisites

      In order to complete this guide, you will need access to an Ubuntu 18.04 server as a non-root user with sudo privileges, with a LEMP stack (Linux, Nginx, MySQL, and PHP) and Composer already installed.

      Step 1 — Installing Required PHP modules

      Before you can install Laravel, you need to install a few PHP modules that are required by the framework. We’ll use apt to install the php-mbstring, php-xml and php-bcmath PHP modules. These PHP extensions provide extra support for dealing with character encoding, XML and precision mathematics.

      If this is the first time using apt in this session, you should first run the update command to update the package manager cache:
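
      • sudo apt update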

      Now you can install the required packages with:

      • sudo apt install php-mbstring php-xml php-bcmath

      Your system is now ready to execute Laravel's installation via Composer, but before doing so, you'll need a database for your application.

      Step 2 — Creating a Database for the Application

      To demonstrate Laravel's basic installation and usage, we'll create a sample travel list application to show a list of places a user would like to travel to, and a list of places that they already visited. This can be stored in a simple places table with a field for locations that we'll call name and another field to mark them as visited or not visited, which we'll call visited. Additionally, we'll include an id field to uniquely identify each entry.

      To connect to the database from the Laravel application, we'll create a dedicated MySQL user, and grant this user full privileges over the travel_list database.

      To get started, log in to the MySQL console as the root database user with:
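
      • sudo mysql

      If your root MySQL account uses password authentication instead, log in with mysql -u root -p.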

      To create a new database, run the following command from your MySQL console:

      • CREATE DATABASE travel_list;

      Now you can create a new user and grant them full privileges on the custom database you've just created. In this example, we're creating a user named travel_user with the password password, though you should change this to a secure password of your choosing:

      • GRANT ALL ON travel_list.* TO 'travel_user'@'localhost' IDENTIFIED BY 'password' WITH GRANT OPTION;

      This will give the travel_user user full privileges over the travel_list database, while preventing this user from creating or modifying other databases on your server.

      Following this, exit the MySQL shell:
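
      • exit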

      You can now test if the new user has the proper permissions by logging in to the MySQL console again, this time using the custom user credentials:
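
      • mysql -u travel_user -p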

      Note the -p flag in this command, which will prompt you for the password used when creating the travel_user user. After logging in to the MySQL console, confirm that you have access to the travel_list database:
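
      • SHOW DATABASES;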

      This will give you the following output:

      Output

      +--------------------+
      | Database           |
      +--------------------+
      | information_schema |
      | travel_list        |
      +--------------------+
      2 rows in set (0.01 sec)

      Next, create a table named places in the travel_list database. From the MySQL console, run the following statement:

      • CREATE TABLE travel_list.places (
      • id INT AUTO_INCREMENT,
      • name VARCHAR(255),
      • visited BOOLEAN,
      • PRIMARY KEY(id)
      • );

      Now, populate the places table with some sample data:

      • INSERT INTO travel_list.places (name, visited)
      • VALUES ("Tokyo", false),
      • ("Budapest", true),
      • ("Nairobi", false),
      • ("Berlin", true),
      • ("Lisbon", true),
      • ("Denver", false),
      • ("Moscow", false),
      • ("Olso", false),
      • ("Rio", true),
      • ("Cincinati", false),

      To confirm that the data was successfully saved to your table, run:

      • SELECT * FROM travel_list.places;

      You will see output similar to this:

      Output

      +----+-----------+---------+
      | id | name      | visited |
      +----+-----------+---------+
      |  1 | Tokyo     |       0 |
      |  2 | Budapest  |       1 |
      |  3 | Nairobi   |       0 |
      |  4 | Berlin    |       1 |
      |  5 | Lisbon    |       1 |
      |  6 | Denver    |       0 |
      |  7 | Moscow    |       0 |
      |  8 | Oslo      |       0 |
      |  9 | Rio       |       1 |
      | 10 | Cincinati |       0 |
      +----+-----------+---------+
      10 rows in set (0.00 sec)

      After confirming that you have valid data in your test table, you can exit the MySQL console:
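
      • exit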

      You're now ready to create the application and configure it to connect to the new database.

      Step 3 — Creating a New Laravel Application

      You will now create a new Laravel application using the composer create-project command. This Composer command is typically used to bootstrap new applications based on existing frameworks and content management systems.

      Throughout this guide, we'll use travel_list as an example application, but you are free to change this to something else. The travel_list application will display a list of locations pulled from a local MySQL server, intended to demonstrate Laravel's basic configuration and confirm that you're able to connect to the database.

      First, go to your user's home directory:
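
      • cd ~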

      The following command will create a new travel_list directory containing a barebones Laravel application based on default settings:

      • composer create-project --prefer-dist laravel/laravel travel_list

      You will see output similar to this:

      Output

      Installing laravel/laravel (v5.8.17)
       - Installing laravel/laravel (v5.8.17): Downloading (100%)
      Created project in travel_list
      > @php -r "file_exists('.env') || copy('.env.example', '.env');"
      Loading composer repositories with package information
      Updating dependencies (including require-dev)
      Package operations: 80 installs, 0 updates, 0 removals
       - Installing symfony/polyfill-ctype (v1.11.0): Downloading (100%)
       - Installing phpoption/phpoption (1.5.0): Downloading (100%)
       - Installing vlucas/phpdotenv (v3.4.0): Downloading (100%)
       - Installing symfony/css-selector (v4.3.2): Downloading (100%)
      ...

      When the installation is finished, access the application's directory and run Laravel's artisan command to verify that all components were successfully installed:

      • cd travel_list
      • php artisan

      You'll see output similar to this:

      Output

      Laravel Framework 5.8.29

      Usage:
        command [options] [arguments]

      Options:
        -h, --help            Display this help message
        -q, --quiet           Do not output any message
        -V, --version         Display this application version
            --ansi            Force ANSI output
            --no-ansi         Disable ANSI output
        -n, --no-interaction  Do not ask any interactive question
            --env[=ENV]       The environment the command should run under
        -v|vv|vvv, --verbose  Increase the verbosity of messages: 1 for normal output, 2 for more verbose output and 3 for debug
      (...)

      This output confirms that the application files are in place, and the Laravel command-line tools are working as expected. However, we still need to configure the application to set up the database and a few other details.

      Step 4 — Configuring Laravel

      The Laravel configuration files are located in a directory called config, inside the application's root directory. Additionally, when you install Laravel with Composer, it creates an environment file. This file contains settings specific to the environment the application is currently running in, and takes precedence over the values set in the regular configuration files located in the config directory. Each installation on a new environment requires a tailored environment file to define things such as database connection settings, debug options, and the application URL, among other items that may vary depending on the environment the application is running in.

      Warning: The environment configuration file contains sensitive information about your server, including database credentials and security keys. For that reason, you should never share this file publicly.

      We'll now edit the .env file to customize the configuration options for the current application environment.

      Open the .env file using your command line editor of choice. Here we'll use nano:
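
      • nano .env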

      Even though there are many configuration variables in this file, you don't need to set up all of them now. The following list contains an overview of the variables that require immediate attention:

      • APP_NAME: Application name, used for notifications and messages.
      • APP_ENV: Current application environment.
      • APP_KEY: Used for generating salts and hashes, this unique key is automatically created when installing Laravel via Composer, so you don't need to change it.
      • APP_DEBUG: Whether or not to show debug information on the client side.
      • APP_URL: Base URL for the application, used for generating application links.
      • DB_DATABASE: Database name.
      • DB_USERNAME: Username to connect to the database.
      • DB_PASSWORD: Password to connect to the database.

      By default, these values are configured for a local development environment that uses Homestead, a prepackaged Vagrant box provided by Laravel. We'll change these values to reflect the current environment settings of our example application.

      If you are installing Laravel in a development or testing environment, you can leave the APP_DEBUG option enabled, as this will give you important debug information while testing the application from a browser. The APP_ENV variable should be set to development or testing in this case.

      If you are installing Laravel in a production environment, you should disable the APP_DEBUG option, because it exposes sensitive information about your application to end users. The APP_ENV variable should be set to production in this case.

      The following .env file sets up our example application for development:

      Note: The APP_KEY variable contains a unique key that was auto generated when you installed Laravel via Composer. You don't need to change this value. If you want to generate a new secure key, you can use the php artisan key:generate command.

      /var/www/travel_list/.env

      APP_NAME=TravelList
      APP_ENV=development
      APP_KEY=APPLICATION_UNIQUE_KEY_DONT_COPY
      APP_DEBUG=true
      APP_URL=http://domain_or_IP
      
      LOG_CHANNEL=stack
      
      DB_CONNECTION=mysql
      DB_HOST=127.0.0.1
      DB_PORT=3306
      DB_DATABASE=travel_list
      DB_USERNAME=travel_user
      DB_PASSWORD=password
      
      ...
      

      Adjust your variables accordingly. When you are done editing, save and close the file to keep your changes. If you're using nano, you can do that with CTRL+X, then Y and Enter to confirm.

      Your Laravel application is now set up, but we still need to configure the web server in order to be able to access it from a browser. In the next step, we'll configure Nginx to serve your Laravel application.

      Step 5 — Setting Up Nginx

      We have installed Laravel in a local folder of your remote user's home directory, and while this works well for local development environments, it's not a recommended practice for web servers that are open to the public internet. We'll move the application folder to /var/www, which is the usual location for web applications running on Nginx.

      First, use the mv command to move the application folder with all its contents to /var/www/travel_list:

      • sudo mv ~/travel_list /var/www/travel_list

      Now we need to give the web server user write access to the storage and cache folders, where Laravel stores application-generated files:

      • sudo chown -R www-data:www-data /var/www/travel_list/storage
      • sudo chown -R www-data:www-data /var/www/travel_list/bootstrap/cache

      The application files are now in order, but we still need to configure Nginx to serve the content. To do this, we'll create a new virtual host configuration file at /etc/nginx/sites-available:

      • sudo nano /etc/nginx/sites-available/travel_list

      The following configuration file contains the recommended settings for Laravel applications on Nginx:

      /etc/nginx/sites-available/travel_list

      server {
          listen 80;
          server_name server_domain_or_IP;
          root /var/www/travel_list/public;
      
          add_header X-Frame-Options "SAMEORIGIN";
          add_header X-XSS-Protection "1; mode=block";
          add_header X-Content-Type-Options "nosniff";
      
          index index.html index.htm index.php;
      
          charset utf-8;
      
          location / {
              try_files $uri $uri/ /index.php?$query_string;
          }
      
          location = /favicon.ico { access_log off; log_not_found off; }
          location = /robots.txt  { access_log off; log_not_found off; }
      
          error_page 404 /index.php;
      
          location ~ \.php$ {
              fastcgi_pass unix:/var/run/php/php7.2-fpm.sock;
              fastcgi_index index.php;
              fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
              include fastcgi_params;
          }
      
          location ~ /\.(?!well-known).* {
              deny all;
          }
      }
      

      Copy this content to your /etc/nginx/sites-available/travel_list file and, if necessary, adjust the highlighted values to align with your own configuration. Save and close the file when you're done editing.

      To activate the new virtual host configuration file, create a symbolic link to travel_list in sites-enabled:

      • sudo ln -s /etc/nginx/sites-available/travel_list /etc/nginx/sites-enabled/

      Note: If you have another virtual host file that was previously configured for the same server_name used in the travel_list virtual host, you might need to deactivate the old configuration by removing the corresponding symbolic link inside /etc/nginx/sites-enabled/.

      To confirm that the configuration doesn't contain any syntax errors, you can use:
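
      • sudo nginx -t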

      You should see output like this:

      Output

      • nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
      • nginx: configuration file /etc/nginx/nginx.conf test is successful

      To apply the changes, reload Nginx with:

      • sudo systemctl reload nginx

      Now go to your browser and access the application using the server's domain name or IP address, as defined by the server_name directive in your configuration file:

      http://server_domain_or_IP
      

      You will see a page like this:

      Laravel splash page

      That confirms your Nginx server is properly configured to serve Laravel. From this point, you can start building up your application on top of the skeleton provided by the default installation.

      In the next step, we'll modify the application's main route to query for data in the database using Laravel's DB facade.

      Step 6 — Customizing the Main Page

      Assuming you've followed all the steps in this guide so far, you should have a working Laravel application and a database table named places containing some sample data.

      We'll now edit the main application route to query the database and return its contents to the application's view.

      Open the main route file, routes/web.php:
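
      • nano routes/web.php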

      This file comes by default with the following content:

      routes/web.php

      <?php
      
      /*
      |--------------------------------------------------------------------------
      | Web Routes
      |--------------------------------------------------------------------------
      |
      | Here is where you can register web routes for your application. These
      | routes are loaded by the RouteServiceProvider within a group which
      | contains the "web" middleware group. Now create something great!
      |
      */
      
      Route::get('/', function () {
          return view('welcome');
      });
      
      

      Routes are defined within this file using the static method Route::get, which receives a path and a callback function as arguments.

      The following code replaces the main route callback function. It makes two queries to the database, using the visited flag to filter results, and returns the results to a view named travel_list, which we're going to create next. Copy this content to your routes/web.php file, replacing the code that is already there:

      routes/web.php

      <?php
      
      use Illuminate\Support\Facades\DB;
      
      Route::get('/', function () {
        $visited = DB::select('select * from places where visited = ?', [1]); 
        $togo = DB::select('select * from places where visited = ?', [0]);
      
        return view('travel_list', ['visited' => $visited, 'togo' => $togo ] );
      });
      

      Save and close the file when you're done editing. We'll now create the view that will render the database results to the user. Create a new view file inside resources/views:

      • nano resources/views/travel_list.blade.php

      The following template creates two lists of places based on the variables visited and togo. Copy this content to your new view file:

      resources/views/travel_list.blade.php

      <html>
      <head>
          <title>Travel List</title>
      </head>
      
      <body>
          <h1>My Travel Bucket List</h1>
          <h2>Places I'd Like to Visit</h2>
          <ul>
            @foreach ($togo as $newplace)
              <li>{{ $newplace->name }}</li>
            @endforeach
          </ul>
      
          <h2>Places I've Already Been To</h2>
          <ul>
                @foreach ($visited as $place)
                      <li>{{ $place->name }}</li>
                @endforeach
          </ul>
      </body>
      </html>
      

      Save and close the file when you're done. Now go to your browser and reload the application. You'll see a page like this:

      Demo Laravel Application

      You now have a functional Laravel application pulling content from a MySQL database.

      Conclusion

      In this tutorial, you've set up a new Laravel application on top of a LEMP stack (Linux, Nginx, MySQL and PHP), running on an Ubuntu 18.04 server. You've also customized your default route to query for database content and display the results in a custom view.

      From here, you can create new routes and views for any additional pages your application needs. Check the official Laravel documentation for more information on routes, views, and database support. If you're deploying to production, you should also check the optimization section for a few different ways in which you can improve your application's performance.

      For improved security, you should consider installing a TLS/SSL certificate for your server, allowing it to serve content over HTTPS. To this end, you can follow our guide on how to secure your Nginx installation with Let's Encrypt on Ubuntu 18.04.




      How To Configure a Galera Cluster with MariaDB on Debian 10 Servers


      The author selected the Free and Open Source Fund to receive a donation as part of the Write for DOnations program.

      Introduction

      Clustering adds high availability to your database by distributing changes to different servers. In the event that one of the instances fails, others are quickly available to continue serving.

      Clusters come in two general configurations, active-passive and active-active. In active-passive clusters, all writes are done on a single active server and then copied to one or more passive servers that are poised to take over only in the event of an active server failure. Some active-passive clusters also allow SELECT operations on passive nodes. In an active-active cluster, every node is read-write and a change made to one is replicated to all.

      MariaDB is an open source relational database system that is fully compatible with the popular MySQL RDBMS system. You can read the official documentation for MariaDB at this page. Galera is a database clustering solution that enables you to set up multi-master clusters using synchronous replication. Galera automatically handles keeping the data on different nodes in sync while allowing you to send read and write queries to any of the nodes in the cluster. You can learn more about Galera at the official documentation page.

      In this guide, you will configure an active-active MariaDB Galera cluster. For demonstration purposes, you will configure and test three Debian 10 servers that will act as nodes in the cluster. This is the smallest configurable cluster.

      Prerequisites

      To follow along, you will need a DigitalOcean account, in addition to the following:

      • Three Debian 10 servers with private networking enabled, each with a non-root user with sudo privileges.

      While the steps in this tutorial have been written for and tested against DigitalOcean Droplets, much of this guide should also be applicable to non-DigitalOcean servers with private networking enabled.

      Step 1 — Adding the MariaDB Repositories to All Servers

      In this step, you will add the relevant MariaDB package repositories to each of your three servers so that you will be able to install the right version of MariaDB used in this tutorial. Once the repositories are updated on all three servers, you will be ready to install MariaDB.

      One thing to note about MariaDB is that it originated as a drop-in replacement for MySQL, so in many configuration files and startup scripts, you’ll see mysql rather than mariadb. For consistency’s sake, we will use mysql in this guide where either could work.

      In this tutorial, you will use MariaDB version 10.4. Since this version isn’t included in the default Debian repositories, you’ll start by adding the external Debian repository maintained by the MariaDB project to all three of your servers.

      To add the repository, you will first need to install the dirmngr and software-properties-common packages. dirmngr is a server for managing repository certificates and keys. software-properties-common is a package that allows easy addition and updates of source repository locations. Install the two packages by running:

      • sudo apt install dirmngr software-properties-common

      Note: MariaDB is a well-respected provider, but not all external repositories are reliable. Be sure to install only from trusted sources.

      You’ll add the MariaDB repository key with the apt-key command, which the APT package manager will use to verify that the package is authentic:

      • sudo apt-key adv --recv-keys --keyserver hkp://keyserver.ubuntu.com:80 0xF1656F24C74CD1D8

      Once you have the trusted key in the database, you can add the repository with the following command:

      • sudo add-apt-repository 'deb [arch=amd64] http://nyc2.mirrors.digitalocean.com/mariadb/repo/10.4/debian buster main'

      After adding the repository, run apt update in order to include package manifests from the new repository:
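
      • sudo apt update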

      Once you have completed this step on your first server, repeat for your second and third servers.

      Now that you have successfully added the package repository on all three of your servers, you're ready to install MariaDB in the next section.

      Step 2 — Installing MariaDB on All Servers

      In this step, you will install the actual MariaDB packages on your three servers.

      Beginning with version 10.1, the MariaDB Server and MariaDB Galera Server packages are combined, so installing mariadb-server will automatically install Galera and several dependencies:

      • sudo apt install mariadb-server

      You will be asked to confirm whether you would like to proceed with the installation. Enter yes to continue with the installation.

      From MariaDB version 10.4 onwards, the root MariaDB user does not have a password by default. To set a password for the root user, start by logging into MariaDB:
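
      • sudo mysql -u root -p

      When prompted for a password, just press ENTER, since the root account doesn't have one yet.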

      Once you're inside the MariaDB shell, change the password by executing the following statement:

      • set password = password("your_password");

      You will see the following output indicating that the password was set correctly:

      Output

      Query OK, 0 rows affected (0.001 sec)

      Exit the MariaDB shell by running the following command:
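
      • exit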

      If you would like to learn more about SQL or need a quick refresher, check out our MySQL tutorial.

      You now have all of the pieces necessary to begin configuring the cluster, but since you'll be relying on rsync in later steps, make sure it's installed:
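
      • sudo apt install rsync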

      This will confirm that the newest version of rsync is already available or prompt you to upgrade or install it.

      Once you have installed MariaDB and set the root password on your first server, repeat these steps for your other two servers.

      Now that you have installed MariaDB successfully on each of the three servers, you can proceed to the configuration step in the next section.

      Step 3 — Configuring the First Node

      In this step, you will configure your first node. Each node in the cluster needs to have a nearly identical configuration. Because of this, you will do all of the configuration on your first machine, and then copy it to the other nodes.

      By default, MariaDB is configured to check the /etc/mysql/conf.d directory to get additional configuration settings from files ending in .cnf. Create a file in this directory with all of your cluster-specific directives:

      • sudo nano /etc/mysql/conf.d/galera.cnf

      Add the following configuration into the file. The configuration specifies different cluster options, details about the current server and the other servers in the cluster, and replication-related settings. Note that the IP addresses in the configuration are the private addresses of your respective servers; replace the highlighted lines with the appropriate IP addresses.

      /etc/mysql/conf.d/galera.cnf

      [mysqld]
      binlog_format=ROW
      default-storage-engine=innodb
      innodb_autoinc_lock_mode=2
      bind-address=0.0.0.0
      
      # Galera Provider Configuration
      wsrep_on=ON
      wsrep_provider=/usr/lib/galera/libgalera_smm.so
      
      # Galera Cluster Configuration
      wsrep_cluster_name="test_cluster"
      wsrep_cluster_address="gcomm://First_Node_IP,Second_Node_IP,Third_Node_IP"
      
      # Galera Synchronization Configuration
      wsrep_sst_method=rsync
      
      # Galera Node Configuration
      wsrep_node_address="This_Node_IP"
      wsrep_node_name="This_Node_Name"
      
      • The first section modifies or re-asserts MariaDB/MySQL settings that will allow the cluster to function correctly. For example, Galera won’t work with MyISAM or similar non-transactional storage engines, and mysqld must not be bound to the IP address for localhost. You can learn about the settings in more detail on the Galera Cluster system configuration page.
      • The "Galera Provider Configuration" section configures the MariaDB components that provide a WriteSet replication API. This means Galera in your case, since Galera is a wsrep (WriteSet Replication) provider. You specify the general parameters to configure the initial replication environment. This doesn't require any customization, but you can learn more about Galera configuration options.
      • The "Galera Cluster Configuration" section defines the cluster, identifying the cluster members by IP address or resolvable domain name and creating a name for the cluster to ensure that members join the correct group. You can change the wsrep_cluster_name to something more meaningful than test_cluster or leave it as-is, but you must update wsrep_cluster_address with the private IP addresses of your three servers.
      • The "Galera Synchronization Configuration" section defines how the cluster will communicate and synchronize data between members. This is used only for the state transfer that happens when a node comes online. For your initial setup, you are using rsync, because it's commonly available and does what you'll need for now.
      • The "Galera Node Configuration" section clarifies the IP address and the name of the current server. This is helpful when trying to diagnose problems in logs and for referencing each server in multiple ways. The wsrep_node_address must match the address of the machine you're on, but you can choose any name you want in order to help you identify the node in log files.

      When you are satisfied with your cluster configuration file, copy the contents into your clipboard, save and close the file. With the nano text editor, you can do this by pressing CTRL+X, typing y, and pressing ENTER.

      Now that you have configured your first node successfully, you can move on to configuring the remaining nodes in the next section.

      Step 4 — Configuring the Remaining Nodes

      In this step, you will configure the remaining two nodes. On your second node, open the configuration file:

      • sudo nano /etc/mysql/conf.d/galera.cnf

      Paste in the configuration you copied from the first node, then update the Galera Node Configuration to use the IP address or resolvable domain name for the specific node you're setting up. Finally, update its name, which you can set to whatever helps you identify the node in your log files:

      /etc/mysql/conf.d/galera.cnf

      . . .
      # Galera Node Configuration
      wsrep_node_address="This_Node_IP"
      wsrep_node_name="This_Node_Name"
      . . .
      

      Save and exit the file.

      Once you have completed these steps, repeat them on the third node.

      You're almost ready to bring up the cluster, but before you do, make sure that the appropriate ports are open in your firewall.

      Step 5 — Opening the Firewall on Every Server

      In this step, you will configure your firewall so that the ports required for inter-node communication are open. On every server, check the status of the firewall by running:
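
      • sudo ufw status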

      In this case, only SSH is allowed through:

      Output

      Status: active

      To                         Action      From
      --                         ------      ----
      OpenSSH                    ALLOW       Anywhere
      OpenSSH (v6)               ALLOW       Anywhere (v6)

      Since only SSH traffic is permitted in this case, you’ll need to add rules for MySQL and Galera traffic. If you tried to start the cluster, it would fail because of firewall rules.

      Galera can make use of four ports:

      • 3306 For MySQL client connections and State Snapshot Transfer that use the mysqldump method.
      • 4567 For Galera Cluster replication traffic. Multicast replication uses both UDP transport and TCP on this port.
      • 4568 For Incremental State Transfer.
      • 4444 For all other State Snapshot Transfer.

      In this example, you’ll open all four ports while you do your setup. Once you've confirmed that replication is working, you'd want to close any ports you're not actually using and restrict traffic to just servers in the cluster.

      Open the ports with the following command:

      • sudo ufw allow 3306,4567,4568,4444/tcp
      • sudo ufw allow 4567/udp

      Note: Depending on what else is running on your servers you might want to restrict access right away. The UFW Essentials: Common Firewall Rules and Commands guide can help with this.

      After you have configured your firewall on the first node, create the same firewall settings on the second and third node.

      Now that you have configured the firewalls successfully, you're ready to start the cluster in the next step.

      Step 6 — Starting the Cluster

      In this step, you will start your MariaDB cluster. To begin, you need to stop the running MariaDB service so that you can bring your cluster online.

      Stop MariaDB on All Three Servers

      Use the following command on all three servers to stop MariaDB so that you can bring them back up in a cluster:

      • sudo systemctl stop mysql

      systemctl doesn't display the outcome of all service management commands, so to be sure you succeeded, use the following command:

      • sudo systemctl status mysql

      If the last line looks something like the following, the command was successful:

      Output

      . . . Apr 26 03:34:23 galera-node-01 systemd[1]: Stopped MariaDB 10.4.4 database server.

      Once you've shut down mysql on all of the servers, you're ready to proceed.

      Bring Up the First Node

      To bring up the first node, you'll need to use a special startup script. The way you've configured your cluster, each node that comes online tries to connect to at least one other node specified in its galera.cnf file to get its initial state. Without using the galera_new_cluster script that allows systemd to pass the --wsrep-new-cluster parameter, a normal systemctl start mysql would fail because there are no nodes running for the first node to connect with.
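
      Start the cluster on your first node by running the script:

      • sudo galera_new_cluster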

      This command will not display any output on successful execution. When this script succeeds, the node is registered as part of the cluster, and you can see it with the following command:

      • mysql -u root -p -e "SHOW STATUS LIKE 'wsrep_cluster_size'"

      You will see the following output indicating that there is one node in the cluster:

      Output

      +--------------------+-------+
      | Variable_name      | Value |
      +--------------------+-------+
      | wsrep_cluster_size | 1     |
      +--------------------+-------+

      On the remaining nodes, you can start mysql normally. They will search for any member of the cluster list that is online, so when they find one, they will join the cluster.

      Bring Up the Second Node

      Now you can bring up the second node. Start mysql:

      • sudo systemctl start mysql

      No output will be displayed on successful execution. You will see your cluster size increase as each node comes online:

      • mysql -u root -p -e "SHOW STATUS LIKE 'wsrep_cluster_size'"

      You will see the following output indicating that the second node has joined the cluster and that there are two nodes in total.

      Output

      +--------------------+-------+
      | Variable_name      | Value |
      +--------------------+-------+
      | wsrep_cluster_size | 2     |
      +--------------------+-------+

      Bring Up the Third Node

      It's now time to bring up the third node. Start mysql:

      • sudo systemctl start mysql

      Run the following command to find the cluster size:

      • mysql -u root -p -e "SHOW STATUS LIKE 'wsrep_cluster_size'"

      You will see the following output, which indicates that the third node has joined the cluster and that the total number of nodes in the cluster is three.

      Output

      +--------------------+-------+
      | Variable_name      | Value |
      +--------------------+-------+
      | wsrep_cluster_size | 3     |
      +--------------------+-------+

      At this point, the entire cluster is online and communicating successfully. You can now verify the setup by testing replication in the next section.

      Step 7 — Testing Replication

      You've gone through the steps up to this point so that your cluster can perform replication from any node to any other node, known as active-active replication. Follow the steps below to test and see if the replication is working as expected.

      Write to the First Node

      You'll start by making database changes on your first node. The following commands will create a database called playground and a table inside of this database called equipment.

      • mysql -u root -p -e 'CREATE DATABASE playground;
      • CREATE TABLE playground.equipment ( id INT NOT NULL AUTO_INCREMENT, type VARCHAR(50), quant INT, color VARCHAR(25), PRIMARY KEY(id));
      • INSERT INTO playground.equipment (type, quant, color) VALUES ("slide", 2, "blue");'

      In the previous command, the CREATE DATABASE statement creates a database named playground. The CREATE statement creates a table named equipment inside the playground database having an auto-incrementing identifier column called id and other columns. The type column, quant column, and color column are defined to store the type, quantity, and color of the equipment respectively. The INSERT statement inserts an entry of type slide, quantity 2, and color blue.

      You now have one value in your table.

      Read and Write on the Second Node

      Next, look at the second node to verify that replication is working:

      • mysql -u root -p -e 'SELECT * FROM playground.equipment;'

      If replication is working, the data you entered on the first node will be visible here on the second:

      Output

      +----+-------+-------+-------+
      | id | type  | quant | color |
      +----+-------+-------+-------+
      |  1 | slide |     2 | blue  |
      +----+-------+-------+-------+

      From this same node, you can write data to the cluster:

      • mysql -u root -p -e 'INSERT INTO playground.equipment (type, quant, color) VALUES ("swing", 10, "yellow");'

      Read and Write on the Third Node

      From the third node, you can read all of this data by querying the table again:

      • mysql -u root -p -e 'SELECT * FROM playground.equipment;'

      You will see the following output showing the two rows:

      Output

      +----+-------+-------+--------+
      | id | type  | quant | color  |
      +----+-------+-------+--------+
      |  1 | slide |     2 | blue   |
      |  2 | swing |    10 | yellow |
      +----+-------+-------+--------+

      Again, you can add another value from this node:

      • mysql -u root -p -e 'INSERT INTO playground.equipment (type, quant, color) VALUES ("seesaw", 3, "green");'

      Read on the First Node

      Back on the first node, you can verify that your data is available everywhere:

      • mysql -u root -p -e 'SELECT * FROM playground.equipment;'

      You will see the following output which indicates that the rows are available on the first node.

      Output

      +----+--------+-------+--------+
      | id | type   | quant | color  |
      +----+--------+-------+--------+
      |  1 | slide  |     2 | blue   |
      |  2 | swing  |    10 | yellow |
      |  3 | seesaw |     3 | green  |
      +----+--------+-------+--------+

      You've successfully verified that you can write to all of the nodes and that replication is being performed properly.

      Conclusion

      At this point, you have a working three-node Galera test cluster configured. If you plan on using a Galera cluster in a production situation, it’s recommended that you begin with no fewer than five nodes.

      Before production use, you may want to take a look at some of the other state snapshot transfer (sst) agents like xtrabackup, which allows you to set up new nodes very quickly and without large interruptions to your active nodes. This does not affect the actual replication, but is a concern when nodes are being initialized.




      How To Configure a Galera Cluster with MariaDB on Debian 9 Servers


      The author selected the Free and Open Source Fund to receive a donation as part of the Write for DOnations program.

      Introduction

      Clustering adds high availability to your database by distributing changes to different servers. In the event that one of the instances fails, others are quickly available to continue serving.

      Clusters come in two general configurations, active-passive and active-active. In active-passive clusters, all writes are done on a single active server and then copied to one or more passive servers that are poised to take over only in the event of an active server failure. Some active-passive clusters also allow SELECT operations on passive nodes. In an active-active cluster, every node is read-write and a change made to one is replicated to all.

      MariaDB is an open source relational database system that is fully compatible with the popular MySQL RDBMS system. You can read the official documentation for MariaDB at this page. Galera is a database clustering solution that enables you to set up multi-master clusters using synchronous replication. Galera automatically handles keeping the data on different nodes in sync while allowing you to send read and write queries to any of the nodes in the cluster. You can learn more about Galera at the official documentation page.

      In this guide, you will configure an active-active MariaDB Galera cluster. For demonstration purposes, you will configure and test three Debian 9 Droplets that will act as nodes in the cluster. This is the smallest configurable cluster.

      Prerequisites

      To follow along, you will need a DigitalOcean account, in addition to the following:

      • Three Debian 9 Droplets with private networking enabled, each with a non-root user with sudo privileges.

      While the steps in this tutorial have been written for and tested against DigitalOcean Droplets, much of this guide should also be applicable to non-DigitalOcean servers with private networking enabled.

      Step 1 — Adding the MariaDB Repositories to All Servers

      In this step, you will add the relevant MariaDB package repositories to each of your three servers so that you will be able to install the right version of MariaDB used in this tutorial. Once the repositories are updated on all three servers, you will be ready to install MariaDB.

      One thing to note about MariaDB is that it originated as a drop-in replacement for MySQL, so in many configuration files and startup scripts, you’ll see mysql rather than mariadb. For consistency’s sake, we will use mysql in this guide where either could work.

      In this tutorial, you will use MariaDB version 10.4. Since this version isn’t included in the default Debian repositories, you’ll start by adding the external Debian repository maintained by the MariaDB project to all three of your servers.

      To add the repository, you will first need to install the dirmngr and software-properties-common packages. dirmngr is a server for managing repository certificates and keys. software-properties-common is a package that allows easy addition and updates of source repository locations. Install the two packages by running:

      • sudo apt install dirmngr software-properties-common

      Note: MariaDB is a well-respected provider, but not all external repositories are reliable. Be sure to install only from trusted sources.

      You’ll add the MariaDB repository key with the apt-key command, which the APT package manager will use to verify that the package is authentic:

      • sudo apt-key adv --recv-keys --keyserver hkp://keyserver.ubuntu.com:80 0xF1656F24C74CD1D8

      Once you have the trusted key in the database, you can add the repository with the following command:

      • sudo add-apt-repository 'deb [arch=amd64] http://nyc2.mirrors.digitalocean.com/mariadb/repo/10.4/debian stretch main'

      After adding the repository, run apt update in order to include package manifests from the new repository:
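
      • sudo apt update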

      Once you have completed this step on your first server, repeat for your second and third servers.

      Now that you have successfully added the package repository on all three of your servers, you're ready to install MariaDB in the next section.

      Step 2 — Installing MariaDB on All Servers

      In this step, you will install the actual MariaDB packages on your three servers.

      Beginning with version 10.1, the MariaDB Server and MariaDB Galera Server packages are combined, so installing mariadb-server will automatically install Galera and several dependencies:

      • sudo apt install mariadb-server

      You will be asked to confirm whether you would like to proceed with the installation. Enter yes to continue with the installation.

      From MariaDB version 10.4 onwards, the root MariaDB user does not have a password by default. To set a password for the root user, start by logging into MariaDB:
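
      • sudo mysql -u root -p

      When prompted for a password, just press ENTER, since the root account doesn't have one yet.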

      Once you're inside the MariaDB shell, change the password by executing the following statement:

      • set password = password("your_password");

      You will see the following output indicating that the password was set correctly:

      Output

      Query OK, 0 rows affected (0.001 sec)

      Exit the MariaDB shell by running the following command:
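
      • exit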

      If you would like to learn more about SQL or need a quick refresher, check out our MySQL tutorial.

      You now have all of the pieces necessary to begin configuring the cluster, but since you'll be relying on rsync in later steps, make sure it's installed:
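
      • sudo apt install rsync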

      This will confirm that the newest version of rsync is already available or prompt you to upgrade or install it.

      Once you have installed MariaDB and set the root password on your first server, repeat these steps for your other two servers.

      Now that you have installed MariaDB successfully on each of the three servers, you can proceed to the configuration step in the next section.

      Step 3 — Configuring the First Node

      In this step, you will configure your first node. Each node in the cluster needs to have a nearly identical configuration. Because of this, you will do all of the configuration on your first machine, and then copy it to the other nodes.

      By default, MariaDB is configured to check the /etc/mysql/conf.d directory to get additional configuration settings from files ending in .cnf. Create a file in this directory with all of your cluster-specific directives:

      • sudo nano /etc/mysql/conf.d/galera.cnf

      Add the following configuration into the file. The configuration specifies different cluster options, details about the current server and the other servers in the cluster, and replication-related settings. Note that the IP addresses in the configuration are the private addresses of your respective servers; replace the highlighted lines with the appropriate IP addresses.

      /etc/mysql/conf.d/galera.cnf

      [mysqld]
      binlog_format=ROW
      default-storage-engine=innodb
      innodb_autoinc_lock_mode=2
      bind-address=0.0.0.0
      
      # Galera Provider Configuration
      wsrep_on=ON
      wsrep_provider=/usr/lib/galera/libgalera_smm.so
      
      # Galera Cluster Configuration
      wsrep_cluster_name="test_cluster"
      wsrep_cluster_address="gcomm://First_Node_IP,Second_Node_IP,Third_Node_IP"
      
      # Galera Synchronization Configuration
      wsrep_sst_method=rsync
      
      # Galera Node Configuration
      wsrep_node_address="This_Node_IP"
      wsrep_node_name="This_Node_Name"
      
      • The first section modifies or re-asserts MariaDB/MySQL settings that will allow the cluster to function correctly. For example, Galera won’t work with MyISAM or similar non-transactional storage engines, and mysqld must not be bound to the IP address for localhost. You can learn about the settings in more detail on the Galera Cluster system configuration page.
      • The "Galera Provider Configuration" section configures the MariaDB components that provide a WriteSet replication API. This means Galera in your case, since Galera is a wsrep (WriteSet Replication) provider. You specify the general parameters to configure the initial replication environment. This doesn't require any customization, but you can learn more about Galera configuration options.
      • The "Galera Cluster Configuration" section defines the cluster, identifying the cluster members by IP address or resolvable domain name and creating a name for the cluster to ensure that members join the correct group. You can change the wsrep_cluster_name to something more meaningful than test_cluster or leave it as-is, but you must update wsrep_cluster_address with the private IP addresses of your three servers.
      • The "Galera Synchronization Configuration" section defines how the cluster will communicate and synchronize data between members. This is used only for the state transfer that happens when a node comes online. For your initial setup, you are using rsync, because it's commonly available and does what you'll need for now.
      • The "Galera Node Configuration" section clarifies the IP address and the name of the current server. This is helpful when trying to diagnose problems in logs and for referencing each server in multiple ways. The wsrep_node_address must match the address of the machine you're on, but you can choose any name you want in order to help you identify the node in log files.

      When you are satisfied with your cluster configuration file, copy the contents into your clipboard, save and close the file. With the nano text editor, you can do this by pressing CTRL+X, typing y, and pressing ENTER.

      Now that you have configured your first node successfully, you can move on to configuring the remaining nodes in the next section.

      Step 4 — Configuring the Remaining Nodes

      In this step, you will configure the remaining two nodes. On your second node, open the configuration file:

      • sudo nano /etc/mysql/conf.d/galera.cnf

      Paste in the configuration you copied from the first node, then update the Galera Node Configuration to use the IP address or resolvable domain name for the specific node you're setting up. Finally, update its name, which you can set to whatever helps you identify the node in your log files:

      /etc/mysql/conf.d/galera.cnf

      . . .
      # Galera Node Configuration
      wsrep_node_address="This_Node_IP"
      wsrep_node_name="This_Node_Name"
      . . .
      

      Save and exit the file.

      Once you have completed these steps, repeat them on the third node.

      You're almost ready to bring up the cluster, but before you do, make sure that the appropriate ports are open in your firewall.

      Step 5 — Opening the Firewall on Every Server

      In this step, you will configure your firewall so that the ports required for inter-node communication are open. On every server, check the status of the firewall by running:
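
      • sudo ufw status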

      In this case, only SSH is allowed through:

      Output

      Status: active

      To                         Action      From
      --                         ------      ----
      OpenSSH                    ALLOW       Anywhere
      OpenSSH (v6)               ALLOW       Anywhere (v6)

      Since only SSH traffic is permitted in this case, you’ll need to add rules for MySQL and Galera traffic. If you tried to start the cluster, it would fail because of firewall rules.

      Galera can make use of four ports:

      • 3306 For MySQL client connections and State Snapshot Transfer that use the mysqldump method.
      • 4567 For Galera Cluster replication traffic. Multicast replication uses both UDP transport and TCP on this port.
      • 4568 For Incremental State Transfer.
      • 4444 For all other State Snapshot Transfer.

      In this example, you’ll open all four ports while you do your setup. Once you've confirmed that replication is working, you'd want to close any ports you're not actually using and restrict traffic to just servers in the cluster.

      Open the ports with the following command:

      • sudo ufw allow 3306,4567,4568,4444/tcp
      • sudo ufw allow 4567/udp

      Note: Depending on what else is running on your servers you might want to restrict access right away. The UFW Essentials: Common Firewall Rules and Commands guide can help with this.

      After you have configured your firewall on the first node, create the same firewall settings on the second and third node.

      Now that you have configured the firewalls successfully, you're ready to start the cluster in the next step.

      Step 6 — Starting the Cluster

      In this step, you will start your MariaDB cluster. To begin, you need to stop the running MariaDB service so that you can bring your cluster online.

      Stop MariaDB on All Three Servers

      Use the following command on all three servers to stop MariaDB so that you can bring them back up in a cluster:

      • sudo systemctl stop mysql

      systemctl doesn't display the outcome of all service management commands, so to be sure you succeeded, use the following command:

      • sudo systemctl status mysql

      If the last line looks something like the following, the command was successful:

      Output

      . . . Apr 26 03:34:23 galera-node-01 systemd[1]: Stopped MariaDB 10.4.4 database server.

      Once you've shut down mysql on all of the servers, you're ready to proceed.

      Bring Up the First Node

      To bring up the first node, you'll need to use a special startup script. The way you've configured your cluster, each node that comes online tries to connect to at least one other node specified in its galera.cnf file to get its initial state. Without using the galera_new_cluster script that allows systemd to pass the --wsrep-new-cluster parameter, a normal systemctl start mysql would fail because there are no nodes running for the first node to connect with.
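
      Start the cluster on your first node by running the script:

      • sudo galera_new_cluster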

      This command will not display any output on successful execution. When this script succeeds, the node is registered as part of the cluster, and you can see it with the following command:

      • mysql -u root -p -e "SHOW STATUS LIKE 'wsrep_cluster_size'"

      You will see the following output indicating that there is one node in the cluster:

      Output

      +--------------------+-------+
      | Variable_name      | Value |
      +--------------------+-------+
      | wsrep_cluster_size | 1     |
      +--------------------+-------+

      On the remaining nodes, you can start mysql normally. They will search for any member of the cluster list that is online, so when they find one, they will join the cluster.

      Bring Up the Second Node

      Now you can bring up the second node. Start mysql:

      • sudo systemctl start mysql

      No output will be displayed on successful execution. You will see your cluster size increase as each node comes online:

      • mysql -u root -p -e "SHOW STATUS LIKE 'wsrep_cluster_size'"

      You will see the following output indicating that the second node has joined the cluster and that there are two nodes in total.

      Output

      +--------------------+-------+
      | Variable_name      | Value |
      +--------------------+-------+
      | wsrep_cluster_size | 2     |
      +--------------------+-------+

      Bring Up the Third Node

      It's now time to bring up the third node. Start mysql:

      • sudo systemctl start mysql

      Run the following command to find the cluster size:

      • mysql -u root -p -e "SHOW STATUS LIKE 'wsrep_cluster_size'"

      You will see the following output, which indicates that the third node has joined the cluster and that the total number of nodes in the cluster is three.

      Output

      +--------------------+-------+
      | Variable_name      | Value |
      +--------------------+-------+
      | wsrep_cluster_size | 3     |
      +--------------------+-------+

      At this point, the entire cluster is online and communicating successfully. You can now verify the setup by testing replication in the next section.

      Step 7 — Testing Replication

      You've gone through the steps up to this point so that your cluster can perform replication from any node to any other node, known as active-active replication. Follow the steps below to test and see if the replication is working as expected.

      Write to the First Node

      You'll start by making database changes on your first node. The following commands will create a database called playground and a table inside of this database called equipment.

      • mysql -u root -p -e 'CREATE DATABASE playground;
      • CREATE TABLE playground.equipment ( id INT NOT NULL AUTO_INCREMENT, type VARCHAR(50), quant INT, color VARCHAR(25), PRIMARY KEY(id));
      • INSERT INTO playground.equipment (type, quant, color) VALUES ("slide", 2, "blue");'

      In the previous command, the CREATE DATABASE statement creates a database named playground. The CREATE statement creates a table named equipment inside the playground database having an auto-incrementing identifier column called id and other columns. The type column, quant column, and color column are defined to store the type, quantity, and color of the equipment respectively. The INSERT statement inserts an entry of type slide, quantity 2, and color blue.

      You now have one value in your table.

      Read and Write on the Second Node

      Next, look at the second node to verify that replication is working:

      • mysql -u root -p -e 'SELECT * FROM playground.equipment;'

      If replication is working, the data you entered on the first node will be visible here on the second:

      Output

      +----+-------+-------+-------+
      | id | type  | quant | color |
      +----+-------+-------+-------+
      |  1 | slide |     2 | blue  |
      +----+-------+-------+-------+

      From this same node, you can write data to the cluster:

      • mysql -u root -p -e 'INSERT INTO playground.equipment (type, quant, color) VALUES ("swing", 10, "yellow");'

      Read and Write on the Third Node

      From the third node, you can read all of this data by querying the table again:

      • mysql -u root -p -e 'SELECT * FROM playground.equipment;'

      You will see the following output showing the two rows:

      Output

      +----+-------+-------+--------+
      | id | type  | quant | color  |
      +----+-------+-------+--------+
      |  1 | slide |     2 | blue   |
      |  2 | swing |    10 | yellow |
      +----+-------+-------+--------+

      Again, you can add another value from this node:

      • mysql -u root -p -e 'INSERT INTO playground.equipment (type, quant, color) VALUES ("seesaw", 3, "green");'

      Read on the First Node

      Back on the first node, you can verify that your data is available everywhere:

      • mysql -u root -p -e 'SELECT * FROM playground.equipment;'

      You will see the following output that indicates the rows are available on the first node.

      Output

      +----+--------+-------+--------+
      | id | type   | quant | color  |
      +----+--------+-------+--------+
      |  1 | slide  |     2 | blue   |
      |  2 | swing  |    10 | yellow |
      |  3 | seesaw |     3 | green  |
      +----+--------+-------+--------+

      You've successfully verified that you can write to all of the nodes and that replication is being performed properly.

      Conclusion

      At this point, you have a working three-node Galera test cluster configured. If you plan on using a Galera cluster in a production situation, it’s recommended that you begin with no fewer than five nodes.

      Before production use, you may want to take a look at some of the other state snapshot transfer (sst) agents like xtrabackup, which allows you to set up new nodes very quickly and without large interruptions to your active nodes. This does not affect the actual replication, but is a concern when nodes are being initialized.


