
      How To Configure Apache HTTP with MPM Event and PHP-FPM on FreeBSD 12.0


      The author selected the Open Internet/Free Speech Fund to receive a donation as part of the Write for DOnations program.

      Introduction

      The Apache HTTP web server has evolved through the years to work in different environments and solve different needs. One important problem Apache HTTP has to solve, like any web server, is how to handle different processes to serve an HTTP request. This involves opening a socket, processing the request, keeping the connection open for a certain period, handling new events occurring through that connection, and returning the content produced by a program written in a particular language (such as PHP, Perl, or Python). These tasks are performed and controlled by a Multi-Processing Module (MPM).

      Apache HTTP comes with three different MPMs:

      • Pre-fork: A new process is created for each incoming connection reaching the server. Each process is isolated from the others, so no memory is shared between them, even if they are performing identical calls at some point in their execution. This is a safe way to run applications linked to libraries that do not support threading—typically older applications or libraries.
      • Worker: A parent process is responsible for launching a pool of child processes, some of which are listening for new incoming connections, and others are serving the requested content. Each process is multi-threaded (a single thread handles one connection), so one process can handle several requests concurrently. This method of treating connections encourages better resource utilization, while still maintaining stability. This is a result of the pool of available processes, which often has free threads ready to immediately serve new connections.
      • Event: Based on worker, this MPM goes one step further by optimizing how the parent process schedules tasks to the child processes and the threads associated with them. A connection stays open for 5 seconds by default and closes if no new event happens; this is the default value of the keep-alive directive, which otherwise retains the thread bound to that connection. The Event MPM enables the process to manage threads so that some threads are free to handle new incoming connections while others are kept bound to the live connections. Redistributing assigned tasks among threads makes for better resource utilization and performance.

      The MPM Event module is a fast multi-processing module available on the Apache HTTP web server.
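
      If you later want to tune how the Event MPM behaves, its worker-related directives can be set in httpd.conf. The directive names below are standard Apache 2.4 Event MPM settings, but the values shown are only an illustration of the defaults, not a recommendation for your workload:

      /usr/local/etc/apache24/httpd.conf

      <IfModule mpm_event_module>
          # Number of child processes created at startup
          StartServers             3
          # Keep at least this many idle threads available for new connections
          MinSpareThreads         75
          # Retire child processes when idle threads exceed this number
          MaxSpareThreads        250
          # Threads created by each child process
          ThreadsPerChild         25
          # Upper limit on simultaneously served requests
          MaxRequestWorkers      400
      </IfModule>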

      PHP-FPM is the FastCGI Process Manager for PHP. The FastCGI protocol is based on the Common Gateway Interface (CGI), a protocol that sits between applications and web servers like Apache HTTP. This allows developers to write applications separately from the behavior of web servers. Programs run their processes independently and pass their product to the web server through this protocol. Each new connection in need of processing by an application will create a new process.
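
      For reference, how PHP-FPM spawns and recycles those processes is controlled by its pool configuration. Here is a minimal sketch of the relevant directives; the file path assumes PHP 7.3 installed from FreeBSD packages, so adjust it to your installed version:

      /usr/local/etc/php-fpm.d/www.conf

      [www]
      ; Address PHP-FPM listens on; it must match the proxy address Apache uses later
      listen = 127.0.0.1:9000
      ; Spawn worker processes on demand, within the bounds below
      pm = dynamic
      pm.max_children = 5
      pm.start_servers = 2
      pm.min_spare_servers = 1
      pm.max_spare_servers = 3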

      By combining the MPM Event in Apache HTTP with the PHP FastCGI Process Manager (PHP-FPM), a website can load faster and handle more concurrent connections while using fewer resources.

      In this tutorial you will improve the performance of the FAMP stack by changing the default multi-processing module from pre-fork to event and by using the PHP-FPM process manager to handle PHP code instead of the classic mod_php in Apache HTTP.

      Prerequisites

      Before you begin this guide you’ll need the following:

      • A FreeBSD 12.0 server set up following this guide.
      • The FAMP stack installed on your server following this tutorial.
      • Access to the root user, or to a user with sudo privileges, in order to make configuration changes.

      Step 1 — Changing the Multi-Processing Module

      You’ll begin by looking for the pre-fork directive in the httpd.conf file. This is the main configuration file for Apache HTTP, in which you can enable and disable modules. In this file you can also set directives such as the port Apache HTTP listens on or the location of the content it serves.

      To make these changes, you’ll use the nl (number lines) program with the -ba flag to number every line, so that nothing is mismatched at a later stage. Combined with grep, this command will first number all the lines in the file specified in the path, and once finished, it will search that output for the string of characters you’re looking for.

      Run the following command so that the nl program will process and number the lines in httpd.conf. Then, grep will process the output by searching for the given string of characters 'mod_mpm_prefork':

      • nl -ba /usr/local/etc/apache24/httpd.conf | grep 'mod_mpm_prefork'

      As output you’ll see something similar to:

      Output

      67 LoadModule mpm_prefork_module libexec/apache24/mod_mpm_prefork.so

      Let’s edit line 67 with your text editor. In this tutorial, you’ll use vi, which is the default editor on FreeBSD:

      • sudo vi +67 /usr/local/etc/apache24/httpd.conf

      Add a # symbol at the beginning of the line so this line is commented out, like so:

      /usr/local/etc/apache24/httpd.conf

      ...
      # LoadModule mpm_prefork_module libexec/apache24/mod_mpm_prefork.so
      ...
      

      By adding the # symbol you’ve disabled the pre-fork MPM module.

      Now you’ll find the event directive in the same httpd.conf file.

      • nl -ba /usr/local/etc/apache24/httpd.conf | grep mpm_event

      You’ll see output similar to the following:

      Output

      66  #LoadModule mpm_event_module libexec/apache24/mod_mpm_event.so

      Now you’ll remove the # symbol in line 66 to enable the Event MPM:

      • sudo vi +66 /usr/local/etc/apache24/httpd.conf

      The directive will now read as follows:

      /usr/local/etc/apache24/httpd.conf

      ...
      LoadModule mpm_event_module libexec/apache24/mod_mpm_event.so
      ...
      

      Now that you’ve switched the configuration from the MPM pre-fork to event, you can remove the mod_php73 package connecting the PHP processor to Apache HTTP, since it is no longer necessary and will interfere if it remains on the system:

      • sudo pkg remove -y mod_php73

      Make sure the configuration is correct by running the following command to test:

      • sudo apachectl configtest

      If you see Syntax OK in your output, you can restart the Apache HTTP server:

      • sudo apachectl restart

      Note: If there are other running HTTP connections on your server a graceful restart is recommended instead of a regular restart. This will ensure that users are not pushed out, losing their connection:

      • sudo apachectl graceful

      You've switched the MPM from pre-fork to event and removed the mod_php73 module connecting PHP to Apache HTTP. In the next step you'll install the PHP-FPM module and configure Apache HTTP so that it can communicate with PHP more quickly.

      Step 2 — Configuring Apache HTTP to Use the FastCGI Process Manager

      FreeBSD has several supported versions of PHP that you can install via the package manager. Unlike most GNU/Linux distributions, which offer just one version in their default repositories, FreeBSD compiles separate binaries for each of the available versions. To follow best practice you'll use a supported version, which you can check on PHP's supported versions page.
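
      If you'd like to see which PHP packages are available before settling on a version, you can query the package manager. This is just an optional check, and the output will be long:

      • pkg search php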

      In this step you'll add PHP-FPM as a running service to start at boot. You'll also configure Apache HTTP to work with PHP by adding a dedicated configuration for the module as well as enabling some further modules in httpd.conf.

      First you'll append 'php_fpm_enable=YES' to the /etc/rc.conf file so the PHP-FPM service can start. You'll do that by using the sysrc command:

      • sudo sysrc php_fpm_enable="YES"

      Now you'll add the php-fpm module into the Apache module's directory, so it is configured to be used by Apache HTTP. Create the following file to do so:

      • sudo vi /usr/local/etc/apache24/modules.d/030_php-fpm.conf

      Add the following into 030_php-fpm.conf:

      /usr/local/etc/apache24/modules.d/030_php-fpm.conf

      <IfModule proxy_fcgi_module>
          <IfModule dir_module>
              DirectoryIndex index.php
          </IfModule>
          <FilesMatch "\.(php|phtml|inc)$">
              SetHandler "proxy:fcgi://127.0.0.1:9000"
          </FilesMatch>
      </IfModule>
      

      This states that if the proxy_fcgi module is enabled, as well as the dir_module, then any files matching the listed extensions (php, phtml, and inc) should be handled by the FastCGI process manager running on the local machine through port 9000, as if the local machine were a proxy server. This is where the PHP-FPM module and Apache HTTP interconnect. To achieve this, you'll activate further modules during this step.

      To enable the proxy module, you'll first search for it in the httpd.conf file:

      • nl -ba /usr/local/etc/apache24/httpd.conf | grep mod_proxy.so

      You'll see output similar to the following:

      Output

      129  #LoadModule proxy_module libexec/apache24/mod_proxy.so

      You'll uncomment the line by removing the # symbol:

      • sudo vi +129 /usr/local/etc/apache24/httpd.conf

      The line will look as follows once edited:

      /usr/local/etc/apache24/httpd.conf

      ...
      LoadModule proxy_module libexec/apache24/mod_proxy.so
      ...
      

      Now you can activate the FastCGI module. Find the module with the following command:

      • nl -ba /usr/local/etc/apache24/httpd.conf | grep mod_proxy_fcgi.so

      You'll see something similar to the following:

      Output

      133  #LoadModule proxy_fcgi_module libexec/apache24/mod_proxy_fcgi.so

      Now uncomment line 133 as you've already done with the other modules:

      • sudo vi +133 /usr/local/etc/apache24/httpd.conf

      You'll leave the line as follows:

      /usr/local/etc/apache24/httpd.conf

      ...
      LoadModule proxy_fcgi_module libexec/apache24/mod_proxy_fcgi.so
      ...
      

      Once this is done you'll start the PHP-FPM service:

      • sudo service php-fpm start

      And you'll restart Apache so it loads the latest configuration changes incorporating the PHP module:

      • sudo apachectl restart

      You've installed the PHP-FPM module, configured Apache HTTP to work with it, enabled the necessary modules for the FastCGI protocol to work, and started the corresponding services.

      Now that Apache has the Event MPM module enabled and PHP-FPM is present and running, it is time to check everything is working as intended.

      Step 3 — Checking Your Configuration

      In order to check that the configuration changes have been applied you'll run some tests. The first one will check what multi-processing module Apache HTTP is using. The second will verify that PHP is using the FPM manager.

      Check the Apache HTTP server by running the following command:

      • sudo apachectl -M | grep 'mpm'

      Your output will be as follows:

      Output

      mpm_event_module (shared)

      You can repeat the same for the proxy module and FastCGI:

      • sudo apachectl -M | grep 'proxy'

      The output will show:

      Output

      proxy_module (shared)
      proxy_fcgi_module (shared)

      If you would like to see the entire list of modules, you can remove the second part of the command after -M.

      It is now time to check if PHP is using the FastCGI Process Manager. To do so you'll write a very small PHP script that will show you all the information related to PHP.

      Run the following command to create a file named info.php:

      • sudo vi /usr/local/www/apache24/data/info.php

      Add the following content into the info.php file:

      info.php

      <?php phpinfo(); ?>
      

      Now visit your server's URL and append info.php at the end like so: http://your_server_IP_address/info.php.

      The Server API entry will be FPM/FastCGI.

      PHP info screen showing the Server API entry as FPM/FastCGI
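
      If you'd rather verify this from the command line (assuming curl is installed), the following should print the matching row of the phpinfo page; the 127.0.0.1 address assumes Apache is serving locally on the default port:

      • curl -s http://127.0.0.1/info.php | grep 'FPM/FastCGI'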

      Remember to delete the info.php file after this check so no information about the server is publicly disclosed.

      • sudo rm /usr/local/www/apache24/data/info.php

      You've checked the working status of the MPM module, the modules handling the FastCGI protocol, and the handling of PHP code.

      Conclusion

      You've optimized your original FAMP stack: Apache HTTP can now handle more concurrent connections without creating a new process for each one, PHP-FPM will handle PHP code more efficiently, and overall resource utilization has improved.

      See the Apache HTTP server project documentation for more information on the different modules and related projects.




      How To Configure a Galera Cluster with MariaDB on CentOS 7 Servers


      The author selected the Free and Open Source Fund to receive a donation as part of the Write for DOnations program.

      Introduction

      Clustering adds high availability to your database by distributing changes to different servers. In the event that one of the instances fails, others are quickly available to continue serving.

      Clusters come in two general configurations, active-passive and active-active. In active-passive clusters, all writes are done on a single active server and then copied to one or more passive servers that are poised to take over only in the event of an active server failure. Some active-passive clusters also allow SELECT operations on passive nodes. In an active-active cluster, every node is read-write and a change made to one is replicated to all.

      MariaDB is an open source relational database system that is fully compatible with the popular MySQL RDBMS system. You can read the official documentation for MariaDB at this page. Galera is a database clustering solution that enables you to set up multi-master clusters using synchronous replication. Galera automatically handles keeping the data on different nodes in sync while allowing you to send read and write queries to any of the nodes in the cluster. You can learn more about Galera at the official documentation page.

      In this guide, you will configure an active-active MariaDB Galera cluster. For demonstration purposes, you will configure and test three CentOS 7 Droplets that will act as nodes in the cluster. This is the smallest configurable cluster.

      Prerequisites

      To follow along, you will need a DigitalOcean account, in addition to the following:

      • Three CentOS 7 Droplets with private networking enabled, each with a non-root user with sudo privileges and a firewall enabled.

      While the steps in this tutorial have been written for and tested against DigitalOcean Droplets, many of them should also be applicable to non-DigitalOcean servers with private networking enabled.

      Step 1 — Adding the MariaDB Repositories to All Servers

      In this step, you will add the relevant MariaDB package repositories to each of your three servers so that you will be able to install the right version of MariaDB used in this tutorial. Once the repositories are updated on all three servers, you will be ready to install MariaDB.

      One thing to note about MariaDB is that it originated as a drop-in replacement for MySQL, so in many configuration files and startup scripts, you’ll see mysql rather than mariadb. In many cases, these are interchangeable. For consistency’s sake, we will use mariadb in this guide where either could work.

      In this tutorial, you will use MariaDB version 10.4. Since this version isn’t included in the default CentOS repositories, you’ll start by adding the external CentOS repository maintained by the MariaDB project to all three of your servers.

      Note: MariaDB is a well-respected provider, but not all external repositories are reliable. Be sure to install only from trusted sources.

      First, you’ll add the MariaDB repository by creating a repository file with a text editor. This tutorial will use vi:

      • sudo vi /etc/yum.repos.d/mariadb.repo

      Next, press i to enter insert mode, then add the following contents to the file:

      /etc/yum.repos.d/mariadb.repo

      [mariadb]
      name = MariaDB
      baseurl = http://yum.mariadb.org/10.4/centos7-amd64
      gpgkey=https://yum.mariadb.org/RPM-GPG-KEY-MariaDB
      gpgcheck=1
      

      Press the esc key to return to normal mode, then type :wq to save and exit the file. If you would like to learn more about the text editor vi and its predecessor vim, take a look at our tutorial on Installing and Using the Vim Text Editor on a Cloud Server.

      Once you have created the repository file, enable it with the following command:

      • sudo yum makecache --disablerepo='*' --enablerepo='mariadb'

      The makecache command caches the repository metadata so that the package manager can install MariaDB, with --disablerepo and --enablerepo targeting the command to the mariadb repo file that you just created.

      You will receive the following output:

      Output

      Loaded plugins: fastestmirror
      Loading mirror speeds from cached hostfile
      mariadb                                         | 2.9 kB  00:00:00
      (1/3): mariadb/primary_db                       |  43 kB  00:00:00
      (2/3): mariadb/other_db                         | 8.3 kB  00:00:00
      (3/3): mariadb/filelists_db                     | 238 kB  00:00:00
      Metadata Cache Created

      Once you have enabled the repository on your first server, repeat for your second and third servers.

      Now that you have successfully added the package repository on all three of your servers, you’re ready to install MariaDB in the next section.

      Step 2 — Installing MariaDB on All Servers

      In this step, you will install the actual MariaDB packages on your three servers.

      Beginning with version 10.1, the MariaDB Server and MariaDB Galera Server packages are combined, so installing MariaDB-server will automatically install Galera and several dependencies:

      • sudo yum install MariaDB-server MariaDB-client

      You will be asked to confirm whether you would like to proceed with the installation. Enter yes to continue with the installation. You will then be prompted to accept the GPG key that authenticates the MariaDB package. Enter yes again.

      When the installation is complete, start the mariadb service by running:

      • sudo systemctl start mariadb

      Enable the mariadb service to be automatically started on boot by executing:

      • sudo systemctl enable mariadb

      From MariaDB version 10.4 onwards, the root MariaDB user does not have a password by default. To set a password for the root user, start by logging into MariaDB:

      • sudo mysql -u root

      Once you're inside the MariaDB shell, change the password by executing the following statement, replacing your_password with your desired password:

      • set password = password("your_password");

      You will see the following output indicating that the password was set correctly:

      Output

      Query OK, 0 rows affected (0.001 sec)

      Exit the MariaDB shell by running the following command:

      • exit;

      If you would like to learn more about SQL or need a quick refresher, check out our MySQL tutorial.

      You now have all of the pieces necessary to begin configuring the cluster, but since you'll be relying on rsync and policycoreutils-python in later steps to sync the servers and to control Security-Enhanced Linux (SELinux), make sure they're installed before moving on:

      • sudo yum install rsync policycoreutils-python

      This will confirm that the newest versions of rsync and policycoreutils-python are already available, or will prompt you to upgrade or install them.

      Once you have completed these steps, repeat them for your other two servers.

      Now that you have installed MariaDB successfully on each of the three servers, you can proceed to the configuration step in the next section.

      Step 3 — Configuring the First Node

      In this step you will configure your first Galera node. Each node in the cluster needs to have a nearly identical configuration. Because of this, you will do all of the configuration on your first machine, and then copy it to the other nodes.

      By default, MariaDB is configured to check the /etc/my.cnf.d directory to get additional configuration settings from files ending in .cnf. Create a file in this directory with all of your cluster-specific directives:

      • sudo vi /etc/my.cnf.d/galera.cnf

      Add the following configuration into the file. The configuration specifies different cluster options, details about the current server and the other servers in the cluster, and replication-related settings. Note that the IP addresses in the configuration are the private addresses of your respective servers; replace the highlighted lines with the appropriate IP addresses:

      /etc/my.cnf.d/galera.cnf

      [mysqld]
      binlog_format=ROW
      default-storage-engine=innodb
      innodb_autoinc_lock_mode=2
      bind-address=0.0.0.0
      
      # Galera Provider Configuration
      wsrep_on=ON
      wsrep_provider=/usr/lib64/galera-4/libgalera_smm.so
      
      # Galera Cluster Configuration
      wsrep_cluster_name="test_cluster"
      wsrep_cluster_address="gcomm://First_Node_IP,Second_Node_IP,Third_Node_IP"
      
      # Galera Synchronization Configuration
      wsrep_sst_method=rsync
      
      # Galera Node Configuration
      wsrep_node_address="This_Node_IP"
      wsrep_node_name="This_Node_Name"
      
      • The first section modifies or re-asserts MariaDB/MySQL settings that will allow the cluster to function correctly. For example, Galera won’t work with MyISAM or similar non-transactional storage engines, and mysqld must not be bound only to the localhost IP address.
      • The "Galera Provider Configuration" section configures the MariaDB components that provide a WriteSet replication API. This means Galera in your case, since Galera is a wsrep (WriteSet Replication) provider. You specify the general parameters to configure the initial replication environment. This doesn't require any customization, but you can learn more about Galera configuration options here.
      • The "Galera Cluster Configuration" section defines the cluster, identifying the cluster members by IP address or resolvable domain name and creating a name for the cluster to ensure that members join the correct group. You can change the wsrep_cluster_name to something more meaningful than test_cluster or leave it as-is, but you must update wsrep_cluster_address with the private IP addresses of your three servers.
      • The "Galera Synchronization Configuration" section defines how the cluster will communicate and synchronize data between members. This is used only for the state transfer that happens when a node comes online. For your initial setup, you are using rsync, because it's commonly available and does what you'll need for now.
      • The "Galera Node Configuration" section clarifies the IP address and the name of the current server. This is helpful when trying to diagnose problems in logs and for referencing each server in multiple ways. The wsrep_node_address must match the address of the machine you're on, but you can choose any name you want in order to help you identify the node in log files.

      When you are satisfied with your cluster configuration file, copy the contents into your clipboard and save and close the file.

      Now that you have configured your first node successfully, you can move on to configuring the remaining nodes in the next section.

      Step 4 — Configuring the Remaining Nodes

      In this step, you will configure the remaining two nodes. On your second node, open the configuration file:

      • sudo vi /etc/my.cnf.d/galera.cnf

      Paste in the configuration you copied from the first node, then update the Galera Node Configuration to use the IP address or resolvable domain name for the specific node you're setting up. Finally, update its name, which you can set to whatever helps you identify the node in your log files:

      /etc/my.cnf.d/galera.cnf

      . . .
      # Galera Node Configuration
      wsrep_node_address="This_Node_IP"
      wsrep_node_name="This_Node_Name"
      . . .
      

      Save and exit the file.

      Once you have completed these steps, repeat them on the third node.

      With Galera configured on all of your nodes, you're almost ready to bring up the cluster. But before you do, make sure that the appropriate ports are open in your firewall and that a SELinux policy has been created for Galera.

      Step 5 — Opening the Firewall on Every Server

      In this step, you will configure your firewall so that the ports required for inter-node communication are open.

      On every server, check the status of the firewall you set up in the Prerequisites section by running:

      • sudo firewall-cmd --list-all

      In this case, only SSH, DHCP, HTTP, and HTTPS traffic is allowed through:

      Output

      public
        target: default
        icmp-block-inversion: no
        interfaces:
        sources:
        services: ssh dhcpv6-client http https
        ports:
        protocols:
        masquerade: no
        forward-ports:
        source-ports:
        icmp-blocks:
        rich rules:

      If you tried to start the cluster now, it would fail because the firewall would block the connections between the nodes. To solve this problem, add rules to allow MariaDB and Galera traffic through.

      Galera can make use of four ports:

      • 3306 For MariaDB client connections and State Snapshot Transfer that use the mysqldump method.
      • 4567 For Galera Cluster replication traffic. Multicast replication uses both UDP transport and TCP on this port.
      • 4568 For Incremental State Transfers, or IST, the process by which a missing state is received by other nodes in the cluster.
      • 4444 For all other State Snapshot Transfers, or SST, the mechanism by which a joiner node gets its state and data from a donor node.

      In this example, you’ll open all four ports while you do your setup. Once you've confirmed that replication is working, you'd want to close any ports you're not actually using and restrict traffic to just servers in the cluster.
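
      For reference, closing a port later mirrors the open commands below but with the --remove-port flag; the port shown here is only an example, to be run once you no longer need it:

      • sudo firewall-cmd --permanent --zone=public --remove-port=4444/tcp
      • sudo firewall-cmd --reload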

      Open the ports with the following commands:

      • sudo firewall-cmd --permanent --zone=public --add-port=3306/tcp
      • sudo firewall-cmd --permanent --zone=public --add-port=4567/tcp
      • sudo firewall-cmd --permanent --zone=public --add-port=4568/tcp
      • sudo firewall-cmd --permanent --zone=public --add-port=4444/tcp
      • sudo firewall-cmd --permanent --zone=public --add-port=4567/udp

      Using --zone=public and --add-port= here, firewall-cmd is opening up these ports to public traffic. --permanent ensures that these rules persist.

      Note: Depending on what else is running on your servers you might want to restrict access right away. To learn more about how to use FirewallD, see our tutorial How To Set Up a Firewall Using FirewallD on CentOS 7.

      Now, add each server to the public zone by executing the following commands, replacing the highlighted text with the respective private IP addresses of your nodes:

      • sudo firewall-cmd --permanent --zone=public --add-source=galera-node-1-ip/32
      • sudo firewall-cmd --permanent --zone=public --add-source=galera-node-2-ip/32
      • sudo firewall-cmd --permanent --zone=public --add-source=galera-node-3-ip/32

      Reload the firewall to apply the changes:

      • sudo firewall-cmd --reload
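
      If you want to confirm that the ports and sources survived the reload, you can list the zone's active configuration again:

      • sudo firewall-cmd --zone=public --list-all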

      After you have configured your firewall on the first node, create the same firewall settings on the second and third node.

      Now that you have configured the firewalls successfully, you're ready to create a SELinux policy in the next step.

      Step 6 — Creating a SELinux Policy

      In this section, you will create a SELinux policy that will allow all the nodes in the cluster to be able to communicate with each other and perform cluster operations.

      SELinux is a Linux kernel module that improves the security of operating systems with its support for access control and mandatory access control policies. It is enabled by default on CentOS 7 and restricts the MariaDB daemon from performing many activities.

      In order to create the policy, you will perform various activities on the cluster with the SELinux mode set to permissive for MySQL. You will then create a policy from the logged events and finally set the SELinux mode to enforcing once the policy is installed successfully.

      First, allow access to the relevant ports by running the following commands on all three servers:

      • sudo semanage port -a -t mysqld_port_t -p tcp 4567
      • sudo semanage port -a -t mysqld_port_t -p udp 4567
      • sudo semanage port -a -t mysqld_port_t -p tcp 4568
      • sudo semanage port -a -t mysqld_port_t -p tcp 4444

      Note: You may receive a ValueError when allowing access to some of these ports. This means that the SELinux status of that port has already been set, which in this case will not affect the process of this tutorial.

      In these commands, you are using the SELinux management tool semanage with the -a flag to add the specified ports to the port type used by the database server.

      Next, run the following command on all three servers, which sets the MySQL SELinux domain to permissive mode temporarily.

      • sudo semanage permissive -a mysqld_t

      This command can take a minute to complete and will not display any output.

      Next, stop the database server on all the nodes so that you will be able to bootstrap the database cluster with shared SELinux policies. To do this, run the following command on all three nodes:

      • sudo systemctl stop mariadb

      Now, bootstrap the cluster to generate inter-node communication events that will be added to the SELinux policy. On the first node, bootstrap the cluster by executing:

      • sudo galera_new_cluster

      Create a database and table for the specific purpose of logging SST events by running the following on the first node:

      • mysql -u root -p -e 'CREATE DATABASE selinux;
      • CREATE TABLE selinux.selinux_policy (id INT NOT NULL AUTO_INCREMENT, PRIMARY KEY(id));
      • INSERT INTO selinux.selinux_policy VALUES ();'

      Now start the server on the second node:

      • sudo systemctl start mariadb

      Then do the same on the third node:

      • sudo systemctl start mariadb

      You will not see any output for the previous commands. To generate IST events, execute the following on all three servers:

      • mysql -u root -p -e 'INSERT INTO selinux.selinux_policy VALUES ();'

      Now create and enable the SELinux policy by executing the following commands on all three servers:

      • sudo grep mysql /var/log/audit/audit.log | sudo audit2allow -M Galera

      This first command searches for generated events in the audit.log file and pipes them to the audit2allow tool, which generates a policy package named Galera.pp. This will result in the following output:

      Output

      ******************** IMPORTANT ***********************
      To make this policy package active, execute:

      semodule -i Galera.pp

      Next, follow the instructions in the output and use the following command to install the generated module:

      • sudo semodule -i Galera.pp

      Now that the policy is active, disable permissive mode for the MariaDB server:

      • sudo semanage permissive -d mysqld_t

      Now that you have successfully created a SELinux policy and enabled it, you are ready to start the cluster in the next section.

      Step 7 — Starting the Cluster

      In this step, you will start your MariaDB cluster. To begin, you need to stop the running MariaDB service so that you can bring your cluster online.

      Stop MariaDB on All Three Servers

      When stopping the MariaDB service, it is important to execute this action on your servers in a specific order. This shutdown sequence ensures that the first node will be able to safely bootstrap the cluster when it starts up.

      First, run the following command on the third node:

      • sudo systemctl stop mariadb

      Next, stop the service on the second node:

      • sudo systemctl stop mariadb

      Finally, stop the service on the first node:

      • sudo systemctl stop mariadb

      systemctl doesn't display the outcome of all service management commands, so to be sure you succeeded, use the following command on each of your servers:

      • sudo systemctl status mariadb

      The last line will look something like the following:

      Output

      . . . Apr 26 03:34:23 galera-node-01 systemd[1]: Stopped MariaDB 10.4.4 database server.

      Once you've shut down mariadb on all of the servers, you're ready to proceed.

      Bring Up the First Node

      To bring up the first node, you'll need to use a special startup script. The way you've configured your cluster, each node that comes online tries to connect to at least one other node specified in its galera.cnf file to get its initial state. Without using the galera_new_cluster script that allows systemd to pass the --wsrep-new-cluster parameter, a normal systemctl start mariadb would fail because there are no nodes running for the first node to connect with.

      Run the script now on your first node:

      • sudo galera_new_cluster

      This command will not display any output on successful execution. When this script succeeds, the node is registered as part of the cluster, and you can see it with the following command:

      • mysql -u root -p -e "SHOW STATUS LIKE 'wsrep_cluster_size'"

      You will see the following output indicating that there is one node in the cluster:

      Output

      +--------------------+-------+
      | Variable_name      | Value |
      +--------------------+-------+
      | wsrep_cluster_size | 1     |
      +--------------------+-------+

      On the remaining nodes, you can start mariadb normally. They will search for any member of the cluster list that is online, so when they find one, they will join the cluster.

      Bring Up the Second Node

      Now you can bring up the second node. Start mariadb:

      • sudo systemctl start mariadb

      No output will be displayed on successful execution. You will see your cluster size increase as each node comes online:

      • mysql -u root -p -e "SHOW STATUS LIKE 'wsrep_cluster_size'"

      You will see the following output indicating that the second node has joined the cluster and that there are two nodes in total.

      Output

      +--------------------+-------+
      | Variable_name      | Value |
      +--------------------+-------+
      | wsrep_cluster_size | 2     |
      +--------------------+-------+

      Bring Up the Third Node

      It's now time to bring up the third node. Start mariadb:

      • sudo systemctl start mariadb

      Run the following command to find the cluster size:

      • mysql -u root -p -e "SHOW STATUS LIKE 'wsrep_cluster_size'"

      You will see the following output, which indicates that the third node has joined the cluster and that the total number of nodes in the cluster is three.

      Output

      +--------------------+-------+
      | Variable_name      | Value |
      +--------------------+-------+
      | wsrep_cluster_size | 3     |
      +--------------------+-------+

      At this point, the entire cluster is online and communicating successfully. Next, you can ensure the working setup by testing replication in the next section.

      Step 8 — Testing Replication

      You've gone through the steps up to this point so that your cluster can perform replication from any node to any other node, known as active-active replication. Follow these steps to test whether replication is working as expected.

      Write to the First Node

      You'll start by making database changes on your first node. The following commands will create a database called playground and a table inside of this database called equipment.

      • mysql -u root -p -e 'CREATE DATABASE playground;
      • CREATE TABLE playground.equipment ( id INT NOT NULL AUTO_INCREMENT, type VARCHAR(50), quant INT, color VARCHAR(25), PRIMARY KEY(id));
      • INSERT INTO playground.equipment (type, quant, color) VALUES ("slide", 2, "blue");'

      In the previous commands, the CREATE DATABASE statement creates a database named playground. The CREATE TABLE statement creates a table named equipment inside the playground database with an auto-incrementing identifier column called id. The type, quant, and color columns store the type, quantity, and color of the equipment respectively. The INSERT statement inserts an entry of type slide, quantity 2, and color blue.

      You now have one value in your table.

      Read and Write on the Second Node

      Next, look at the second node to verify that replication is working:

      • mysql -u root -p -e 'SELECT * FROM playground.equipment;'

      If replication is working, the data you entered on the first node will be visible here on the second:

      Output

      +----+-------+-------+-------+
      | id | type  | quant | color |
      +----+-------+-------+-------+
      |  1 | slide |     2 | blue  |
      +----+-------+-------+-------+

      From this same node, you can write data to the cluster:

      • mysql -u root -p -e 'INSERT INTO playground.equipment (type, quant, color) VALUES ("swing", 10, "yellow");'

      Read and Write on the Third Node

      From the third node, you can read all of this data by querying the table again:

      • mysql -u root -p -e 'SELECT * FROM playground.equipment;'

      You will see the following output showing the two rows:

      Output

      +----+-------+-------+--------+
      | id | type  | quant | color  |
      +----+-------+-------+--------+
      |  1 | slide |     2 | blue   |
      |  2 | swing |    10 | yellow |
      +----+-------+-------+--------+

      Again, you can add another value from this node:

      • mysql -u root -p -e 'INSERT INTO playground.equipment (type, quant, color) VALUES ("seesaw", 3, "green");'

      Read on the First Node

      Back on the first node, you can verify that your data is available everywhere:

      • mysql -u root -p -e 'SELECT * FROM playground.equipment;'

      You will see the following output, which indicates that the rows are available on the first node.

      Output

      +----+--------+-------+--------+
      | id | type   | quant | color  |
      +----+--------+-------+--------+
      |  1 | slide  |     2 | blue   |
      |  2 | swing  |    10 | yellow |
      |  3 | seesaw |     3 | green  |
      +----+--------+-------+--------+

      You've verified successfully that you can write to all of the nodes and that replication is being performed properly.
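
      As an optional extra check beyond the queries above, you can ask any node whether it still belongs to the primary, fully operational component of the cluster; a healthy node reports Primary:

      • mysql -u root -p -e "SHOW STATUS LIKE 'wsrep_cluster_status'"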

      Conclusion

      At this point, you have a working three-node Galera test cluster configured. If you plan on using a Galera cluster in a production situation, it’s recommended that you begin with no fewer than five nodes.

      Before production use, you may want to take a look at some of the other state snapshot transfer (SST) agents like XtraBackup, which allows you to set up new nodes very quickly and without large interruptions to your active nodes. This does not affect the actual replication, but is a concern when nodes are being initialized.

      If you would like to continue learning about SQL databases, take a look at our How To Manage an SQL Database article.




      How To Install and Configure Postfix as a Send-Only SMTP Server on Debian 10


      Introduction

      Postfix is a mail transfer agent (MTA), an application used to send and receive email. In this tutorial, you will install and configure Postfix so that it can be used to send emails by local applications only — that is, those installed on the same server as Postfix.

      Why would you want to do that?

      If you’re already using a third-party email provider for sending and receiving emails, you do not need to run your own mail server. However, if you manage a cloud server on which you have installed applications that need to send email notifications, running a local, send-only SMTP server is a good alternative to using a third-party email service provider or running a full-blown SMTP server.

      In this tutorial, you’ll install and configure Postfix as a send-only SMTP server on Debian 10.

      Prerequisites

      To follow this tutorial, you will need:

      Note that your server’s hostname should match your domain or subdomain. You can verify the server’s hostname by typing hostname at the command prompt. The output should match the name you gave the server when it was being created.

      Step 1 — Installing Postfix

      In this step, you’ll learn how to install Postfix. You will need two packages: mailutils, which includes programs necessary for Postfix to function, and postfix itself.

      First, update the package database:

      • sudo apt update

      Next, install mailutils:

      • sudo apt install mailutils

      Finally, install postfix:

      • sudo apt install postfix

      Near the end of the installation process, you will be presented with a window that looks like the one in the image below:

      Initial Config Screen

      Press ENTER to continue.

      The default option is Internet Site, which is preselected on the following screen:

      Config Selection Screen

      Press ENTER to continue.

      After that, you'll get another window to set the System mail name:

      System Mail Name Selection

      The System mail name should be the same as the name you assigned to the server when you were creating it. If it shows a subdomain like subdomain.example.com, change it to just example.com. When you've finished, press TAB, then ENTER.

      You now have Postfix installed and are ready to modify its configuration settings.

      Step 2 — Configuring Postfix

      In this step, you'll configure Postfix to process requests to send emails only from the server on which it is running, i.e. from localhost.

      For that to happen, Postfix needs to be configured to listen only on the loopback interface, the virtual network interface that the server uses to communicate internally. To make the change, open the main Postfix configuration file using nano or your favorite text editor:

      • sudo nano /etc/postfix/main.cf

      With the file open, scroll down until you see the following section:

      /etc/postfix/main.cf

      . . .
      mailbox_size_limit = 0
      recipient_delimiter = +
      inet_interfaces = all
      . . .
      

      Change the line that reads inet_interfaces = all to inet_interfaces = loopback-only:

      /etc/postfix/main.cf

      . . .
      mailbox_size_limit = 0
      recipient_delimiter = +
      inet_interfaces = loopback-only
      . . .
      

      Another directive you'll need to modify is mydestination, which is used to specify the list of domains that are delivered via the local_transport mail delivery transport. By default, the values are similar to these:

      /etc/postfix/main.cf

      . . .
      mydestination = $myhostname, example.com, localhost.com, , localhost
      . . .

      The recommended defaults for this directive are given in the code block below, so modify yours to match:

      /etc/postfix/main.cf

      . . .
      mydestination = $myhostname, localhost.$mydomain, $mydomain
      . . .

      Save and close the file.

      Note: If you're hosting multiple domains on a single server, the other domains can also be passed to Postfix using the mydestination directive. However, to configure Postfix in a manner that scales and that does not present issues for such a setup involves additional configurations that are beyond the scope of this article.

      Finally, restart Postfix:

      • sudo systemctl restart postfix

      Step 3 — Testing the SMTP Server

      In this step, you'll test whether Postfix can send emails to an external email account using the mail command, which is part of the mailutils package you installed in Step 1.

      To send a test email, type:

      • echo "This is the body of the email" | mail -s "This is the subject line" your_email_address

      In performing your own test(s), you may use the body and subject line text as-is, or change them to your liking. However, in place of your_email_address, use a valid email address. The domain can be gmail.com, fastmail.com, yahoo.com, or any other email service provider that you use.

      Now check the email address where you sent the test message. You should see the message in your Inbox. If not, check your Spam folder.
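
      If the message never arrives, the Postfix log is the first place to look. On a default Debian 10 installation the log file path below should exist, but treat it as an assumption for your system:

      • sudo tail /var/log/mail.log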

      Note that with this configuration, the address in the From field for the test emails you send will be sammy@example.com, where sammy is your Linux non-root username and the domain is the server's hostname. If you change your username, the From address will also change.

      Step 4 — Forwarding System Mail

      The last thing we want to set up is forwarding, so you'll get emails sent to root on the system at your personal, external email address.

      To configure Postfix so that system-generated emails will be sent to your email address, you need to edit the /etc/aliases file. Open that file now:

      • sudo nano /etc/aliases

      The full contents of the file on a default installation of Debian 10 are as follows:

      /etc/aliases

      mailer-daemon: postmaster
      postmaster: root
      nobody: root
      hostmaster: root
      usenet: root
      news: root
      webmaster: root
      www: root
      ftp: root
      abuse: root
      noc: root
      security: root
      

      The postmaster: root setting ensures that system-generated emails are sent to the root user. You want to edit these settings so these emails are rerouted to your email address. To accomplish that, add the following line below the postmaster: root setting:

      /etc/aliases

      mailer-daemon: postmaster
      postmaster:    root
      root:          your_email_address
      . . .
      

      Replace your_email_address with your personal email address. When finished, save and close the file. For the change to take effect, run the following command:

      • sudo newaliases

      You can test that it works by sending an email to the root account using:

      • echo "This is the body of the email" | mail -s "This is the subject line" root

      You should receive the email at your email address. If not, check your Spam folder.

      Conclusion

      That's all it takes to set up a send-only email server using Postfix. You may want to take some additional steps to protect your domain from spammers, however.

      If you want to receive notifications from your server at a single address, then having emails marked as Spam is less of an issue because you can create a whitelist workaround. However, if you want to send emails to potential site users (such as confirmation emails for a message board sign-up), you should definitely set up SPF records and DKIM so your server's emails are more likely to be seen as legitimate.
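
      To illustrate the SPF half of that advice (the exact record depends on your DNS host and sending setup, so treat this as a sketch), a minimal SPF policy is published as a TXT record on your domain:

      example.com.  IN  TXT  "v=spf1 a mx ~all"

      This example authorizes hosts matching the domain's A and MX records to send mail for it and asks receivers to soft-fail anything else.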

      If configured correctly, these steps make it difficult to send Spam with an address that appears to originate from your domain. Taking these additional configuration steps will also make it more likely for common mail providers to see emails from your server as legitimate.


