      How to Install and Secure phpMyAdmin with Nginx on an Ubuntu 18.04 server


      Introduction

      While many users need the functionality of a database system like MySQL, interacting with the system solely from the MySQL command-line client requires familiarity with the SQL language, so it may not be the preferred interface for some.

      phpMyAdmin was created so that users can interact with MySQL through an intuitive web interface, running alongside a PHP development environment. In this guide, we’ll discuss how to install phpMyAdmin on top of an Nginx server, and how to configure the server for increased security.

      Note: There are important security considerations when using software like phpMyAdmin: it runs on the database server, it handles database credentials, and it enables users to execute arbitrary SQL queries against your database. Because phpMyAdmin is a widely-deployed PHP application, it is frequently targeted for attack. We will go over some security measures you can take in this tutorial so that you can make informed decisions.

      Prerequisites

      Before you get started with this guide, you'll need the following available to you:

      • An Ubuntu 18.04 server running a LEMP stack (Linux, Nginx, MySQL, and PHP), set up with a non-root sudo user.

      Additionally, because phpMyAdmin handles authentication using MySQL credentials, it is strongly advisable to install an SSL/TLS certificate to enable encrypted traffic between server and client. If you do not have an existing domain configured with a valid certificate, you can follow this guide on securing Nginx with Let's Encrypt on Ubuntu 18.04.

      Warning: If you don’t have an SSL/TLS certificate installed on the server and you still want to proceed, please consider enforcing access via SSH Tunnels as explained in Step 5 of this guide.

      Once you have met these prerequisites, you can go ahead with the rest of the guide.

      Step 1 — Installing phpMyAdmin

      The first thing we need to do is install phpMyAdmin on the LEMP server. We’re going to use the default Ubuntu repositories to achieve this goal.

      Let's start by updating the server's package index:

      • sudo apt update

      Now you can install phpMyAdmin with:

      • sudo apt install phpmyadmin

      During the installation process, you will be prompted to choose a web server (either apache2 or lighttpd) to configure. Because we are using Nginx as our web server, we shouldn't make a choice here. Press TAB and then OK to advance to the next step.

      Next, you’ll be prompted whether to use dbconfig-common for configuring the application database. Select Yes. This will set up the internal database and administrative user for phpMyAdmin. You will be asked to define a new password for the phpmyadmin MySQL user. You can also leave it blank and let phpMyAdmin randomly create a password.

      The installation will now finish. For the Nginx web server to find and serve the phpMyAdmin files correctly, we’ll need to create a symbolic link from the installation files to Nginx's document root directory:

      • sudo ln -s /usr/share/phpmyadmin /var/www/html

      Your phpMyAdmin installation is now operational. To access the interface, go to your server's domain name or public IP address followed by /phpmyadmin in your web browser:

      https://server_domain_or_IP/phpmyadmin
      

      phpMyAdmin login screen

      As mentioned before, phpMyAdmin handles authentication using MySQL credentials, which means you should use the same username and password you would normally use to connect to the database via console or via an API. If you need help creating MySQL users, check this guide on How To Manage an SQL Database.
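      For reference, here is a minimal sketch of creating a dedicated MySQL user from the MySQL console; the user name, password, and database name below are placeholders, so adapt them to your own setup:

      • CREATE USER 'pma_user'@'localhost' IDENTIFIED BY 'strong_password_here';
      • GRANT ALL PRIVILEGES ON example_database.* TO 'pma_user'@'localhost';
      • FLUSH PRIVILEGES;

      You can then log in to phpMyAdmin with that user's credentials.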

      Note: Logging into phpMyAdmin as the root MySQL user is discouraged because it represents a significant security risk. We'll see how to disable root login in a subsequent step of this guide.

      Your phpMyAdmin installation should be completely functional at this point. However, by installing a web interface, we've exposed our MySQL database server to the outside world. Because of phpMyAdmin's popularity, and the large amounts of data it may provide access to, installations like these are common targets for attacks. In the following sections of this guide, we'll see a few different ways in which we can make our phpMyAdmin installation more secure.

      Step 2 — Changing phpMyAdmin's Default Location

      One of the most basic ways to protect your phpMyAdmin installation is by making it harder to find. Bots will scan for common paths like /phpmyadmin, /pma, /admin, and /mysql. Changing the interface's URL from /phpmyadmin to something non-standard will make it much harder for automated scripts to find your phpMyAdmin installation and attempt brute-force attacks.

      With our phpMyAdmin installation, we've created a symbolic link pointing to /usr/share/phpmyadmin, where the actual application files are located. To change phpMyAdmin's interface URL, we will rename this symbolic link.

      First, let's navigate to the Nginx document root directory and list the files it contains to get a better sense of the change we'll make:

      • cd /var/www/html/
      • ls -l

      You'll receive the following output:

      Output

      total 8
      -rw-r--r-- 1 root root 612 Apr 8 13:30 index.nginx-debian.html
      lrwxrwxrwx 1 root root  21 Apr 8 15:36 phpmyadmin -> /usr/share/phpmyadmin

      The output shows that we have a symbolic link called phpmyadmin in this directory. We can change this link name to whatever we'd like. This will in turn change phpMyAdmin's access URL, which can help obscure the endpoint from bots hardcoded to search common endpoint names.

      Choose a name that obscures the purpose of the endpoint. In this guide, we'll name our endpoint /nothingtosee, but you should choose an alternate name. To accomplish this, we'll rename the link:

      • sudo mv phpmyadmin nothingtosee
      • ls -l

      After running the above commands, you’ll receive this output:

      Output

      total 8
      -rw-r--r-- 1 root root 612 Apr 8 13:30 index.nginx-debian.html
      lrwxrwxrwx 1 root root  21 Apr 8 15:36 nothingtosee -> /usr/share/phpmyadmin

      Now, if you go to the old URL, you'll get a 404 error:

      https://server_domain_or_IP/phpmyadmin
      

      phpMyAdmin 404 error

      Your phpMyAdmin interface will now be available at the new URL we just configured:

      https://server_domain_or_IP/nothingtosee
      

      phpMyAdmin login screen

      By obfuscating phpMyAdmin's real location on the server, you're reducing its exposure to automated scans and making manual brute-force attempts more difficult.

      Step 3 — Disabling Root Login

      On MySQL as well as within regular Linux systems, the root account is a special administrative account with unrestricted access to the system. In addition to being a privileged account, it's a known login name, which makes it an obvious target for brute-force attacks. To minimize risks, we'll configure phpMyAdmin to deny any login attempts coming from the user root. This way, even if you provide valid credentials for the user root, you'll still get an "access denied" error and won't be allowed to log in.

      Because we chose to use dbconfig-common to configure and store phpMyAdmin settings, the default configuration is currently stored in the database. We'll need to create a new config.inc.php file to define our custom settings.

      Even though the PHP files for phpMyAdmin are located inside /usr/share/phpmyadmin, the application uses configuration files located at /etc/phpmyadmin. We will create a new custom settings file inside /etc/phpmyadmin/conf.d, and name it pma_secure.php:

      • sudo nano /etc/phpmyadmin/conf.d/pma_secure.php

      The following configuration file contains the necessary settings to disable passwordless logins (AllowNoPassword set to false) and root login (AllowRoot set to false):

      /etc/phpmyadmin/conf.d/pma_secure.php

      <?php
      
      # PhpMyAdmin Settings
      # This should be set to a random string of at least 32 chars
      $cfg['blowfish_secret'] = '3!#32@3sa(+=_4?),5XP_:U%%834sdfSdg43yH#{o';
      
      $i=0;
      $i++;
      
      $cfg['Servers'][$i]['auth_type'] = 'cookie';
      $cfg['Servers'][$i]['AllowNoPassword'] = false;
      $cfg['Servers'][$i]['AllowRoot'] = false;
      
      ?>
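
      If you'd prefer to generate your own random string for blowfish_secret instead of reusing the example above, one option (using the openssl tool that ships with Ubuntu 18.04) is to run the following on the server and paste its output between the quotes:

      • openssl rand -base64 32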
      

      Save the file when you're done editing by pressing CTRL+X, followed by Y to confirm the changes, and then ENTER. The changes will apply automatically. If you reload the login page now and try to log in as root, you will get an Access Denied error:

      access denied

      Root login is now prohibited on your phpMyAdmin installation. This security measure will block brute-force scripts from trying to guess the root database password on your server. Moreover, it will enforce the usage of less-privileged MySQL accounts for accessing phpMyAdmin's web interface, which by itself is an important security practice.

      Step 4 — Creating an Authentication Gateway

      Hiding your phpMyAdmin installation in an unusual location might sidestep some automated bots scanning the network, but it's useless against targeted attacks. To better protect a web application with restricted access, it's generally more effective to stop attackers before they can even reach the application. This way, they'll be unable to use generic exploits and brute-force attacks to guess access credentials.

      In the specific case of phpMyAdmin, it's even more important to keep the login interface locked away. By keeping it open to the world, you're offering a brute-force platform for attackers to guess your database credentials.

      Adding an extra layer of authentication to your phpMyAdmin installation enables you to increase security. Users will be required to pass through an HTTP authentication prompt before ever seeing the phpMyAdmin login screen. Most web servers, including Nginx, provide this capability natively.

      To set this up, we first need to create a password file to store the authentication credentials. Nginx requires that passwords be encrypted using the crypt() function. The OpenSSL suite, which should already be installed on your server, includes this functionality.

      To create an encrypted password, type:

      • openssl passwd

      You will be prompted to enter and confirm the password that you wish to use. The utility will then display an encrypted version of the password that will look something like this:

      Output

      O5az.RSPzd.HE

      Copy this value, as you will need to paste it into the authentication file we'll be creating.
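      Note: only the first eight characters of the password are significant with this default crypt()-style hash. If you'd prefer a stronger Apache-style MD5 hash, which Nginx also accepts in this file, you can generate one instead with:

      • openssl passwd -apr1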

      Now, create an authentication file. We'll call this file pma_pass and place it in the Nginx configuration directory:

      • sudo nano /etc/nginx/pma_pass

      In this file, you’ll specify the username you would like to use, followed by a colon (:), followed by the encrypted version of the password you received from the openssl passwd utility.

      We are going to name our user sammy, but you should choose a different username. The file should look like this:

      /etc/nginx/pma_pass

      sammy:O5az.RSPzd.HE
      

      Save and close the file when you're done.

      Now we're ready to modify the Nginx configuration file. For this guide, we'll use the configuration file located at /etc/nginx/sites-available/example.com. You should use the relevant Nginx configuration file for the web location where phpMyAdmin is currently hosted. Open this file in your text editor to get started:

      • sudo nano /etc/nginx/sites-available/example.com

      Locate the server block, and the location / section within it. We need to create a new location section within this block to match phpMyAdmin's current path on the server. In this guide, phpMyAdmin's location relative to the web root is /nothingtosee:

      /etc/nginx/sites-available/example.com

      server {
          . . .
      
              location / {
                      try_files $uri $uri/ =404;
              }
      
              location /nothingtosee {
                      # Settings for phpMyAdmin will go here
              }
      
          . . .
      }
      

      Within this block, we'll need to set up two different directives: auth_basic, which defines the message that will be displayed on the authentication prompt, and auth_basic_user_file, pointing to the file we just created. This is how your configuration file should look when you're finished:

      /etc/nginx/sites-available/example.com

      server {
          . . .
      
              location /nothingtosee {
                      auth_basic "Admin Login";
                      auth_basic_user_file /etc/nginx/pma_pass;
              }
      
      
          . . .
      }
      

      Save and close the file when you're done. To check whether the configuration file is valid, you can run:

      • sudo nginx -t

      The following output is expected:

      Output

      nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
      nginx: configuration file /etc/nginx/nginx.conf test is successful

      To activate the new authentication gate, you must reload the web server:

      • sudo systemctl reload nginx

      Now, if you visit the phpMyAdmin URL in your web browser, you should be prompted for the username and password you added to the pma_pass file:

      https://server_domain_or_IP/nothingtosee
      

      Nginx authentication page

      Once you enter your credentials, you'll be taken to the standard phpMyAdmin login page.

      Note: If refreshing the page does not work, you may have to clear your cache or use a different browser session if you've already been using phpMyAdmin.

      In addition to providing an extra layer of security, this gateway will help keep your MySQL logs clean of spammy authentication attempts.

      Step 5 — Setting Up Access via Encrypted Tunnels (Optional)

      For increased security, it is possible to lock down your phpMyAdmin installation to authorized hosts only. You can whitelist authorized hosts in your Nginx configuration file, so that any request coming from an IP address that is not on the list will be denied.

      Even though this feature alone can be enough in some use cases, it's not always the best long-term solution, mainly due to the fact that most people don't access the Internet from static IP addresses. As soon as you get a new IP address from your Internet provider, you'll be unable to get to the phpMyAdmin interface until you update the Nginx configuration file with your new IP address.

      For a more robust long-term solution, you can use IP-based access control to create a setup in which users will only have access to your phpMyAdmin interface if they're accessing from either an authorized IP address or localhost via SSH tunneling. We'll see how to set this up in the sections below.

      Combining IP-based access control with SSH tunneling greatly increases security because it fully blocks access coming from the public internet (except for authorized IPs), in addition to providing a secure channel between user and server through the use of encrypted tunnels.

      Setting Up IP-Based Access Control on Nginx

      On Nginx, IP-based access control can be defined in the corresponding location block of a given site, using the directives allow and deny. For instance, if we want to only allow requests coming from a given host, we should include the following two lines, in this order, inside the relevant location block for the site we would like to protect:

      allow hostname_or_IP;
      deny all;
      

      You can allow as many hosts as you want; you only need to include one allow line for each authorized host or IP inside the respective location block for the site you're protecting. The directives are evaluated in the order they are listed, until a match is found or the request is denied by the deny all directive.

      We'll now configure Nginx to only allow requests coming from localhost or your current IP address. First, you'll need to know the current public IP address your local machine is using to connect to the Internet. There are various ways to obtain this information; for simplicity, we're going to use the service provided by ipinfo.io. You can either open the URL https://ipinfo.io/ip in your browser, or run the following command from your local machine:

      • curl https://ipinfo.io/ip

      You should get a simple IP address as output, like this:

      Output

      203.0.113.111

      That is your current public IP address. We'll configure phpMyAdmin's location block to only allow requests coming from that IP, in addition to localhost. To do so, we'll once again edit the configuration block for phpMyAdmin inside /etc/nginx/sites-available/example.com.

      Open the Nginx configuration file using your command-line editor of choice:

      • sudo nano /etc/nginx/sites-available/example.com

      Because we already have an access rule within our current configuration, we need to combine it with IP-based access control using the directive satisfy all. This way, we can keep the current HTTP authentication prompt for increased security.

      This is how your phpMyAdmin Nginx configuration should look after you're done editing:

      /etc/nginx/sites-available/example.com

      server {
          . . .
      
          location /nothingtosee {
              satisfy all; #requires both conditions
      
              allow 203.0.113.111; #allow your IP
              allow 127.0.0.1; #allow localhost via SSH tunnels
              deny all; #deny all other sources
      
              auth_basic "Admin Login";
              auth_basic_user_file /etc/nginx/pma_pass;
          }
      
          . . .
      }
      

      Remember to replace nothingtosee with the actual path where phpMyAdmin can be found, and the highlighted IP address with your current public IP address.

      Save and close the file when you're done. To check whether the configuration file is valid, you can run:

      • sudo nginx -t

      The following output is expected:

      Output

      nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
      nginx: configuration file /etc/nginx/nginx.conf test is successful

      Now reload the web server so the changes take effect:

      • sudo systemctl reload nginx

      Because your IP address is explicitly listed as an authorized host, your access shouldn't be disturbed. Anyone else trying to access your phpMyAdmin installation will now get a 403 error (Forbidden):

      https://server_domain_or_IP/nothingtosee
      

      403 error

      In the next section, we'll see how to use SSH tunneling to access the web server through local requests. This way, you'll still be able to access phpMyAdmin's interface even when your IP address changes.

      Accessing phpMyAdmin Through an Encrypted Tunnel

      SSH tunneling works as a way of redirecting network traffic through encrypted channels. By running an ssh command similar to what you would use to log into a server, you can create a secure "tunnel" between your local machine and that server. All traffic coming in on a given local port can now be redirected through the encrypted tunnel and use the remote server as a proxy, before reaching out to the internet. It's similar to what happens when you use a VPN (Virtual Private Network), however SSH tunneling is much simpler to set up.

      We'll use SSH tunneling to proxy our requests to the remote web server running phpMyAdmin. By creating a tunnel between your local machine and the server where phpMyAdmin is installed, you can redirect local requests to the remote web server, and what's more important, traffic will be encrypted and requests will reach Nginx as if they're coming from localhost. This way, no matter what IP address you're connecting from, you'll be able to securely access phpMyAdmin's interface.

      Because the traffic between your local machine and the remote web server will be encrypted, this is a safe alternative for situations where you can't have an SSL/TLS certificate installed on the web server running phpMyAdmin.

      From your local machine, run this command whenever you need access to phpMyAdmin:

      • ssh user@server_domain_or_IP -L 8000:localhost:80 -L 8443:localhost:443 -N

      Let's examine each part of the command:

      • user: SSH user to connect to the server where phpMyAdmin is running
      • server_domain_or_IP: SSH host where phpMyAdmin is running
      • -L 8000:localhost:80 redirects HTTP traffic on port 8000
      • -L 8443:localhost:443 redirects HTTPS traffic on port 8443
      • -N: do not execute remote commands

      Note: This command will block the terminal until interrupted with a CTRL+C, in which case it will end the SSH connection and stop the packet redirection. If you'd prefer to run this command in background mode, you can use the SSH option -f.
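      For reference, a backgrounded variant of the same tunnel (using the same placeholder user, host, and ports as above) would simply add -f; you can stop it later by finding and killing the corresponding ssh process:

      • ssh user@server_domain_or_IP -L 8000:localhost:80 -L 8443:localhost:443 -N -f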

      Now, go to your browser and replace server_domain_or_IP with localhost:PORT, where PORT is either 8000 for HTTP or 8443 for HTTPS:

      http://localhost:8000/nothingtosee
      
      https://localhost:8443/nothingtosee
      

      phpMyAdmin login screen

      Note: If you're accessing phpMyAdmin via https, you might get an alert message questioning the security of the SSL certificate. This happens because the domain name you're using (localhost) doesn't match the address registered within the certificate (domain where phpMyAdmin is actually being served). It is safe to proceed.

      All requests on localhost:8000 (HTTP) and localhost:8443 (HTTPS) are now being redirected through a secure tunnel to your remote phpMyAdmin application. Not only have you increased security by disabling public access to your phpMyAdmin, you also protected all traffic between your local computer and the remote server by using an encrypted tunnel to send and receive data.

      If you'd like to enforce the use of SSH tunneling for anyone who wants access to your phpMyAdmin interface (including you), you can do that by removing any other authorized IPs from the Nginx configuration file, leaving 127.0.0.1 as the only allowed host to access that location. Considering nobody will be able to make direct requests to phpMyAdmin, it is safe to remove HTTP authentication in order to simplify your setup. This is how your configuration file would look in such a scenario:

      /etc/nginx/sites-available/example.com

      server {
          . . .
      
          location /nothingtosee { 
              allow 127.0.0.1; #allow localhost only
              deny all; #deny all other sources
          }
      
          . . .
      }
      

      Once you reload Nginx's configuration with sudo systemctl reload nginx, your phpMyAdmin installation will be locked down and users will be required to use SSH tunnels in order to access phpMyAdmin's interface via redirected requests.

      Conclusion

      In this tutorial, we saw how to install phpMyAdmin on Ubuntu 18.04 running Nginx as the web server. We also covered advanced methods to secure a phpMyAdmin installation on Ubuntu, such as disabling root login, creating an extra layer of authentication, and using SSH tunneling to access a phpMyAdmin installation via local requests only.

      After completing this tutorial, you should be able to manage your MySQL databases from a reasonably secure web interface. This user interface exposes most of the functionality available via the MySQL command line. You can browse databases and schema, execute queries, and create new data sets and structures.




      How To Install and Configure Zabbix to Securely Monitor Remote Servers on Ubuntu 18.04


      The author selected the Open Source Initiative to receive a donation as part of the Write for DOnations program.

      Introduction

      Zabbix is open-source monitoring software for networks and applications. It offers real-time monitoring of thousands of metrics collected from servers, virtual machines, network devices, and web applications. These metrics can help you determine the current health of your IT infrastructure and detect problems with hardware or software components before customers complain. Useful information is stored in a database so you can analyze data over time and improve the quality of provided services, or plan upgrades of your equipment.

      Zabbix uses several options for collecting metrics, including agentless monitoring of user services and client-server architecture. To collect server metrics, it uses a small agent on the monitored client to gather data and send it to the Zabbix server. Zabbix supports encrypted communication between the server and connected clients, so your data is protected while it travels over insecure networks.

      The Zabbix server stores its data in a relational database powered by MySQL, PostgreSQL, or Oracle. You can also store historical data in NoSQL databases like Elasticsearch and TimescaleDB. Zabbix provides a web interface so you can view data and configure system settings.

      In this tutorial, you will configure two machines. One will be configured as the server, and the other as a client that you’ll monitor. The server will use a MySQL database to record monitoring data and use Apache to serve the web interface.

      Prerequisites

      To follow this tutorial, you will need:

      • Two Ubuntu 18.04 servers set up by following the Initial Server Setup Guide for Ubuntu 18.04, including a non-root user with sudo privileges and a firewall configured with ufw. On one server, you will install Zabbix; this tutorial will refer to this as the Zabbix server. It will monitor your second server; this second server will be referred to as the second Ubuntu server.

      • The server that will run the Zabbix server needs Apache, MySQL, and PHP installed. Follow this guide to configure those on your Zabbix server.

      Additionally, because the Zabbix Server is used to access valuable information about your infrastructure that you would not want unauthorized users to access, it’s important that you keep your server secure by installing a TLS/SSL certificate. This is optional but strongly encouraged. You can follow the Let’s Encrypt on Ubuntu 18.04 guide to obtain the free TLS/SSL certificate.

      Step 1 — Installing the Zabbix Server

      First, you need to install Zabbix on the server where you installed MySQL, Apache, and PHP. Log into this machine as your non-root user:

      • ssh sammy@zabbix_server_ip_address

      Zabbix is available in Ubuntu’s package manager, but it’s outdated, so use the official Zabbix repository to install the latest stable version. Download and install the repository configuration package:

      • wget https://repo.zabbix.com/zabbix/4.2/ubuntu/pool/main/z/zabbix-release/zabbix-release_4.2-1+bionic_all.deb
      • sudo dpkg -i zabbix-release_4.2-1+bionic_all.deb

      You will see the following output:

      Output

      Selecting previously unselected package zabbix-release.
      (Reading database ... 61483 files and directories currently installed.)
      Preparing to unpack zabbix-release_4.2-1+bionic_all.deb ...
      Unpacking zabbix-release (4.2-1+bionic) ...
      Setting up zabbix-release (4.2-1+bionic) ...

      Update the package index so the new repository is included:

      • sudo apt update

      Then install the Zabbix server and web frontend with MySQL database support:

      • sudo apt install zabbix-server-mysql zabbix-frontend-php

      Also, install the Zabbix agent, which will let you collect data about the Zabbix server status itself.

      • sudo apt install zabbix-agent

      Before you can use Zabbix, you have to set up a database to hold the data that the Zabbix server will collect from its agents. You can do this in the next step.

      Step 2 — Configuring the MySQL Database for Zabbix

      You need to create a new MySQL database and populate it with some basic information in order to make it suitable for Zabbix. You'll also create a specific user for this database so Zabbix isn't logging into MySQL with the root account.

      Log into MySQL as the root user using the root password that you set up during the MySQL server installation:

      • mysql -uroot -p

      Create the Zabbix database with UTF-8 character support:

      • create database zabbix character set utf8 collate utf8_bin;

      Then create a user that the Zabbix server will use, give it access to the new database, and set the password for the user:

      • grant all privileges on zabbix.* to zabbix@localhost identified by 'your_zabbix_mysql_password';

      Then apply these new permissions:

      • flush privileges;

      That takes care of the user and the database. Exit out of the database console:

      • quit;

      Next you have to import the initial schema and data. The Zabbix installation provided you with a file that sets this up.

      Run the following command to set up the schema and import the data into the zabbix database. Use zcat since the data in the file is compressed.

      • zcat /usr/share/doc/zabbix-server-mysql/create.sql.gz | mysql -uzabbix -p zabbix

      Enter the password for the zabbix MySQL user that you configured when prompted.

      This command will not output any errors if it was successful. If you see the error ERROR 1045 (28000): Access denied for user 'zabbix'@'localhost' (using password: YES) then make sure you used the password for the zabbix user and not the root user.

      In order for the Zabbix server to use this database, you need to set the database password in the Zabbix server configuration file. Open the configuration file in your preferred text editor. This tutorial will use nano:

      • sudo nano /etc/zabbix/zabbix_server.conf

      Look for the following section of the file:

      /etc/zabbix/zabbix_server.conf

      ### Option: DBPassword                           
      #       Database password. Ignored for SQLite.   
      #       Comment this line if no password is used.
      #                                                
      # Mandatory: no                                  
      # Default:                                       
      # DBPassword=
      

      These comments in the file explain how to connect to the database. You need to set the DBPassword value in the file to the password for your database user. Add this line below those comments to configure the database:

      /etc/zabbix/zabbix_server.conf

      ...
      DBPassword=your_zabbix_mysql_password
      

      Save and close zabbix_server.conf by pressing CTRL+X, followed by Y and then ENTER if you're using nano.

      That takes care of the Zabbix server configuration. Next, you will make some modifications to your PHP setup in order for the Zabbix web interface to work properly.

      Step 3 — Configuring PHP for Zabbix

      The Zabbix web interface is written in PHP and requires some special PHP server settings. The Zabbix installation process created an Apache configuration file that contains these settings. It is located in the directory /etc/zabbix and is loaded automatically by Apache. You need to make a small change to this file, so open it up with the following:

      • sudo nano /etc/zabbix/apache.conf

      The file contains PHP settings that meet the necessary requirements for the Zabbix web interface. However, the timezone setting is commented out by default. To make sure that Zabbix uses the correct time, you need to set the appropriate timezone.

      /etc/zabbix/apache.conf

      ...
      <IfModule mod_php7.c>
          php_value max_execution_time 300
          php_value memory_limit 128M
          php_value post_max_size 16M
          php_value upload_max_filesize 2M
          php_value max_input_time 300
          php_value always_populate_raw_post_data -1
          # php_value date.timezone Europe/Riga
      </IfModule>
      

      Uncomment the timezone line, highlighted in the preceding code block, and change it to your timezone. You can use this list of supported time zones to find the right one for you. Then save and close the file.

      Now restart Apache to apply these new settings.

      • sudo systemctl restart apache2

      You can now start the Zabbix server.

      • sudo systemctl start zabbix-server

      Then check whether the Zabbix server is running properly:

      • sudo systemctl status zabbix-server

      You will see the following status:

      Output

      ● zabbix-server.service - Zabbix Server
         Loaded: loaded (/lib/systemd/system/zabbix-server.service; disabled; vendor preset: enabled)
         Active: active (running) since Fri 2019-04-05 08:50:54 UTC; 3s ago
        Process: 16497 ExecStart=/usr/sbin/zabbix_server -c $CONFFILE (code=exited, status=0/SUCCESS)
      ...

      Finally, enable the server to start at boot time:

      • sudo systemctl enable zabbix-server

      The server is set up and connected to the database. Next, set up the web frontend.

      Note: As mentioned in the Prerequisites section, it is recommended that you enable SSL/TLS on your server. You can follow this tutorial now to obtain a free SSL certificate for Apache on Ubuntu 18.04. After obtaining your SSL/TLS certificates, you can come back and complete this tutorial.

      Step 4 — Configuring Settings for the Zabbix Web Interface

      The web interface lets you see reports and add hosts that you want to monitor, but it needs some initial setup before you can use it. Launch your browser and go to the address http://zabbix_server_name/zabbix/. On the first screen, you will see a welcome message. Click Next step to continue.

      On the next screen, you will see the table that lists all of the prerequisites to run Zabbix.

      Prerequisites

      All of the values in this table must be OK, so verify that they are. Be sure to scroll down and look at all of the prerequisites. Once you've verified that everything is ready to go, click Next step to proceed.

      The next screen asks for database connection information.

      DB Connection

      You told the Zabbix server about your database, but the Zabbix web interface also needs access to the database to manage hosts and read data. Therefore enter the MySQL credentials you configured in Step 2 and click Next step to proceed.

      On the next screen, you can leave the options at their default values.

      Zabbix Server Details

      The Name is optional; it is used in the web interface to distinguish one server from another in case you have several monitoring servers. Click Next step to proceed.

      The next screen will show the pre-installation summary so you can confirm everything is correct.

      Summary

      Click Next step to proceed to the final screen.

      The web interface setup is complete! This process creates the configuration file /usr/share/zabbix/conf/zabbix.conf.php which you could back up and use in the future. Click Finish to proceed to the login screen. The default user is Admin and the password is zabbix.

      Before you log in, set up the Zabbix agent on your second Ubuntu server.

      Step 5 — Installing and Configuring the Zabbix Agent

      Now you need to configure the agent software that will send monitoring data to the Zabbix server.

      Log in to the second Ubuntu server:

      • ssh sammy@second_ubuntu_server_ip_address

      Then, just like on the Zabbix server, run the following commands to install the repository configuration package:

      • wget https://repo.zabbix.com/zabbix/4.2/ubuntu/pool/main/z/zabbix-release/zabbix-release_4.2-1+bionic_all.deb
      • sudo dpkg -i zabbix-release_4.2-1+bionic_all.deb

      Next, update the package index:

      • sudo apt update

      Then install the Zabbix agent:

      • sudo apt install zabbix-agent

      Zabbix supports certificate-based encryption, but setting up a certificate authority is beyond the scope of this tutorial. Instead, you can use pre-shared keys (PSK) to secure the connection between the server and the agent.

      First, generate a PSK:

      • sudo sh -c "openssl rand -hex 32 > /etc/zabbix/zabbix_agentd.psk"

      Show the key so you can copy it somewhere. You will need it to configure the host.

      • cat /etc/zabbix/zabbix_agentd.psk

      The key will look something like this:

      Output

      12eb854dea38ac9ee7d1ded2d74cee6262b0a56710f6946f7913d674ab82cdd4

      Now edit the Zabbix agent settings to set up its secure connection to the Zabbix server. Open the agent configuration file in your text editor:

      • sudo nano /etc/zabbix/zabbix_agentd.conf

      Each setting within this file is documented via informative comments throughout the file, but you only need to edit some of them.

      First you have to edit the IP address of the Zabbix server. Find the following section:

      /etc/zabbix/zabbix_agentd.conf

      ...
      ### Option: Server
      #       List of comma delimited IP addresses (or hostnames) of Zabbix servers.
      #       Incoming connections will be accepted only from the hosts listed here.
      #       If IPv6 support is enabled then '127.0.0.1', '::127.0.0.1', '::ffff:127.0.0.1' are treated equally.
      #
      # Mandatory: no
      # Default:
      # Server=
      
      Server=127.0.0.1
      ...
      

      Change the default value to the IP of your Zabbix server:

      /etc/zabbix/zabbix_agentd.conf

      ...
      Server=zabbix_server_ip_address
      ...
      

      Next, find the section that configures the secure connection to the Zabbix server and enable pre-shared key support. Find the TLSConnect section, which looks like this:

      /etc/zabbix/zabbix_agentd.conf

      ...
      ### Option: TLSConnect
      #       How the agent should connect to server or proxy. Used for active checks.
      #       Only one value can be specified:
      #               unencrypted - connect without encryption
      #               psk         - connect using TLS and a pre-shared key
      #               cert        - connect using TLS and a certificate
      #
      # Mandatory: yes, if TLS certificate or PSK parameters are defined (even for 'unencrypted' connection)
      # Default:
      # TLSConnect=unencrypted
      ...
      

      Then add this line to configure pre-shared key support:

      /etc/zabbix/zabbix_agentd.conf

      ...
      TLSConnect=psk
      ...
      

      Next, locate the TLSAccept section, which looks like this:

      /etc/zabbix/zabbix_agentd.conf

      ...
      ### Option: TLSAccept
      #       What incoming connections to accept.
      #       Multiple values can be specified, separated by comma:
      #               unencrypted - accept connections without encryption
      #               psk         - accept connections secured with TLS and a pre-shared key
      #               cert        - accept connections secured with TLS and a certificate
      #
      # Mandatory: yes, if TLS certificate or PSK parameters are defined (even for 'unencrypted' connection)
      # Default:
      # TLSAccept=unencrypted
      ...
      

      Configure incoming connections to support pre-shared keys by adding this line:

      /etc/zabbix/zabbix_agentd.conf

      ...
      TLSAccept=psk
      ...
      

      Next, find the TLSPSKIdentity section, which looks like this:

      /etc/zabbix/zabbix_agentd.conf

      ...
      ### Option: TLSPSKIdentity
      #       Unique, case sensitive string used to identify the pre-shared key.
      #
      # Mandatory: no
      # Default:
      # TLSPSKIdentity=
      ...
      

      Choose a unique name to identify your pre-shared key by adding this line:

      /etc/zabbix/zabbix_agentd.conf

      ...
      TLSPSKIdentity=PSK 001
      ...
      

      You'll use this as the PSK ID when you add your host through the Zabbix web interface.

      Then set the option that points to your previously created pre-shared key. Locate the TLSPSKFile option:

      /etc/zabbix/zabbix_agentd.conf

      ...
      ### Option: TLSPSKFile
      #       Full pathname of a file containing the pre-shared key.
      #
      # Mandatory: no
      # Default:
      # TLSPSKFile=
      ...
      

      Add this line to point the Zabbix agent to your PSK file you created:

      /etc/zabbix/zabbix_agentd.conf

      ...
      TLSPSKFile=/etc/zabbix/zabbix_agentd.psk
      ...
      

      Save and close the file. Now you can restart the Zabbix agent and set it to start at boot time:

      • sudo systemctl restart zabbix-agent
      • sudo systemctl enable zabbix-agent

      For good measure, check that the Zabbix agent is running properly:

      • sudo systemctl status zabbix-agent

      You will see the following status, indicating the agent is running:

      Output

      ● zabbix-agent.service - Zabbix Agent
         Loaded: loaded (/lib/systemd/system/zabbix-agent.service; enabled; vendor preset: enabled)
         Active: active (running) since Fri 2019-04-05 09:03:04 UTC; 1s ago
      ...

      The agent will listen on port 10050 for connections from the server. Configure UFW to allow connections to this port:

      • sudo ufw allow 10050/tcp

      You can learn more about UFW in How To Set Up a Firewall with UFW on Ubuntu 18.04.

      Your agent is now ready to send data to the Zabbix server. But in order to use it, you have to link to it from the server's web console. In the next step, you will complete the configuration.

      Step 6 — Adding the New Host to the Zabbix Server

      Installing an agent on a server you want to monitor is only half of the process. Each host you want to monitor needs to be registered on the Zabbix server, which you can do through the web interface.

      Log in to the Zabbix Server web interface by navigating to the address http://zabbix_server_name/zabbix/.

      The Zabbix login screen

      When you have logged in, click on Configuration, and then Hosts in the top navigation bar. Then click the Create host button in the top right corner of the screen. This will open the host configuration page.

      Creating a host

      Adjust the Host name and IP address to reflect the host name and IP address of your second Ubuntu server, then add the host to a group. You can select an existing group, for example Linux servers, or create your own group. The host can be in multiple groups. To do this, enter the name of an existing or new group in the Groups field and select the desired value from the proposed list.

      Once you've added the group, click the Templates tab.

      Adding a template to the host

      Type Template OS Linux in the Search field and then click Add to add this template to the host.

      Next, navigate to the Encryption tab. Select PSK for both Connections to host and Connections from host. Then set PSK identity to PSK 001, which is the value of the TLSPSKIdentity setting of the Zabbix agent you configured previously. Then set PSK value to the key you generated for the Zabbix agent. It's the one stored in the file /etc/zabbix/zabbix_agentd.psk on the agent machine.

      Setting up the encryption

      Finally, click the Add button at the bottom of the form to create the host.

      You will see your new host in the list. Wait for a minute and reload the page to see green labels indicating that everything is working fine and the connection is encrypted.

      Zabbix shows your new host

      If you have additional servers you need to monitor, log in to each host, install the Zabbix agent, generate a PSK, configure the agent, and add the host to the web interface following the same steps you followed to add your first host.

      The Zabbix server is now monitoring your second Ubuntu server. Now, set up email notifications to be notified about problems.

      Step 7 — Configuring Email Notifications

      Zabbix automatically supports several types of notifications: email, Jabber, SMS, etc. You can also use alternative notification methods, such as Telegram or Slack. You can see the full list of integrations here.

      The simplest communication method is email, and this tutorial will configure notifications for this media type.

      Click on Administration, and then Media types in the top navigation bar. You will see the list of all media types. Click on Email.

      Adjust the SMTP options according to the settings provided by your email service. This tutorial uses Gmail's SMTP capabilities to set up email notifications; if you would like more information about setting this up, see How To Use Google's SMTP Server.


      Note: If you use 2-Step Verification with Gmail, you need to generate an App Password for Zabbix. You don't need to remember this password; you'll only have to enter it once during setup. You will find instructions on how to generate this password in the Google Help Center.

      You can also choose the message format—html or plain text. Finally, click the Update button at the bottom of the form to update the email parameters.

      Setting up email

      Now, create a new user. Click on Administration, and then Users in the top navigation bar. You will see the list of users. Then click the Create user button in the top right corner of the screen. This will open the user configuration page.

      Creating a user

      Enter the new username in the Alias field and set up a new password. Next, add the user to the administrator's group. Type Zabbix administrators in the Groups field and select it from the proposed list.

      Once you've added the group, click the Media tab and click on the Add underlined link. You will see a pop-up window.

      Adding an email

      Enter your email address in the Send to field. You can leave the rest of the options at the default values. Click the Add button at the bottom to submit.

      Now navigate to the Permissions tab. Select Zabbix Super Admin from the User type drop-down menu.

      Finally, click the Add button at the bottom of the form to create the user.

      Now you need to enable notifications. Click on the Configuration tab, and then Actions in the top navigation bar. You will see a pre-configured action, which is responsible for sending notifications to all Zabbix administrators. You can review and change the settings by clicking on its name. For the purposes of this tutorial, use the default parameters. To enable the action, click on the red Disabled link in the Status column.

      Now you are ready to receive alerts. In the next step, you will generate one to test your notification setup.

      Step 8 — Generating a Test Alert

      In this step, you will generate a test alert to ensure everything is connected. By default, Zabbix keeps track of the amount of free disk space on your server. It automatically detects all disk mounts and adds the corresponding checks. This discovery is executed every hour, so you need to wait a while for the notification to be triggered.

      Create a temporary file that's large enough to trigger Zabbix's file system usage alert. To do this, log in to your second Ubuntu server if you're not already connected.

      • ssh sammy@second_ubuntu_server_ip_address

      Next, determine how much free space you have on the server. You can use the df command to find out:

      • df -h

      The df command reports the disk space usage of your file system, and the -h flag makes the output human-readable. You'll see output like the following:

      Output

      Filesystem      Size  Used Avail Use% Mounted on
      /dev/vda1        25G  1.2G   23G   5% /

      In this case, the free space is 23GB. Your free space may differ.

      Use the fallocate command, which allows you to pre-allocate or de-allocate space to a file, to create a file that takes up more than 80% of the available disk space. This will be enough to trigger the alert:

      • fallocate -l 20G /tmp/temp.img

      After around an hour, Zabbix will trigger an alert about the amount of free disk space and will run the action you configured, sending the notification message. You can check your inbox for the message from the Zabbix server. You will see a message like:

      Output

      Problem started at 10:37:54 on 2019.04.05
      Problem name: Free disk space is less than 20% on volume /
      Host: Second Ubuntu server
      Severity: Warning
      Original problem ID: 34

      You can also navigate to the Monitoring tab, and then Dashboard to see the notification and its details.

      Main dashboard

      Now that you know the alerts are working, delete the temporary file you created so you can reclaim your disk space:

      • rm -v /tmp/temp.img

      After a minute, Zabbix will send the recovery message and the alert will disappear from the main dashboard.

      Conclusion

      In this tutorial, you learned how to set up a simple and secure monitoring solution which will help you monitor the state of your servers. It can now warn you of problems, and you have the opportunity to analyze the processes occurring in your IT infrastructure.

      To learn more about setting up monitoring infrastructure, check out How To Install Elasticsearch, Logstash, and Kibana (Elastic Stack) on Ubuntu 18.04 and How To Gather Infrastructure Metrics with Metricbeat on Ubuntu 18.04.




      How To Install and Use ClickHouse on CentOS 7


      The author selected the Free and Open Source Fund to receive a donation as part of the Write for DOnations program.

      Introduction

      ClickHouse is an open-source, column-oriented analytics database created by Yandex for OLAP and big data use cases. ClickHouse’s support for real-time query processing makes it suitable for applications that require sub-second analytical results. ClickHouse’s query language is a dialect of SQL that enables powerful declarative querying capabilities while offering familiarity and a smaller learning curve for the end user.

      Column-oriented databases store records in blocks grouped by columns instead of rows. By not loading data for columns absent in the query, column-oriented databases spend less time reading data while completing queries. As a result, these databases can compute and return results much faster than traditional row-based systems for certain workloads, such as OLAP.
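      As a rough illustration, an analytical query like the following (written against a hypothetical visits table similar to the one you'll create later in this tutorial) only needs to read the url and duration columns, no matter how many other columns the table contains:

      SELECT url, avg(duration) AS avg_duration
      FROM visits
      GROUP BY url;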

      Online Analytical Processing (OLAP) systems allow for organizing large amounts of data and performing complex queries. They are capable of managing petabytes of data and returning query results quickly. In this way, OLAP is useful for work in areas like data science and business analytics.

      In this tutorial, you’ll install the ClickHouse database server and client on your machine. You’ll use the DBMS for typical tasks and optionally enable remote access from another server so that you’ll be able to connect to the database from another machine. Then you’ll test ClickHouse by modeling and querying example website-visit data.

      Prerequisites

      • One CentOS 7 server with a sudo enabled non-root user and firewall setup. You can follow the initial server setup tutorial to create the user and this tutorial to set up the firewall.
      • (Optional) A secondary CentOS 7 server with a sudo enabled non-root user and firewall setup. You can follow the initial server setup tutorial and the additional setup tutorial for the firewall.

      Step 1 — Installing ClickHouse

      In this section, you will install the ClickHouse server and client programs using yum.

      First, SSH into your server by running:

      • ssh sammy@your_server_ip

      Install the base dependencies by executing:

      • sudo yum install -y pygpgme yum-utils

      The pygpgme package is used for adding and verifying GPG signatures, while yum-utils allows easy management of source RPMs.

      Altinity, a ClickHouse consulting firm, maintains a YUM repository that has the latest version of ClickHouse. You'll add the repository's details in a new file so that yum can securely download validated ClickHouse packages. To check the package contents, you can inspect the sources from which they are built at this GitHub project.

      Create the repository details file by executing:

      • sudo vi /etc/yum.repos.d/altinity_clickhouse.repo

      Next, add the following contents to the file:

      /etc/yum.repos.d/altinity_clickhouse.repo

      [altinity_clickhouse]
      name=altinity_clickhouse
      baseurl=https://packagecloud.io/altinity/clickhouse/el/7/$basearch
      repo_gpgcheck=1
      gpgcheck=0
      enabled=1
      gpgkey=https://packagecloud.io/altinity/clickhouse/gpgkey
      sslverify=1
      sslcacert=/etc/pki/tls/certs/ca-bundle.crt
      metadata_expire=300
      
      [altinity_clickhouse-source]
      name=altinity_clickhouse-source
      baseurl=https://packagecloud.io/altinity/clickhouse/el/7/SRPMS
      repo_gpgcheck=1
      gpgcheck=0
      enabled=1
      gpgkey=https://packagecloud.io/altinity/clickhouse/gpgkey
      sslverify=1
      sslcacert=/etc/pki/tls/certs/ca-bundle.crt
      metadata_expire=300
      

      Now that you've added the repositories, enable them with the following command:

      • sudo yum -q makecache -y --disablerepo='*' --enablerepo='altinity_clickhouse'

      The -q flag tells the command to run in quiet mode. The makecache command makes available the packages specified in the --enablerepo flag.

      On execution, you'll see output similar to the following:

      Output

      Importing GPG key 0x0F6E36F6:
       Userid     : "https://packagecloud.io/altinity/clickhouse (https://packagecloud.io/docs#gpg_signing) <support@packagecloud.io>"
       Fingerprint: 7001 38a9 6a20 6b22 bf28 3c06 ed26 58f3 0f6e 36f6
       From       : https://packagecloud.io/altinity/clickhouse/gpgkey

      The output confirms it has successfully verified and added the GPG key.

      The clickhouse-server and clickhouse-client packages will now be available for installation. Install them with:

      • sudo yum install -y clickhouse-server clickhouse-client

      You've installed the ClickHouse server and client successfully. You're now ready to start the database service and ensure that it's running correctly.

      Step 2 — Starting the Service

      The clickhouse-server package that you installed in the previous section creates a systemd service, which performs actions such as starting, stopping, and restarting the database server. systemd is an init system for Linux to initialize and manage services. In this section you'll start the service and verify that it is running successfully.

      Start the clickhouse-server service by running:

      • sudo service clickhouse-server start

      You will see output similar to the following:

      Output

      Start clickhouse-server service: Path to data directory in /etc/clickhouse-server/config.xml: /var/lib/clickhouse/
      DONE

      To verify that the service is running successfully, execute:

      • sudo service clickhouse-server status

      It will print an output similar to the following which denotes that the server is running properly:

      Output

      clickhouse-server service is running

      You have successfully started the ClickHouse server and will now be able to use the clickhouse-client CLI program to connect to the server.

      Step 3 — Creating Databases and Tables

      In ClickHouse, you can create and delete databases by executing SQL statements directly in the interactive database prompt. Statements consist of commands following a particular syntax that tell the database server to perform a requested operation along with any data required. You create databases by using the CREATE DATABASE database_name syntax. To create a database, first start a client session by running the following command:

      • clickhouse-client --multiline

      This command will log you into the client prompt where you can run ClickHouse SQL statements to perform actions such as:

      • Creating, updating, and deleting databases, tables, indexes, partitions, and views.

      • Executing queries to retrieve data that is optionally filtered and grouped using various conditions.

      The --multiline flag tells the CLI to allow entering queries that span multiple lines.
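      If you only need to run a single statement, the client also accepts a --query option so you can execute it non-interactively; for example, the following (which simply prints the server version) exits as soon as the result is returned:

      • clickhouse-client --query "SELECT version()"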

      In this step, with the ClickHouse client ready for inserting data, you're going to create a database and table. For the purposes of this tutorial, you'll create a database named test, and inside that you'll create a table named visits that tracks website-visit durations.

      Now that you're inside the ClickHouse command prompt, create your test database by executing:

      • CREATE DATABASE test;

      You'll see the following output that shows that you have created the database:

      Output

      CREATE DATABASE test

      Ok.

      0 rows in set. Elapsed: 0.003 sec.

      A ClickHouse table is similar to tables in other relational databases; it holds a collection of related data in a structured format. You can specify columns along with their types, add rows of data, and execute different kinds of queries on tables.

      The syntax for creating tables in ClickHouse follows this example structure:

      CREATE TABLE table_name
      (
          column_name1 column_type [options],
          column_name2 column_type [options],
          ...
      ) ENGINE = engine
      

      The table_name and column_name values can be any valid ASCII identifiers. ClickHouse supports a wide range of column types; some of the most popular are:

      • UInt64: used for storing integer values in the range 0 to 18446744073709551615.

      • Float64: used for storing floating point numbers such as 2039.23, 10.5, etc.

      • String: used for storing variable-length character data. It does not require a maximum length attribute since it can store strings of arbitrary length.

      • Date: used for storing dates that follow the YYYY-MM-DD format.

      • DateTime: used for storing dates coupled with time, following the YYYY-MM-DD HH:MM:SS format.

      After the column definitions, you specify the engine used for the table. In ClickHouse, engines determine the physical structure of the underlying data, the table's querying capabilities, its concurrent access modes, and its support for indexes. Different engine types are suitable for different application requirements. The most commonly used and widely applicable engine type is MergeTree.

      Now that you have an overview of table creation, you'll create a table. Start by selecting the database you'll be modifying:

      • USE test;

      You will see the following output showing that you have switched to the test database from the default database:

      Output

      USE test Ok. 0 rows in set. Elapsed: 0.001 sec.

      The remainder of this guide will assume that you are executing statements within this database's context.
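
      If you prefer not to rely on the session's current database, ClickHouse also accepts fully qualified database.table names in most statements. For instance, the following lists the tables in test (currently none) regardless of which database is selected; this is shown only as an optional alternative:

      • SHOW TABLES FROM test;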

      Create your visits table by running this command:

      • CREATE TABLE visits (
      • id UInt64,
      • duration Float64,
      • url String,
      • created DateTime
      • ) ENGINE = MergeTree()
      • PRIMARY KEY id
      • ORDER BY id;

      Here's a breakdown of what the command does. You create a table named visits that has four columns:

      • id: The primary key column. As in other relational database systems, a primary key column in ClickHouse uniquely identifies a row; each row should have a unique value for this column.

      • duration: A float column used to store the duration of each visit in seconds. float columns can store decimal values such as 12.50.

      • url: A string column that stores the URL visited, such as http://example.com.

      • created: A date and time column that tracks when the visit occurred.

      After the column definitions, you specify MergeTree as the storage engine for the table. The MergeTree family of engines is recommended for production databases due to its optimized support for large real-time inserts, overall robustness, and query support. Additionally, MergeTree engines support sorting of rows by primary key, partitioning of rows, and replicating and sampling data.

      If you intend to use ClickHouse for archiving data that is not queried often or for storing temporary data, you can use the Log family of engines to optimize for that use-case.
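
      As an illustration only, a throwaway table backed by a Log-family engine could look like the following sketch; TinyLog is the simplest member of that family, and the table name here is just an example:

      CREATE TABLE visits_scratch
      (
          id UInt64,
          url String
      ) ENGINE = TinyLog;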

      After the column definitions, you'll define other table-level options. The PRIMARY KEY clause sets id as the primary key column and the ORDER BY clause will store values sorted by the id column. A primary key uniquely identifies a row and is used for efficiently accessing a single row and efficient colocation of rows.

      On executing the create statement, you will see the following output:

      Output

      CREATE TABLE visits ( id UInt64, duration Float64, url String, created DateTime ) ENGINE = MergeTree() PRIMARY KEY id ORDER BY id Ok. 0 rows in set. Elapsed: 0.010 sec.

      In this section, you've created a database and a table to track website-visits data. In the next step, you'll insert data into the table, update existing data, and delete that data.

      Step 4 — Inserting, Updating, and Deleting Data and Columns

      In this step, you'll use your visits table to insert, update, and delete data. The following command is an example of the syntax for inserting rows into a ClickHouse table:

      INSERT INTO table_name VALUES (column_1_value, column_2_value, ....);
      

      Now, insert a few rows of example website-visit data into your visits table by running each of the following statements:

      • INSERT INTO visits VALUES (1, 10.5, 'http://example.com', '2019-01-01 00:01:01');
      • INSERT INTO visits VALUES (2, 40.2, 'http://example1.com', '2019-01-03 10:01:01');
      • INSERT INTO visits VALUES (3, 13, 'http://example2.com', '2019-01-03 12:01:01');
      • INSERT INTO visits VALUES (4, 2, 'http://example3.com', '2019-01-04 02:01:01');

      You'll see the following output repeated for each insert statement:

      Output

      INSERT INTO visits VALUES Ok. 1 rows in set. Elapsed: 0.004 sec.

      The output for each row shows that you've inserted it successfully into the visits table.
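
      If you'd like a quick look at the rows you just inserted before moving on (querying is covered in more detail in Step 5), you can run:

      • SELECT * FROM visits;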

      Now you'll add an additional column to the visits table. For adding or deleting columns in existing tables, ClickHouse supports the ALTER syntax.

      For example, the basic syntax for adding a column to a table is as follows:

      ALTER TABLE table_name ADD COLUMN column_name column_type;
      

      Add a column named location that will store the location of the visits to a website by running the following statement:

      • ALTER TABLE visits ADD COLUMN location String;

      You'll see output similar to the following:

      Output

      ALTER TABLE visits ADD COLUMN location String Ok. 0 rows in set. Elapsed: 0.014 sec.

      The output shows that you have added the location column successfully.
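
      If you want to double-check that the table definition now includes the new column, you can inspect the table's columns with:

      • DESCRIBE TABLE visits;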

      As of version 19.4.3, ClickHouse doesn't support updating and deleting individual rows of data due to implementation constraints. ClickHouse has support for bulk updates and deletes, however, and has a distinct SQL syntax for these operations to highlight their non-standard usage.

      The following syntax is an example for bulk updating rows:

      ALTER TABLE table_name UPDATE column_1 = value_1, column_2 = value_2 ... WHERE filter_conditions;
      

      You'll run the following statement to update the url column of all rows that have a duration of less than 15. Enter it into the database prompt to execute:

      • ALTER TABLE visits UPDATE url = 'http://example2.com' WHERE duration < 15;

      The output of the bulk update statement will be as follows:

      Output

      ALTER TABLE visits UPDATE url = 'http://example2.com' WHERE duration < 15 Ok. 0 rows in set. Elapsed: 0.003 sec.

      The output shows that your update query completed successfully. The 0 rows in set in the output denotes that the query did not return any rows; this will be the case for any update and delete queries.
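
      These bulk updates are applied as asynchronous background mutations, so the change may take a brief moment to become visible. If you want to spot-check the result, a simple query such as the following (a sketch) shows the affected rows:

      • SELECT id, url, duration FROM visits WHERE duration < 15;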

      The example syntax for bulk deleting rows is similar to updating rows and has the following structure:

      ALTER TABLE table_name DELETE WHERE filter_conditions;
      

      To test deleting data, run the following statement to remove all rows that have a duration of less than 5:

      • ALTER TABLE visits DELETE WHERE duration < 5;

      The output of the bulk delete statement will be similar to:

      Output

      ALTER TABLE visits DELETE WHERE duration < 5 Ok. 0 rows in set. Elapsed: 0.003 sec.

      The output confirms that you have deleted the rows with a duration of less than five seconds.
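
      Like bulk updates, bulk deletes run as background mutations, so the affected rows may linger briefly. You can confirm how many rows remain with a quick count:

      • SELECT count() FROM visits;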

      To delete columns from your table, the syntax would follow this example structure:

      ALTER TABLE table_name DROP COLUMN column_name;
      

      Delete the location column you added previously by running the following:

      • ALTER TABLE visits DROP COLUMN location;

      The DROP COLUMN output confirming that you have deleted the column will be as follows:

      Output

      ALTER TABLE visits DROP COLUMN location Ok. 0 rows in set. Elapsed: 0.010 sec.

      Now that you've successfully inserted, updated, and deleted rows and columns in your visits table, you'll move on to query data in the next step.

      Step 5 — Querying Data

      ClickHouse's query language is a custom dialect of SQL with extensions and functions suited for analytics workloads. In this step, you'll run selection and aggregation queries to retrieve data and results from your visits table.

      Selection queries allow you to retrieve rows and columns of data filtered by conditions that you specify, along with options such as the number of rows to return. You can select rows and columns of data using the SELECT syntax. The basic syntax for SELECT queries is:

      SELECT func_1(column_1), func_2(column_2) FROM table_name WHERE filter_conditions row_options;
      

      Execute the following statement to retrieve the url and duration values for rows where the url is http://example2.com:

      • SELECT url, duration FROM visits WHERE url = 'http://example2.com' LIMIT 2;

      You will see the following output:

      Output

      SELECT url, duration FROM visits WHERE url = 'http://example2.com' LIMIT 2

      ┌─url─────────────────┬─duration─┐
      │ http://example2.com │     10.5 │
      └─────────────────────┴──────────┘
      ┌─url─────────────────┬─duration─┐
      │ http://example2.com │       13 │
      └─────────────────────┴──────────┘

      2 rows in set. Elapsed: 0.013 sec.

      The output has returned two rows that match the conditions you specified. Now that you've selected values, you can move to executing aggregation queries.

      Aggregation queries are queries that operate on a set of values and return single output values. In analytics databases, these queries are run frequently and are well optimized by the database. Some aggregate functions supported by ClickHouse are:

      • count: returns the count of rows matching the conditions specified.

      • sum: returns the sum of selected column values.

      • avg: returns the average of selected column values.

      Some ClickHouse-specific aggregate functions include:

      • uniq: returns an approximate number of distinct rows matched.

      • topK: returns an array of the most frequent values of a specific column using an approximation algorithm.
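
      For example, several of these aggregate functions can be combined in a single statement; a quick sketch over the visits table you created earlier:

      • SELECT count(), avg(duration), uniq(url) FROM visits;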

      To demonstrate the execution of aggregation queries, you'll calculate the total duration of visits by running the sum query:

      • SELECT SUM(duration) FROM visits;

      You will see output similar to the following:

      Output

      SELECT SUM(duration) FROM visits

      ┌─SUM(duration)─┐
      │          63.7 │
      └───────────────┘

      1 rows in set. Elapsed: 0.010 sec.

      Now, calculate the top two URLs by executing:

      • SELECT topK(2)(url) FROM visits;

      You will see output similar to the following:

      Output

      SELECT topK(2)(url) FROM visits

      ┌─topK(2)(url)──────────────────────────────────┐
      │ ['http://example2.com','http://example1.com'] │
      └───────────────────────────────────────────────┘

      1 rows in set. Elapsed: 0.010 sec.

      Now that you have successfully queried your visits table, you'll delete tables and databases in the next step.

      Step 6 — Deleting Tables and Databases

      In this section, you'll delete your visits table and test database.

      The syntax for deleting tables follows this example:

      DROP TABLE table_name;
      

      To delete the visits table, run the following statement:

      • DROP TABLE visits;

      You will see the following output declaring that you've deleted the table successfully:

      Output

      DROP TABLE visits Ok. 0 rows in set. Elapsed: 0.005 sec.

      You can delete databases using the DROP DATABASE database_name syntax. To delete the test database, execute the following statement:

      • DROP DATABASE test;

      The resulting output shows that you've deleted the database successfully:

      Output

      DROP DATABASE test Ok. 0 rows in set. Elapsed: 0.003 sec.

      You've deleted tables and databases in this step. Now that you've created, updated, and deleted databases, tables, and data in your ClickHouse instance, you'll enable remote access to your database server in the next section.

      Step 7 — Setting Up Firewall Rules (Optional)

      If you intend to only use ClickHouse locally with applications running on the same server, or if you do not have a firewall enabled on your server, you don't need to complete this section. If, however, you'll be connecting to the ClickHouse database server remotely, you should follow this step.

      Currently, your server has a firewall enabled that blocks external access to all ports via your public IP address. You'll complete the following two steps to allow remote access:

      • Modify the ClickHouse server configuration so that it listens on all network interfaces rather than only on localhost.

      • Add firewall rules allowing incoming connections to port 8123, the HTTP port the ClickHouse server listens on, and port 9000, the port used by clickhouse-client.

      If you are inside the database prompt, exit it by typing CTRL+D.

      Edit the configuration file by executing:

      • sudo vi /etc/clickhouse-server/config.xml

      Then uncomment the line containing <listen_host>0.0.0.0</listen_host> by removing the surrounding <!-- and --> markers, so that this part of the file looks like the following:

      /etc/clickhouse-server/config.xml

      
      ...
       <interserver_http_host>example.yandex.ru</interserver_http_host>
          -->
      
          <!-- Listen specified host. use :: (wildcard IPv6 address), if you want to accept connections both with IPv4 and IPv6 from everywhere. -->
          <!-- <listen_host>::</listen_host> -->
          <!-- Same for hosts with disabled ipv6: -->
          <listen_host>0.0.0.0</listen_host>
      
          <!-- Default values - try listen localhost on ipv4 and ipv6: -->
          <!--
          <listen_host>::1</listen_host>
          <listen_host>127.0.0.1</listen_host>
          -->
      ...
      
      

      Save the file and exit vi. For the new configuration to take effect, restart the service by running:

      • sudo service clickhouse-server restart

      You will see the following output from this command:

      Output

      Stop clickhouse-server service: DONE Start clickhouse-server service: Path to data directory in /etc/clickhouse-server/config.xml: /var/lib/clickhouse/ DONE
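
      Optionally, you can confirm that the server is now listening on all interfaces rather than only on localhost. A quick check, assuming the ss utility is available on your server:

      • sudo ss -tlnp | grep clickhouse

      The listening addresses for ports 8123 and 9000 should now show 0.0.0.0 rather than 127.0.0.1.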

      Next, add the remote server's IP address to the firewall zone called public:

      • sudo firewall-cmd --permanent --zone=public --add-source=second_server_ip/32

      The ClickHouse server listens on port 8123 for HTTP connections and on port 9000 for connections from clickhouse-client. Allow access to both ports for your second server's IP address with the following commands:

      • sudo firewall-cmd --permanent --zone=public --add-port=8123/tcp
      • sudo firewall-cmd --permanent --zone=public --add-port=9000/tcp

      You will see the following output for both commands that shows that you've enabled access to both ports:

      Output

      success

      Now that you have added the rules, reload the firewall for the changes to take effect:

      • sudo firewall-cmd --reload

      This command will output a success message as well. ClickHouse will now be accessible from the IP that you added. Feel free to add additional IPs such as your local machine's address if required.
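
      As an optional quick check before installing the client on the remote machine, you can probe the HTTP port directly from an allowed IP; ClickHouse's HTTP interface responds to a plain GET on port 8123 (replace your_server_ip with your server's address):

      • curl http://your_server_ip:8123

      If the firewall rules are working, the server should reply with Ok.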

      To verify that you can connect to the ClickHouse server from the remote machine, first follow the steps in Step 1 of this tutorial on the second server and ensure that you have the clickhouse-client installed on it.

      Now that you have logged into the second server, start a client session by executing:

      • clickhouse-client --host your_server_ip --multiline

      You will see the following output that shows that you have connected successfully to the server:

      Output

      ClickHouse client version 19.4.3. Connecting to your_server_ip:9000 as user default. Connected to ClickHouse server version 19.4.3 revision 54416. hostname 🙂

      In this step, you've enabled remote access to your ClickHouse database server by adjusting your firewall rules.

      Conclusion

      You have successfully set up a ClickHouse database instance on your server and created a database and table, added data, performed queries, and deleted the database. Within ClickHouse's documentation you can read about their benchmarks against other open-source and commercial analytics databases, as well as general reference documents. Further features ClickHouse offers include distributed query processing across multiple servers to improve performance and to protect against data loss by storing data over different shards.


