
      How to Install and Secure phpMyAdmin with Nginx on an Ubuntu 18.04 server


      Introduction

      While many users need the functionality of a database management system like MySQL, interacting with the system solely through the MySQL command-line client requires familiarity with SQL, so it may not be the preferred interface for everyone.

      phpMyAdmin was created so that users can interact with MySQL through an intuitive web interface, running alongside a PHP development environment. In this guide, we’ll discuss how to install phpMyAdmin on top of an Nginx server, and how to configure the server for increased security.

      Note: There are important security considerations when using software like phpMyAdmin: it runs on the database server, it deals with database credentials, and it enables a user to easily execute arbitrary SQL queries against your database. Because phpMyAdmin is a widely deployed PHP application, it is frequently targeted for attack. We will go over some security measures you can take in this tutorial so that you can make informed decisions.

      Prerequisites

      Before you get started with this guide, you’ll need an Ubuntu 18.04 server with a LEMP stack (Linux, Nginx, MySQL, and PHP) already installed and configured, since phpMyAdmin runs on top of these components. In addition:

      Because phpMyAdmin handles authentication using MySQL credentials, it is strongly advisable to install an SSL/TLS certificate to enable encrypted traffic between server and client. If you do not have an existing domain configured with a valid certificate, you can follow this guide on securing Nginx with Let’s Encrypt on Ubuntu 18.04.

      Warning: If you don’t have an SSL/TLS certificate installed on the server and you still want to proceed, please consider enforcing access via SSH Tunnels as explained in Step 5 of this guide.

      Once you have met these prerequisites, you can go ahead with the rest of the guide.

      Step 1 — Installing phpMyAdmin

      The first thing we need to do is install phpMyAdmin on the LEMP server. We’re going to use the default Ubuntu repositories to achieve this goal.

      Let’s start by updating the server’s package index with:

      • sudo apt update

      Now you can install phpMyAdmin with:

      • sudo apt install phpmyadmin

      During the installation process, you will be prompted to choose a web server (either Apache or Lighttpd) to configure. Because we are using Nginx as our web server, we shouldn't make a choice here. Press TAB and then OK to advance to the next step.

      Next, you’ll be prompted whether to use dbconfig-common for configuring the application database. Select Yes. This will set up the internal database and administrative user for phpMyAdmin. You will be asked to define a new password for the phpmyadmin MySQL user. You can also leave it blank and let phpMyAdmin randomly create a password.

      The installation will now finish. For the Nginx web server to find and serve the phpMyAdmin files correctly, we’ll need to create a symbolic link from the installation files to Nginx's document root directory:

      • sudo ln -s /usr/share/phpmyadmin /var/www/html

      Your phpMyAdmin installation is now operational. To access the interface, go to your server's domain name or public IP address followed by /phpmyadmin in your web browser:

      https://server_domain_or_IP/phpmyadmin
      

      phpMyAdmin login screen

      As mentioned before, phpMyAdmin handles authentication using MySQL credentials, which means you should use the same username and password you would normally use to connect to the database via console or via an API. If you need help creating MySQL users, check this guide on How To Manage an SQL Database.
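
      If you don't have a suitable account yet, the following is a minimal sketch of creating one from the MySQL shell; sammy, password, and example_database are placeholders you should replace with your own values, and depending on how your MySQL root account authenticates you may need to run mysql -u root -p instead of sudo mysql:

      • sudo mysql
      • CREATE USER 'sammy'@'localhost' IDENTIFIED BY 'password';
      • GRANT ALL PRIVILEGES ON example_database.* TO 'sammy'@'localhost';
      • FLUSH PRIVILEGES;
      • exit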

      Note: Logging into phpMyAdmin as the root MySQL user is discouraged because it represents a significant security risk. We'll see how to disable root login in a subsequent step of this guide.

      Your phpMyAdmin installation should be completely functional at this point. However, by installing a web interface, we've exposed our MySQL database server to the outside world. Because of phpMyAdmin's popularity, and the large amounts of data it may provide access to, installations like these are common targets for attacks. In the following sections of this guide, we'll see a few different ways in which we can make our phpMyAdmin installation more secure.

      Step 2 — Changing phpMyAdmin's Default Location

      One of the most basic ways to protect your phpMyAdmin installation is by making it harder to find. Bots will scan for common paths, like phpmyadmin, pma, admin, mysql and such. Changing the interface's URL from /phpmyadmin to something non-standard will make it much harder for automated scripts to find your phpMyAdmin installation and attempt brute-force attacks.

      With our phpMyAdmin installation, we've created a symbolic link pointing to /usr/share/phpmyadmin, where the actual application files are located. To change phpMyAdmin's interface URL, we will rename this symbolic link.

      First, let's navigate to the Nginx document root directory and list the files it contains to get a better sense of the change we'll make:

      • cd /var/www/html/
      • ls -l

      You’ll receive the following output:

      Output

      total 8
      -rw-r--r-- 1 root root 612 Apr 8 13:30 index.nginx-debian.html
      lrwxrwxrwx 1 root root  21 Apr 8 15:36 phpmyadmin -> /usr/share/phpmyadmin

      The output shows that we have a symbolic link called phpmyadmin in this directory. We can change this link name to whatever we'd like. This will in turn change phpMyAdmin's access URL, which can help obscure the endpoint from bots hardcoded to search common endpoint names.

      Choose a name that obscures the purpose of the endpoint. In this guide, we'll name our endpoint /nothingtosee, but you should choose an alternate name. To accomplish this, we'll rename the link:

      • sudo mv phpmyadmin nothingtosee
      • ls -l

      After running the above commands, you’ll receive this output:

      Output

      total 8
      -rw-r--r-- 1 root root 612 Apr 8 13:30 index.nginx-debian.html
      lrwxrwxrwx 1 root root  21 Apr 8 15:36 nothingtosee -> /usr/share/phpmyadmin

      Now, if you go to the old URL, you'll get a 404 error:

      https://server_domain_or_IP/phpmyadmin
      

      phpMyAdmin 404 error

      Your phpMyAdmin interface will now be available at the new URL we just configured:

      https://server_domain_or_IP/nothingtosee
      

      phpMyAdmin login screen

      By obfuscating phpMyAdmin's real location on the server, you make its interface a much harder target for automated scans and casual brute-force attempts.

      Step 3 — Disabling Root Login

      On MySQL as well as within regular Linux systems, the root account is a special administrative account with unrestricted access to the system. In addition to being a privileged account, it's a known login name, which makes it an obvious target for brute-force attacks. To minimize risks, we'll configure phpMyAdmin to deny any login attempts coming from the user root. This way, even if you provide valid credentials for the user root, you'll still get an "access denied" error and won't be allowed to log in.

      Because we chose to use dbconfig-common to configure and store phpMyAdmin settings, the default configuration is currently stored in the database. We'll need to create a new config.inc.php file to define our custom settings.

      Even though the PHP files for phpMyAdmin are located inside /usr/share/phpmyadmin, the application uses configuration files located at /etc/phpmyadmin. We will create a new custom settings file inside /etc/phpmyadmin/conf.d, and name it pma_secure.php:

      • sudo nano /etc/phpmyadmin/conf.d/pma_secure.php

      The following configuration file contains the necessary settings to disable passwordless logins (AllowNoPassword set to false) and root login (AllowRoot set to false):

      /etc/phpmyadmin/conf.d/pma_secure.php

      <?php
      
      # PhpMyAdmin Settings
      # This should be set to a random string of at least 32 chars
      $cfg['blowfish_secret'] = '3!#32@3sa(+=_4?),5XP_:U%%834sdfSdg43yH#{o';
      
      $i=0;
      $i++;
      
      $cfg['Servers'][$i]['auth_type'] = 'cookie';
      $cfg['Servers'][$i]['AllowNoPassword'] = false;
      $cfg['Servers'][$i]['AllowRoot'] = false;
      
      ?>
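
      The blowfish_secret value above is only an example. You can generate a random string of your own to paste in its place; one quick way, shown here as an optional sketch, is with openssl:

      • openssl rand -base64 32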
      

      Save the file when you're done editing by pressing CTRL + X then y to confirm changes and ENTER. The changes will apply automatically. If you reload the login page now and try to log in as root, you will get an Access Denied error:

      access denied

      Root login is now prohibited on your phpMyAdmin installation. This security measure will block brute-force scripts from trying to guess the root database password on your server. Moreover, it will enforce the usage of less-privileged MySQL accounts for accessing phpMyAdmin's web interface, which by itself is an important security practice.

      Step 4 — Creating an Authentication Gateway

      Hiding your phpMyAdmin installation in an unusual location might sidestep some automated bots scanning the web, but it's useless against targeted attacks. To better protect a web application with restricted access, it's generally more effective to stop attackers before they can even reach the application. This way, they'll be unable to use generic exploits and brute-force attacks to guess access credentials.

      In the specific case of phpMyAdmin, it's even more important to keep the login interface locked away. By keeping it open to the world, you're offering a brute-force platform for attackers to guess your database credentials.

      Adding an extra layer of authentication to your phpMyAdmin installation enables you to increase security. Users will be required to pass through an HTTP authentication prompt before ever seeing the phpMyAdmin login screen. Most web servers, including Nginx, provide this capability natively.

      To set this up, we first need to create a password file to store the authentication credentials. Nginx requires that passwords be encrypted using the crypt() function. The OpenSSL suite, which should already be installed on your server, includes this functionality.

      To create an encrypted password, type:

      • openssl passwd

      You will be prompted to enter and confirm the password that you wish to use. The utility will then display an encrypted version of the password that will look something like this:

      Output

      O5az.RSPzd.HE

      Copy this value, as you will need to paste it into the authentication file we'll be creating.

      Now, create an authentication file. We'll call this file pma_pass and place it in the Nginx configuration directory:

      • sudo nano /etc/nginx/pma_pass

      In this file, you’ll specify the username you would like to use, followed by a colon (:), followed by the encrypted version of the password you received from the openssl passwd utility.

      We are going to name our user sammy, but you should choose a different username. The file should look like this:

      /etc/nginx/pma_pass

      sammy:O5az.RSPzd.HE
      

      Save and close the file when you're done.
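
      If more than one person needs access, you can keep adding entries to this same file, one username:password_hash pair per line, with each hash generated by openssl passwd as before. For example (the second hash here is just a placeholder, not a real hash):

      sammy:O5az.RSPzd.HE
      another_user:encrypted_password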

      Now we're ready to modify the Nginx configuration file. For this guide, we'll use the configuration file located at /etc/nginx/sites-available/example.com. You should use the relevant Nginx configuration file for the web location where phpMyAdmin is currently hosted. Open this file in your text editor to get started:

      • sudo nano /etc/nginx/sites-available/example.com

      Locate the server block, and the location / section within it. We need to create a new location section within this block to match phpMyAdmin's current path on the server. In this guide, phpMyAdmin's location relative to the web root is /nothingtosee:

      /etc/nginx/sites-available/example.com

      server {
          . . .
      
              location / {
                      try_files $uri $uri/ =404;
              }
      
              location /nothingtosee {
                      # Settings for phpMyAdmin will go here
              }
      
          . . .
      }
      

      Within this block, we'll need to set up two different directives: auth_basic, which defines the message that will be displayed on the authentication prompt, and auth_basic_user_file, pointing to the file we just created. This is how your configuration file should look when you're finished:

      /etc/nginx/sites-available/example.com

      server {
          . . .
      
              location /nothingtosee {
                      auth_basic "Admin Login";
                      auth_basic_user_file /etc/nginx/pma_pass;
              }
      
      
          . . .
      }
      

      Save and close the file when you're done. To check if the configuration file is valid, you can run:

      • sudo nginx -t

      The following output is expected:

      Output

      nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
      nginx: configuration file /etc/nginx/nginx.conf test is successful

      To activate the new authentication gate, you must reload the web server:

      • sudo systemctl reload nginx

      Now, if you visit the phpMyAdmin URL in your web browser, you should be prompted for the username and password you added to the pma_pass file:

      https://server_domain_or_IP/nothingtosee
      

      Nginx authentication page

      Once you enter your credentials, you'll be taken to the standard phpMyAdmin login page.

      Note: If refreshing the page does not work, you may have to clear your cache or use a different browser session if you've already been using phpMyAdmin.

      In addition to providing an extra layer of security, this gateway will help keep your MySQL logs clean of spammy authentication attempts.
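
      As a quick sanity check from the command line, an unauthenticated request to the protected path should now come back with a 401 status. A sketch of such a check, assuming you kept the /nothingtosee path and have HTTPS configured:

      • curl -I https://server_domain_or_IP/nothingtosee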

      Step 5 — Setting Up Access via Encrypted Tunnels (Optional)

      For increased security, it is possible to lock down your phpMyAdmin installation to authorized hosts only. You can whitelist authorized hosts in your Nginx configuration file, so that any request coming from an IP address that is not on the list will be denied.

      Even though this feature alone can be enough in some use cases, it's not always the best long-term solution, mainly due to the fact that most people don't access the Internet from static IP addresses. As soon as you get a new IP address from your Internet provider, you'll be unable to get to the phpMyAdmin interface until you update the Nginx configuration file with your new IP address.

      For a more robust long-term solution, you can use IP-based access control to create a setup in which users will only have access to your phpMyAdmin interface if they're accessing from either an authorized IP address or localhost via SSH tunneling. We'll see how to set this up in the sections below.

      Combining IP-based access control with SSH tunneling greatly increases security because it fully blocks access coming from the public internet (except for authorized IPs), in addition to providing a secure channel between user and server through the use of encrypted tunnels.

      Setting Up IP-Based Access Control on Nginx

      On Nginx, IP-based access control can be defined in the corresponding location block of a given site, using the directives allow and deny. For instance, if we want to only allow requests coming from a given host, we should include the following two lines, in this order, inside the relevant location block for the site we would like to protect:

      allow hostname_or_IP;
      deny all;
      

      You can allow as many hosts as you want; you only need to include one allow line for each authorized host/IP inside the respective location block for the site you're protecting. The directives will be evaluated in the same order as they are listed, until a match is found or the request is finally denied due to the deny all directive.

      We'll now configure Nginx to only allow requests coming from localhost or your current IP address. First, you'll need to know the current public IP address your local machine is using to connect to the Internet. There are various ways to obtain this information; for simplicity, we're going to use the service provided by ipinfo.io. You can either open the URL https://ipinfo.io/ip in your browser, or run the following command from your local machine:

      • curl https://ipinfo.io/ip

      You should get a simple IP address as output, like this:

      Output

      203.0.113.111

      That is your current public IP address. We'll configure phpMyAdmin's location block to only allow requests coming from that IP, in addition to localhost. We'll need to edit once again the configuration block for phpMyAdmin inside /etc/nginx/sites-available/example.com.

      Open the Nginx configuration file using your command-line editor of choice:

      • sudo nano /etc/nginx/sites-available/example.com

      Because we already have an access rule within our current configuration, we need to combine it with IP-based access control using the directive satisfy all. This way, we can keep the current HTTP authentication prompt for increased security.

      This is how your phpMyAdmin Nginx configuration should look after you're done editing:

      /etc/nginx/sites-available/example.com

      server {
          . . .
      
          location /nothingtosee {
              satisfy all; #requires both conditions
      
              allow 203.0.113.111; #allow your IP
              allow 127.0.0.1; #allow localhost via SSH tunnels
              deny all; #deny all other sources
      
              auth_basic "Admin Login";
              auth_basic_user_file /etc/nginx/pma_pass;
          }
      
          . . .
      }
      

      Remember to replace nothingtosee with the actual path where phpMyAdmin can be found, and 203.0.113.111 with your current public IP address.

      Save and close the file when you're done. To check if the configuration file is valid, you can run:

      • sudo nginx -t

      The following output is expected:

      Output

      nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
      nginx: configuration file /etc/nginx/nginx.conf test is successful

      Now reload the web server so the changes take effect:

      • sudo systemctl reload nginx

      Because your IP address is explicitly listed as an authorized host, your access shouldn't be disturbed. Anyone else trying to access your phpMyAdmin installation will now get a 403 error (Forbidden):

      https://server_domain_or_IP/nothingtosee
      

      403 error

      In the next section, we'll see how to use SSH tunneling to access the web server through local requests. This way, you'll still be able to access phpMyAdmin's interface even when your IP address changes.

      Accessing phpMyAdmin Through an Encrypted Tunnel

      SSH tunneling works as a way of redirecting network traffic through encrypted channels. By running an ssh command similar to what you would use to log into a server, you can create a secure "tunnel" between your local machine and that server. All traffic coming in on a given local port can now be redirected through the encrypted tunnel and use the remote server as a proxy, before reaching out to the internet. It's similar to what happens when you use a VPN (Virtual Private Network), however SSH tunneling is much simpler to set up.

      We'll use SSH tunneling to proxy our requests to the remote web server running phpMyAdmin. By creating a tunnel between your local machine and the server where phpMyAdmin is installed, you can redirect local requests to the remote web server and, more importantly, traffic will be encrypted and requests will reach Nginx as if they're coming from localhost. This way, no matter what IP address you're connecting from, you'll be able to securely access phpMyAdmin's interface.

      Because the traffic between your local machine and the remote web server will be encrypted, this is a safe alternative for situations where you can't have an SSL/TLS certificate installed on the web server running phpMyAdmin.

      From your local machine, run this command whenever you need access to phpMyAdmin:

      • ssh user@server_domain_or_IP -L 8000:localhost:80 -L 8443:localhost:443 -N

      Let's examine each part of the command:

      • user: SSH user to connect to the server where phpMyAdmin is running
      • server_domain_or_IP: SSH host where phpMyAdmin is running
      • -L 8000:localhost:80 redirects HTTP traffic on port 8000
      • -L 8443:localhost:443 redirects HTTPS traffic on port 8443
      • -N: do not execute remote commands

      Note: This command will block the terminal until interrupted with CTRL+C, which will end the SSH connection and stop the port redirection. If you'd prefer to run this command in background mode, you can use the SSH option -f.
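
      As a sketch, the backgrounded variant of the same tunnel adds -f to the command you run from your local machine, and you can later stop it by matching the forwarding rule with pkill:

      • ssh user@server_domain_or_IP -L 8000:localhost:80 -L 8443:localhost:443 -N -f
      • pkill -f "8000:localhost:80"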

      Now, go to your browser and replace server_domain_or_IP with localhost:PORT, where PORT is either 8000 for HTTP or 8443 for HTTPS:

      http://localhost:8000/nothingtosee
      
      https://localhost:8443/nothingtosee
      

      phpMyAdmin login screen

      Note: If you're accessing phpMyAdmin via https, you might get an alert message questioning the security of the SSL certificate. This happens because the domain name you're using (localhost) doesn't match the address registered within the certificate (domain where phpMyAdmin is actually being served). It is safe to proceed.

      All requests on localhost:8000 (HTTP) and localhost:8443 (HTTPS) are now being redirected through a secure tunnel to your remote phpMyAdmin application. Not only have you increased security by disabling public access to your phpMyAdmin, you also protected all traffic between your local computer and the remote server by using an encrypted tunnel to send and receive data.

      If you'd like to enforce the usage of SSH tunneling for anyone who wants access to your phpMyAdmin interface (including you), you can do that by removing any other authorized IPs from the Nginx configuration file, leaving 127.0.0.1 as the only host allowed to access that location. Considering nobody will be able to make direct requests to phpMyAdmin, it is safe to remove HTTP authentication in order to simplify your setup. This is how your configuration file would look in such a scenario:

      /etc/nginx/sites-available/example.com

      server {
          . . .
      
          location /nothingtosee { 
              allow 127.0.0.1; #allow localhost only
              deny all; #deny all other sources
          }
      
          . . .
      }
      

      Once you reload Nginx's configuration with sudo systemctl reload nginx, your phpMyAdmin installation will be locked down and users will be required to use SSH tunnels in order to access phpMyAdmin's interface via redirected requests.

      Conclusion

      In this tutorial, we saw how to install phpMyAdmin on Ubuntu 18.04 running Nginx as the web server. We also covered advanced methods to secure a phpMyAdmin installation on Ubuntu, such as disabling root login, creating an extra layer of authentication, and using SSH tunneling to access a phpMyAdmin installation via local requests only.

      After completing this tutorial, you should be able to manage your MySQL databases from a reasonably secure web interface. This user interface exposes most of the functionality available via the MySQL command line. You can browse databases and schema, execute queries, and create new data sets and structures.




      How To Build and Deploy a GraphQL Server with Node.js and MongoDB on Ubuntu 18.04


      The author selected the Wikimedia Foundation to receive a donation as part of the Write for DOnations program.

      Introduction

      GraphQL was publicly released by Facebook in 2015 as a query language for APIs that makes it easy to query and mutate data from different data collections. From a single endpoint, you can query and mutate multiple data sources with a single POST request. GraphQL solves some of the common design flaws in REST API architectures, such as situations where the endpoint returns more information than you actually need. Also, when using REST APIs, you might need to send requests to multiple endpoints to collect all the information you require, a situation known as the n+1 problem. An example of this would be when you want to show a user's information but need to collect data such as personal details and addresses from different endpoints.

      These problems don’t apply to GraphQL as it has only one endpoint, which can return data from multiple collections. The data it returns depends on the query that you send to this endpoint. In this query you define the structure of the data you want to receive, including any nested data collections. In addition to a query, you can also use a mutation to change data on a GraphQL server, and a subscription to watch for changes in the data. For more information about GraphQL and its concepts, you can visit the documentation on the official website.

      As GraphQL is a query language with a lot of flexibility, it combines especially well with document-based databases like MongoDB. Both technologies are based on hierarchical, typed schemas and are popular within the JavaScript community. Also, MongoDB’s data is stored as JSON objects, so no additional parsing is necessary on the GraphQL server.

      In this tutorial, you’ll build and deploy a GraphQL server with Node.js that can query and mutate data from a MongoDB database that is running on Ubuntu 18.04. At the end of this tutorial, you’ll be able to access data in your database by using a single endpoint, both by sending requests to the server directly through the terminal and by using the pre-made GraphiQL playground interface. With this playground you can explore the contents of the GraphQL server by sending queries, mutations, and subscriptions. Also, you can find visual representations of the schemas that are defined for this server.

      At the end of this tutorial, you’ll use the GraphiQL playground to quickly interface with your GraphQL server:

      The GraphiQL playground in action

      Prerequisites

      Before you begin this guide you’ll need the following:

      Step 1 — Setting Up the MongoDB Database

      Before creating the GraphQL server, make sure your database is configured correctly, has authentication enabled, and is filled with sample data. For this you need to connect to the Ubuntu 18.04 server running the MongoDB database from your command prompt. All steps in this tutorial will take place on this server.

      After you’ve established the connection, run the following command to check if MongoDB is active and running on your server:

      • sudo systemctl status mongodb

      You’ll see the following output in your terminal, indicating the MongoDB database is actively running:

      Output

      ● mongodb.service - An object/document-oriented database
         Loaded: loaded (/lib/systemd/system/mongodb.service; enabled; vendor preset: enabled)
         Active: active (running) since Sat 2019-02-23 12:23:03 UTC; 1 months 13 days ago
           Docs: man:mongod(1)
       Main PID: 2388 (mongod)
          Tasks: 25 (limit: 1152)
         CGroup: /system.slice/mongodb.service
                 └─2388 /usr/bin/mongod --unixSocketPrefix=/run/mongodb --config /etc/mongodb.conf

      Before creating the database where you'll store the sample data, you need to create an admin user first, since regular users are scoped to a specific database. You can do this by executing the following command, which opens the MongoDB shell:

      • mongo

      With the MongoDB shell you'll get direct access to the MongoDB database and can create users or databases and query data. Inside this shell, execute the following command that will add a new admin user to MongoDB. You can replace the highlighted keywords with your own username and password combination, but don't forget to write them down somewhere.

      • use admin
      • db.createUser({
      • user: "admin_username",
      • pwd: "admin_password",
      • roles: [{ role: "root", db: "admin"}]
      • })

      The first line of the preceding command selects the database called admin, which is the database where all the admin roles are stored. With the method db.createUser() you can create the actual user and define its username, password, and roles.

      Executing this command will return:

      Output

      Successfully added user: { "user" : "admin_username", "roles" : [ { "role" : "root", "db" : "admin" } ] }
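
      Optionally, before moving on, you can list the users defined in the current database with the shell's show users helper:

      • show users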

      You can now close the MongoDB shell by typing exit.

      Next, log in to the MongoDB shell again, but this time with the newly created admin user:

      • mongo -u "admin_username" -p "admin_password" --authenticationDatabase "admin"

      This command will open the MongoDB shell as a specific user, where the -u flag specifies the username and the -p flag the password of that user. The extra --authenticationDatabase flag specifies the database where the user was created and will authenticate, which in this case is admin.
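
      Equivalently, the same login can be expressed as a single connection string, shown here as a sketch; note that any special characters in the password would need to be URL-encoded:

      • mongo "mongodb://admin_username:admin_password@localhost:27017/admin"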

      Next, you'll switch to a new database and then use the db.createUser() method to create a new user with permissions to make changes to this database. Replace the highlighted sections with your own information, making sure to write these credentials down.

      Run the following command in the MongoDB shell:

      • use database_name
      • db.createUser({
      • user: "username",
      • pwd: "password",
      • roles: ["readWrite"]
      • })

      This will return the following:

      Output

      Successfully added user: { "user" : "username", "roles" : ["readWrite"] }

      After creating the database and user, fill this database with sample data that can be queried by the GraphQL server later on in this tutorial. For this, you can use the bios collection sample from the MongoDB website. By executing the commands in the following code snippet you'll insert a smaller version of this bios collection dataset into your database. You can replace the highlighted sections with your own information, but for the purposes of this tutorial, name the collection bios:

      • db.bios.insertMany([
      • {
      • "_id" : 1,
      • "name" : {
      • "first" : "John",
      • "last" : "Backus"
      • },
      • "birth" : ISODate("1924-12-03T05:00:00Z"),
      • "death" : ISODate("2007-03-17T04:00:00Z"),
      • "contribs" : [
      • "Fortran",
      • "ALGOL",
      • "Backus-Naur Form",
      • "FP"
      • ],
      • "awards" : [
      • {
      • "award" : "W.W. McDowell Award",
      • "year" : 1967,
      • "by" : "IEEE Computer Society"
      • },
      • {
      • "award" : "National Medal of Science",
      • "year" : 1975,
      • "by" : "National Science Foundation"
      • },
      • {
      • "award" : "Turing Award",
      • "year" : 1977,
      • "by" : "ACM"
      • },
      • {
      • "award" : "Draper Prize",
      • "year" : 1993,
      • "by" : "National Academy of Engineering"
      • }
      • ]
      • },
      • {
      • "_id" : ObjectId("51df07b094c6acd67e492f41"),
      • "name" : {
      • "first" : "John",
      • "last" : "McCarthy"
      • },
      • "birth" : ISODate("1927-09-04T04:00:00Z"),
      • "death" : ISODate("2011-12-24T05:00:00Z"),
      • "contribs" : [
      • "Lisp",
      • "Artificial Intelligence",
      • "ALGOL"
      • ],
      • "awards" : [
      • {
      • "award" : "Turing Award",
      • "year" : 1971,
      • "by" : "ACM"
      • },
      • {
      • "award" : "Kyoto Prize",
      • "year" : 1988,
      • "by" : "Inamori Foundation"
      • },
      • {
      • "award" : "National Medal of Science",
      • "year" : 1990,
      • "by" : "National Science Foundation"
      • }
      • ]
      • }
      • ]);

      This code block is an array consisting of multiple objects that contain information about successful scientists from the past. After running these commands to enter this collection into your database, you'll receive the following message indicating the data was added:

      Output

      { "acknowledged" : true, "insertedIds" : [ 1, ObjectId("51df07b094c6acd67e492f41") ] }

      After seeing the success message, you can close the MongoDB shell by typing exit. Next, configure the MongoDB installation to have authorization enabled so only authenticated users can access the data. To edit the configuration of the MongoDB installation, open the file containing the settings for this installation:

      • sudo nano /etc/mongodb.conf

      To enable authorization, uncomment the auth line in this file so that the security section looks like the following:

      /etc/mongodb.conf

      ...
      # Turn on/off security.  Off is currently the default
      #noauth = true
      auth = true
      ...
      

      In order to make these changes active, restart MongoDB by running:

      • sudo systemctl restart mongodb

      Make sure the database is running again by executing the command:

      • sudo systemctl status mongodb

      This will yield output similar to the following:

      Output

      ● mongodb.service - An object/document-oriented database
         Loaded: loaded (/lib/systemd/system/mongodb.service; enabled; vendor preset: enabled)
         Active: active (running) since Sat 2019-02-23 12:23:03 UTC; 1 months 13 days ago
           Docs: man:mongod(1)
       Main PID: 2388 (mongod)
          Tasks: 25 (limit: 1152)
         CGroup: /system.slice/mongodb.service
                 └─2388 /usr/bin/mongod --unixSocketPrefix=/run/mongodb --config /etc/mongodb.conf

      To make sure that your user can connect to the database you just created, try opening the MongoDB shell as an authenticated user with the command:

      • mongo -u "username" -p "password" --authenticationDatabase "database_name"

      This uses the same flags as before, only this time the --authenticationDatabase is set to the database you've created and filled with the sample data.
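
      While you're in the shell, you can optionally verify that this user can read the sample data by switching to your database and querying the bios collection:

      • use database_name
      • db.bios.find().pretty()
      • exit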

      Now you've successfully added an admin user and another user that has read/write access to the database with the sample data. Also, the database has authorization enabled, meaning you need a username and password to access it. In the next step you'll create the GraphQL server that will be connected to this database later in the tutorial.

      Step 2 — Creating the GraphQL Server

      With the database configured and filled with sample data, it's time to create a GraphQL server that can query and mutate this data. For this you'll use Express and express-graphql, which both run on Node.js. Express is a lightweight framework to quickly create Node.js HTTP servers, and express-graphql provides middleware to make it possible to quickly build GraphQL servers.

      The first step is to make sure your machine's package index is up to date:

      • sudo apt update

      Next, install Node.js on your server by running the following commands. Together with Node.js you'll also install npm, a package manager for JavaScript that runs on Node.js.

      • sudo apt install nodejs npm

      After following the installation process, check if the Node.js version you've just installed is v8.10.0 or higher:

      • nodejs -v

      This will return the following:

      Output

      v8.10.0

      To initialize a new JavaScript project, run the following commands on the server as a sudo user, replacing project_name with the name of your project.

      First, move into your home directory on the server:

      • cd ~

      Once there, create a new directory named after your project:

      • mkdir project_name

      Move into this directory:

      • cd project_name

      Finally, initialize a new npm package with the following command:

      • npm init -y

      After running npm init -y you'll receive a success message that the following package.json file was created:

      Output

      Wrote to /home/username/project_name/package.json:

      {
        "name": "project_name",
        "version": "1.0.0",
        "description": "",
        "main": "index.js",
        "scripts": {
          "test": "echo \"Error: no test specified\" && exit 1"
        },
        "keywords": [],
        "author": "",
        "license": "ISC"
      }

      Note: You can also execute npm init without the -y flag, after which you would answer multiple questions to set up the project name, author, etc. You can enter the details or just press enter to proceed.

      Now that you've initialized the project, install the packages you need to set up the GraphQL server:

      • sudo npm install --save express express-graphql graphql

      Create a new file called index.js and subsequently open this file by running:

      • sudo nano index.js

      Next, add the following code block into the newly created file to set up the GraphQL server:

      index.js

      const express = require('express');
      const graphqlHTTP = require('express-graphql');
      const { buildSchema } = require('graphql');
      
      // Construct a schema, using GraphQL schema language
      const schema = buildSchema(`
        type Query {
          hello: String
        }
      `);
      
      // Provide resolver functions for your schema fields
      const resolvers = {
        hello: () => 'Hello world!'
      };
      
      const app = express();
      app.use('/graphql', graphqlHTTP({
        schema,
        rootValue: resolvers
      }));
      app.listen(4000);
      
      console.log(`🚀 Server ready at http://localhost:4000/graphql`);
      

      This code block consists of several parts that are all important. First you describe the schema of the data that is returned by the GraphQL API:

      index.js

      ...
      // Construct a schema, using GraphQL schema language
      const schema = buildSchema(`
        type Query {
          hello: String
        }
      `);
      ...
      

      The type Query defines which queries can be executed and in what format they return their results. As you can see, the only query defined is hello, which returns data in a String format.

      The next section establishes the resolvers, where data is matched to the schemas that you can query:

      index.js

      ...
      // Provide resolver functions for your schema fields
      const resolvers = {
        hello: () => 'Hello world!'
      };
      ...
      

      These resolvers are directly linked to schemas, and return the data that matches these schemas.

      The final part of this code block initializes the GraphQL server, creates the API endpoint with Express, and describes the port on which the GraphQL endpoint is running:

      index.js

      ...
      const app = express();
      app.use('/graphql', graphqlHTTP({
        schema,
        rootValue: resolvers
      }));
      app.listen(4000);
      
      console.log(`🚀 Server ready at http://localhost:4000/graphql`);
      

      After you have added these lines, save and exit from index.js.

      Next, to actually run the GraphQL server you need to run the file index.js with Node.js. This can be done manually from the command line, but it's common practice to set up the package.json file to do this for you.

      Open the package.json file:

      • sudo nano package.json

      Add the start script shown below to the scripts section of this file:

      package.json

      {
        "name": "project_name",
        "version": "1.0.0",
        "description": "",
        "main": "index.js",
        "scripts": {
          "start": "node index.js",
          "test": "echo "Error: no test specified" && exit 1"
        },
        "keywords": [],
        "author": "",
        "license": "ISC"
      }
      

      Save and exit the file.

      To start the GraphQL server, execute the following command in the terminal:

      • npm start

      Once you run this, the terminal prompt will disappear, and a message will appear to confirm the GraphQL server is running:

      Output

      🚀 Server ready at http://localhost:4000/graphql

      If you now open up another terminal session, you can test if the GraphQL server is running by executing the following command. This sends a curl POST request with a JSON body after the --data flag that contains your GraphQL query to the local endpoint:

      • curl -X POST -H "Content-Type: application/json" --data '{ "query": "{ hello }" }' http://localhost:4000/graphql

      This will execute the query as it's described in the GraphQL schema in your code and return data in a predictable JSON format that is equal to the data as it's returned in the resolvers:

      Output

      { "data": { "hello": "Hello world!" } }

      Note: In case the Express server crashes or gets stuck, you need to manually kill the node process that is running on the server. To kill all such processes, you can execute the following:

      • sudo killall node

      After that, you can restart the GraphQL server by running:

      • npm start

      In this step you've created the first version of the GraphQL server that is now running on a local endpoint that can be accessed on your server. Next, you'll connect your resolvers to the MongoDB database.

      Step 3 — Connecting to the MongoDB Database

      With the GraphQL server in order, you can now set up the connection with the MongoDB database that you configured and filled with data before and create a new schema that matches this data.

      To be able to connect to MongoDB from the GraphQL server, install the JavaScript package for MongoDB from npm:

      • sudo npm install --save mongodb

      Once this has been installed, open up index.js in your text editor:

      • sudo nano index.js

      Next, add the following highlighted code to index.js just after the imported dependencies and fill the highlighted values with your own connection details to the local MongoDB database. The username, password, and database_name are those that you created in the first step of this tutorial.

      index.js

      const express = require('express');
      const graphqlHTTP = require('express-graphql');
      const { buildSchema } = require('graphql');
      const { MongoClient } = require('mongodb');
      
      const context = () => MongoClient.connect('mongodb://username:password@localhost:27017/database_name', { useNewUrlParser: true }).then(client => client.db('database_name'));
      ...
      

      These lines add the connection to the local MongoDB database to a function called context. This context function will be available to every resolver, which is why you use this to set up database connections.

      Next, in your index.js file, add the context function to the initialization of the GraphQL server by inserting the following highlighted lines:

      index.js

      ...
      const app = express();
      app.use('/graphql', graphqlHTTP({
        schema,
        rootValue: resolvers,
        context
      }));
      app.listen(4000);
      
      console.log(`🚀 Server ready at http://localhost:4000/graphql`);
      

      Now you can call this context function from your resolvers, and thereby read variables from the MongoDB database. If you look back to the first step of this tutorial, you can see which values are present in the database. From here, define a new GraphQL schema that matches this data structure. Overwrite the previous value for the constant schema with the following highlighted lines:

      index.js

      ...
      // Construct a schema, using GraphQL schema language
      const schema = buildSchema(`
        type Query {
          bios: [Bio]
        }
        type Bio {
          name: Name,
          title: String,
          birth: String,
          death: String,
          awards: [Award]
        }
        type Name {
          first: String,
          last: String
        },
        type Award {
          award: String,
          year: Float,
          by: String
        }
      `);
      ...
      

      The type Query has changed and now returns a collection of the new type Bio. This new type consists of several fields, including two other non-scalar types, Name and Award, meaning these types don't match a predefined format like String or Float. For more information on defining GraphQL schemas you can look at the documentation for GraphQL.

      Also, since the resolvers tie the data from the database to the schema, update the code for the resolvers when you make changes to the schema. Create a new resolver that is called bios, which is equal to the Query that can be found in the schema and the name of the collection in the database. Note that, in this case, the name of the collection in db.collection('bios') is bios, but that this would change if you had assigned a different name to your collection.

      Add the following highlighted line to index.js:

      index.js

      ...
      // Provide resolver functions for your schema fields
      const resolvers = {
        bios: (args, context) => context().then(db => db.collection('bios').find().toArray())
      };
      ...
      

      This function will use the context function, which you can use to retrieve variables from the MongoDB database. Once you have made these changes to the code, save and exit index.js.

      In order to make these changes active, you need to restart the GraphQL server. You can stop the current process by using the keyboard combination CTRL + C and then start the GraphQL server again by running:

      • npm start

      Now you're able to use the updated schema and query the data that is inside the database. If you look at the schema, you'll see that the Query for bios returns the type Bio; this type could also return the type Name.

      To return all the first and last names for all the bios in the database, send the following request to the GraphQL server in a new terminal window:

      • curl -X POST -H "Content-Type: application/json" --data '{ "query": "{ bios { name { first, last } } }" }' http://localhost:4000/graphql

      This again will return a JSON object that matches the structure of the schema:

      Output

      {"data":{"bios":[{"name":{"first":"John","last":"Backus"}},{"name":{"first":"John","last":"McCarthy"}}]}}

      You can easily retrieve more variables from the bios by extending the query with any of the types that are described in the type for Bio.
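
      For example, a query that also pulls in each bio's awards (a sketch that assumes the schema defined above) could look like this:

      • curl -X POST -H "Content-Type: application/json" --data '{ "query": "{ bios { name { first, last }, awards { award, year } } }" }' http://localhost:4000/graphql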

      Also, you can retrieve a bio by specifying an id. In order to do this you need to add another type to the Query type and extend the resolvers. To do this, open index.js in your text editor:

      • sudo nano index.js

      Add the following highlighted lines of code:

      index.js

      ...
      // Construct a schema, using GraphQL schema language
      const schema = buildSchema(`
        type Query {
          bios: [Bio]
          bio(id: Int): Bio
        }
      
        ...
      
      // Provide resolver functions for your schema fields
      const resolvers = {
        bios: (args, context) => context().then(db => db.collection('bios').find().toArray()),
        bio: (args, context) => context().then(db => db.collection('bios').findOne({ _id: args.id }))
      };
      ...
      

      Save and exit the file.

      In the terminal that is running your GraphQL server, press CTRL + C to stop it from running, then execute the following to restart it:

      • npm start

      In another terminal window, execute the following GraphQL request:

      • curl -X POST -H "Content-Type: application/json" --data '{ "query": "{ bio(id: 1) { name { first, last } } }" }' http://localhost:4000/graphql

      This returns the entry for the bio that has an id equal to 1:

      Output

      { "data": { "bio": { "name": { "first": "John", "last": "Backus" } } } }

      Being able to query data from a database is not the only feature of GraphQL; you can also change the data in the database. To do this, open up index.js:

      • sudo nano index.js

      Next to the type Query you can also use the type Mutation, which allows you to mutate the database. To use this type, add it to the schema and also create input types by inserting these highlighted lines:

      index.js

      ...
      // Construct a schema, using GraphQL schema language
      const schema = buildSchema(`
        type Query {
          bios: [Bio]
          bio(id: Int): Bio
        }
        type Mutation {
          addBio(input: BioInput) : Bio
        }
        input BioInput {
          name: NameInput
          title: String
          birth: String
          death: String
        }
        input NameInput {
          first: String
          last: String
        }
      ...
      

      These input types define which variables can be used as inputs, which you can access in the resolvers and use to insert a new document in the database. Do this by adding the following lines to index.js:

      index.js

      ...
      // Provide resolver functions for your schema fields
      const resolvers = {
        bios: (args, context) => context().then(db => db.collection('bios').find().toArray()),
        bio: (args, context) => context().then(db => db.collection('bios').findOne({ _id: args.id })),
        addBio: (args, context) => context().then(db => db.collection('bios').insertOne({ name: args.input.name, title: args.input.title, death: args.input.death, birth: args.input.birth})).then(response => response.ops[0])
      };
      ...
      

      Just as with the resolvers for regular queries, you need to return a value from the resolver in index.js. In the case of a Mutation where the type Bio is mutated, you would return the value of the mutated bio.

      At this point, your index.js file will contain the following lines:

      index.js

      const express = require('express');
      const graphqlHTTP = require('express-graphql');
      const { buildSchema } = require('graphql');
      const { MongoClient } = require('mongodb');
      
      const context = () => MongoClient.connect('mongodb://username:password@localhost:27017/database_name', { useNewUrlParser: true })
        .then(client => client.db('database_name'));
      
      // Construct a schema, using GraphQL schema language
      const schema = buildSchema(`
        type Query {
          bios: [Bio]
          bio(id: Int): Bio
        }
        type Mutation {
          addBio(input: BioInput) : Bio
        }
        input BioInput {
          name: NameInput
          title: String
          birth: String
          death: String
        }
        input NameInput {
          first: String
          last: String
        }
        type Bio {
          name: Name,
          title: String,
          birth: String,
          death: String,
          awards: [Award]
        }
        type Name {
          first: String,
          last: String
        },
        type Award {
          award: String,
          year: Float,
          by: String
        }
      `);
      
      // Provide resolver functions for your schema fields
      const resolvers = {
        bios: (args, context) => context().then(db => db.collection('bios').find().toArray()),
        bio: (args, context) => context().then(db => db.collection('bios').findOne({ _id: args.id })),
        addBio: (args, context) => context().then(db => db.collection('bios').insertOne({ name: args.input.name, title: args.input.title, death: args.input.death, birth: args.input.birth})).then(response => response.ops[0])
      };
      
      const app = express();
      app.use('/graphql', graphqlHTTP({
        schema,
        rootValue: resolvers,
        context
      }));
      app.listen(4000);
      
      console.log(`🚀 Server ready at http://localhost:4000/graphql`);
      

      Save and exit index.js.

      To check if your new mutation is working, restart the GraphQL server by pressing CTRL + C and running npm start in the terminal that is running your GraphQL server, then open another terminal session to execute the following curl request. Just as with the curl request for queries, the body in the --data flag will be sent to the GraphQL server. The name values in the input will be added to the database:

      • curl -X POST -H "Content-Type: application/json" --data '{ "query": "mutation { addBio(input: { name: { first: \"test\", last: \"user\" } }) { name { first, last } } }" }' http://localhost:4000/graphql

      This returns the following result, meaning you just inserted a new bio to the database:

      Output

      { "data": { "addBio": { "name": { "first": "test", "last": "user" } } } }

      In this step, you created the connection with MongoDB and the GraphQL server, allowing you to retrieve and mutate data from this database by executing GraphQL queries. Next, you'll expose this GraphQL server for remote access.

      Step 4 — Allowing Remote Access

      Having set up the database and the GraphQL server, you can now configure the GraphQL server to allow remote access. For this you'll use Nginx, which you set up in the prerequisite tutorial How to install Nginx on Ubuntu 18.04. This Nginx configuration can be found in the /etc/nginx/sites-available/example.com file, where example.com is the server name you added in the prerequisite tutorial.

      Open this file for editing, replacing example.com with your domain name:

      • sudo nano /etc/nginx/sites-available/example.com

      In this file you can find a server block that listens to port 80, where you've already set up a value for server_name in the prerequisite tutorial. Inside this server block, change the value for root to be the directory in which you created the code for the GraphQL server and add index.js as the index. Also, within the location block, set a proxy_pass so you can use your server's IP or a custom domain name to refer to the GraphQL server:

      /etc/nginx/sites-available/example.com

      server {
        listen 80;
        listen [::]:80;
      
        root /project_name;
        index index.js;
      
        server_name example.com;
      
        location / {
          proxy_pass http://localhost:4000/graphql;
        }
      }
      

      Make sure there are no Nginx syntax errors in this configuration file by running:

      • sudo nginx -t

      You will receive the following output:

      Output

      nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
      nginx: configuration file /etc/nginx/nginx.conf test is successful

      When there are no errors found for the configuration file, restart Nginx:

      • sudo systemctl restart nginx

      Now you will be able to access your GraphQL server from any terminal session by executing the following command, replacing example.com with either your server's IP or your custom domain name:

      • curl -X POST -H "Content-Type: application/json" --data '{ "query": "{ bios { name { first, last } } }" }' http://example.com

      This will return the same JSON object as the one of the previous step, including any additional data you might have added by using a mutation:

      Output

      {"data":{"bios":[{"name":{"first":"John","last":"Backus"}},{"name":{"first":"John","last":"McCarthy"}},{"name":{"first":"test","last":"user"}}]}}

      Now that you have made your GraphQL server accessible remotely, make sure your GraphQL server doesn't go down when you close the terminal or the server restarts. This way, your MongoDB database will be accessible via the GraphQL server whenever you want to make a request.

      To do this, use the npm package forever, a CLI tool that ensures that your command line scripts run continuously, or get restarted in case of any failure.

      Install forever with npm:

      • sudo npm install forever -g

      Once it is done installing, open the package.json file again and add the deploy script shown below:

      • sudo nano package.json

      package.json

      {
        "name": "project_name",
        "version": "1.0.0",
        "description": "",
        "main": "index.js",
        "scripts": {
          "start": "node index.js",
          "deploy": "forever start --minUptime 2000 --spinSleepTime 5 index.js",
          "test": "echo "Error: no test specified" && exit 1"
        },
        ...
      

      To start the GraphQL server with forever enabled, run the following command:

      • npm run deploy

      This will start the index.js file containing the GraphQL server with forever, and ensure it will keep running with a minimum uptime of 2000 milliseconds and 5 milliseconds between every restart in case of a failure. The GraphQL server will now continuously run in the background, so you don't need to open a new tab any longer when you want to send a request to the server.
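
      If you want to confirm that the process is indeed being managed, the forever CLI can list what it is currently running; this optional check looks like:

      • forever list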

      You've now created a GraphQL server that is using MongoDB to store data and is set up to allow access from a remote server. In the next step you'll enable the GraphiQL playground, which will make it easier for you to inspect the GraphQL server.

      Step 5 — Enabling GraphiQL Playground

      Being able to send cURL requests to the GraphQL server is great, but it would be faster to have a user interface that can execute GraphQL requests immediately, especially during development. For this you can use GraphiQL, an interface supported by the package express-graphql.

      To enable GraphiQL, edit the file index.js:

      • sudo nano index.js

      Add the following highlighted lines:

      index.js

      const app = express();
      app.use('/graphql', graphqlHTTP({
        schema,
        rootValue: resolvers,
        context,
        graphiql: true // serve the GraphiQL in-browser IDE at this endpoint
      }));
      app.listen(4000);

      console.log(`🚀 Server ready at http://localhost:4000/graphql`);


      Save and exit the file.

      In order for these changes to take effect, stop the running forever process by executing:

      • forever stop index.js

      Next, start it again so the latest version of your GraphQL server is running:

      • npm run deploy

      Open a browser at the URL http://example.com, replacing example.com with your domain name or your server IP. You will see the GraphiQL playground, where you can type GraphQL requests.

      The initial screen for the GraphiQL playground

      On the left side of this playground you can type the GraphQL queries and mutations, while the output will be shown on the right side of the playground. To test if this is working, type the following query on the left side:

      query {
        bios {
          name {
            first
            last
          }
        }
      }
      

      This will output the same result on the right side of the playground, again in JSON format:

      The GraphiQL playground in action

      Now you can send GraphQL requests using the terminal and the GraphiQL playground.

      Conclusion

      In this tutorial you've set up a MongoDB database and retrieved and mutated data from this database using GraphQL, Node.js, and Express for the server. Additionally, you configured Nginx to allow remote access to this server. Not only can you send requests to this GraphQL server directly, but you can also use GraphiQL as a visual, in-browser GraphQL interface.

      If you want to learn about GraphQL, you can watch a recording of my presentation on GraphQL at NDC {London} or visit the website howtographql.com for tutorials about GraphQL. To study how GraphQL interacts with other technologies, check out the tutorial on How to Manually Set Up a Prisma Server on Ubuntu 18.04, and for more information on building applications with MongoDB, see How To Build a Blog with Nest.js, MongoDB, and Vue.js.




      How To Install and Configure Zabbix to Securely Monitor Remote Servers on Ubuntu 18.04


      The author selected the Open Source Initiative to receive a donation as part of the Write for DOnations program.

      Introduction

      Zabbix is open-source monitoring software for networks and applications. It offers real-time monitoring of thousands of metrics collected from servers, virtual machines, network devices, and web applications. These metrics can help you determine the current health of your IT infrastructure and detect problems with hardware or software components before customers complain. Useful information is stored in a database so you can analyze data over time and improve the quality of provided services, or plan upgrades of your equipment.

      Zabbix uses several options for collecting metrics, including agentless monitoring of user services and client-server architecture. To collect server metrics, it uses a small agent on the monitored client to gather data and send it to the Zabbix server. Zabbix supports encrypted communication between the server and connected clients, so your data is protected while it travels over insecure networks.

      The Zabbix server stores its data in a relational database powered by MySQL, PostgreSQL, or Oracle. You can also store historical data in NoSQL databases like Elasticsearch and TimescaleDB. Zabbix provides a web interface so you can view data and configure system settings.

      In this tutorial, you will configure two machines. One will be configured as the server, and the other as a client that you’ll monitor. The server will use a MySQL database to record monitoring data and use Apache to serve the web interface.

      Prerequisites

      To follow this tutorial, you will need:

      • Two Ubuntu 18.04 servers set up by following the Initial Server Setup Guide for Ubuntu 18.04, including a non-root user with sudo privileges and a firewall configured with ufw. On one server, you will install Zabbix; this tutorial will refer to this as the Zabbix server. It will monitor your second server; this second server will be referred to as the second Ubuntu server.

      • The server that will run the Zabbix server needs Apache, MySQL, and PHP installed. Follow this guide to configure those on your Zabbix server.

      Additionally, because the Zabbix Server is used to access valuable information about your infrastructure that you would not want unauthorized users to access, it’s important that you keep your server secure by installing a TLS/SSL certificate. This is optional but strongly encouraged. You can follow the Let’s Encrypt on Ubuntu 18.04 guide to obtain the free TLS/SSL certificate.

      Step 1 — Installing the Zabbix Server

      First, you need to install Zabbix on the server where you installed MySQL, Apache, and PHP. Log into this machine as your non-root user:

      • ssh sammy@zabbix_server_ip_address

      Zabbix is available in Ubuntu’s package manager, but it’s outdated, so use the official Zabbix repository to install the latest stable version. Download and install the repository configuration package:

      • wget https://repo.zabbix.com/zabbix/4.2/ubuntu/pool/main/z/zabbix-release/zabbix-release_4.2-1+bionic_all.deb
      • sudo dpkg -i zabbix-release_4.2-1+bionic_all.deb

      You will see the following output:

      Output

      Selecting previously unselected package zabbix-release.
      (Reading database ... 61483 files and directories currently installed.)
      Preparing to unpack zabbix-release_4.2-1+bionic_all.deb ...
      Unpacking zabbix-release (4.2-1+bionic) ...
      Setting up zabbix-release (4.2-1+bionic) ...

      Update the package index so the new repository is included:

      • sudo apt update

      Then install the Zabbix server and web frontend with MySQL database support:

      • sudo apt install zabbix-server-mysql zabbix-frontend-php

      Also, install the Zabbix agent, which will let you collect data about the Zabbix server status itself.

      • sudo apt install zabbix-agent

      Before you can use Zabbix, you have to set up a database to hold the data that the Zabbix server will collect from its agents. You'll do this in the next step.

      Step 2 — Configuring the MySQL Database for Zabbix

      You need to create a new MySQL database and populate it with some basic information in order to make it suitable for Zabbix. You'll also create a specific user for this database so Zabbix isn't logging into MySQL with the root account.

      Log in to MySQL as the root user using the root password that you set up during the MySQL server installation:

      • mysql -uroot -p

      Create the Zabbix database with UTF-8 character support:

      • create database zabbix character set utf8 collate utf8_bin;

      Then create a user that the Zabbix server will use, give it access to the new database, and set the password for the user:

      • grant all privileges on zabbix.* to zabbix@localhost identified by 'your_zabbix_mysql_password';

      Then apply these new permissions:

      • flush privileges;

      That takes care of the user and the database. Exit out of the database console:

      • quit;

      Next you have to import the initial schema and data. The Zabbix installation provided you with a file that sets this up.

      Run the following command to set up the schema and import the data into the zabbix database. Use zcat since the data in the file is compressed.

      • zcat /usr/share/doc/zabbix-server-mysql/create.sql.gz | mysql -uzabbix -p zabbix

      When prompted, enter the password for the zabbix MySQL user that you configured.

      This command will not output any errors if it was successful. If you see the error ERROR 1045 (28000): Access denied for user 'zabbix'@'localhost' (using password: YES), make sure you used the password for the zabbix user and not the root user.
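
      If you want to double-check that the import worked, you can optionally list a few of the tables the schema created (the schema creates a large number of tables, so the output here is trimmed with head):

      • mysql -uzabbix -p zabbix -e "show tables;" | head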

      In order for the Zabbix server to use this database, you need to set the database password in the Zabbix server configuration file. Open the configuration file in your preferred text editor. This tutorial will use nano:

      • sudo nano /etc/zabbix/zabbix_server.conf

      Look for the following section of the file:

      /etc/zabbix/zabbix_server.conf

      ### Option: DBPassword                           
      #       Database password. Ignored for SQLite.   
      #       Comment this line if no password is used.
      #                                                
      # Mandatory: no                                  
      # Default:                                       
      # DBPassword=
      

      These comments in the file explain how to connect to the database. You need to set the DBPassword value in the file to the password for your database user. Add this line below those comments to configure the database:

      /etc/zabbix/zabbix_server.conf

      ...
      DBPassword=your_zabbix_mysql_password
      

      Save and close zabbix_server.conf by pressing CTRL+X, followed by Y and then ENTER if you're using nano.

      That takes care of the Zabbix server configuration. Next, you will make some modifications to your PHP setup in order for the Zabbix web interface to work properly.

      Step 3 — Configuring PHP for Zabbix

      The Zabbix web interface is written in PHP and requires some special PHP server settings. The Zabbix installation process created an Apache configuration file that contains these settings. It is located in the directory /etc/zabbix and is loaded automatically by Apache. You need to make a small change to this file, so open it up with the following:

      • sudo nano /etc/zabbix/apache.conf

      The file contains PHP settings that meet the necessary requirements for the Zabbix web interface. However, the timezone setting is commented out by default. To make sure that Zabbix uses the correct time, you need to set the appropriate timezone.

      /etc/zabbix/apache.conf

      ...
      <IfModule mod_php7.c>
          php_value max_execution_time 300
          php_value memory_limit 128M
          php_value post_max_size 16M
          php_value upload_max_filesize 2M
          php_value max_input_time 300
          php_value always_populate_raw_post_data -1
          # php_value date.timezone Europe/Riga
      </IfModule>
      

      Uncomment the timezone line, highlighted in the preceding code block, and change it to your timezone. You can use this list of supported time zones to find the right one for you. Then save and close the file.
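
      As a reference, if your timezone were America/New_York, the uncommented line would look like this:

      /etc/zabbix/apache.conf

      ...
          php_value date.timezone America/New_York
      ...
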

      Now restart Apache to apply these new settings.

      • sudo systemctl restart apache2

      You can now start the Zabbix server.

      • sudo systemctl start zabbix-server

      Then check whether the Zabbix server is running properly:

      • sudo systemctl status zabbix-server

      You will see the following status:

      Output

      ● zabbix-server.service - Zabbix Server
         Loaded: loaded (/lib/systemd/system/zabbix-server.service; disabled; vendor preset: enabled)
         Active: active (running) since Fri 2019-04-05 08:50:54 UTC; 3s ago
        Process: 16497 ExecStart=/usr/sbin/zabbix_server -c $CONFFILE (code=exited, status=0/SUCCESS)
      ...

      Finally, enable the server to start at boot time:

      • sudo systemctl enable zabbix-server

      The server is set up and connected to the database. Next, set up the web frontend.

      Note: As mentioned in the Prerequisites section, it is recommended that you enable SSL/TLS on your server. You can follow this tutorial now to obtain a free SSL certificate for Apache on Ubuntu 18.04. After obtaining your SSL/TLS certificates, you can come back and complete this tutorial.

      Step 4 — Configuring Settings for the Zabbix Web Interface

      The web interface lets you see reports and add hosts that you want to monitor, but it needs some initial setup before you can use it. Launch your browser and go to the address http://zabbix_server_name/zabbix/. On the first screen, you will see a welcome message. Click Next step to continue.

      On the next screen, you will see the table that lists all of the prerequisites to run Zabbix.

      Prerequisites

      All of the values in this table must be OK, so verify that they are. Be sure to scroll down and look at all of the prerequisites. Once you've verified that everything is ready to go, click Next step to proceed.

      The next screen asks for database connection information.

      DB Connection

      You told the Zabbix server about your database, but the Zabbix web interface also needs access to the database to manage hosts and read data. Therefore, enter the MySQL credentials you configured in Step 2 and click Next step to proceed.

      On the next screen, you can leave the options at their default values.

      Zabbix Server Details

      The Name is optional; it is used in the web interface to distinguish one server from another in case you have several monitoring servers. Click Next step to proceed.

      The next screen will show the pre-installation summary so you can confirm everything is correct.

      Summary

      Click Next step to proceed to the final screen.

      The web interface setup is complete! This process creates the configuration file /usr/share/zabbix/conf/zabbix.conf.php which you could back up and use in the future. Click Finish to proceed to the login screen. The default user is Admin and the password is zabbix.

      Before you log in, set up the Zabbix agent on your second Ubuntu server.

      Step 5 — Installing and Configuring the Zabbix Agent

      Now you need to configure the agent software that will send monitoring data to the Zabbix server.

      Log in to the second Ubuntu server:

      • ssh sammy@second_ubuntu_server_ip_address

      Then, just like on the Zabbix server, run the following commands to install the repository configuration package:

      • wget https://repo.zabbix.com/zabbix/4.2/ubuntu/pool/main/z/zabbix-release/zabbix-release_4.2-1+bionic_all.deb
      • sudo dpkg -i zabbix-release_4.2-1+bionic_all.deb

      Next, update the package index:

      • sudo apt update

      Then install the Zabbix agent:

      • sudo apt install zabbix-agent

      Zabbix supports certificate-based encryption, but setting up a certificate authority is beyond the scope of this tutorial. Instead, you will use pre-shared keys (PSK) to secure the connection between the server and agent.

      First, generate a PSK:

      • sudo sh -c "openssl rand -hex 32 > /etc/zabbix/zabbix_agentd.psk"

      Show the key so you can copy it somewhere safe; you will need it later to configure the host in the web interface.

      • cat /etc/zabbix/zabbix_agentd.psk

      The key will look something like this:

      Output

      12eb854dea38ac9ee7d1ded2d74cee6262b0a56710f6946f7913d674ab82cdd4
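
      Optionally, you can tighten the permissions on the key file so that only the agent's user account can read it. This assumes the agent runs as the zabbix system user, which is the default for the Ubuntu package:

      • sudo chown zabbix:zabbix /etc/zabbix/zabbix_agentd.psk
      • sudo chmod 600 /etc/zabbix/zabbix_agentd.psk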

      Now edit the Zabbix agent settings to set up its secure connection to the Zabbix server. Open the agent configuration file in your text editor:

      • sudo nano /etc/zabbix/zabbix_agentd.conf

      Each setting in this file is documented with informative comments, but you only need to edit a few of them.

      First you have to edit the IP address of the Zabbix server. Find the following section:

      /etc/zabbix/zabbix_agentd.conf

      ...
      ### Option: Server
      #       List of comma delimited IP addresses (or hostnames) of Zabbix servers.
      #       Incoming connections will be accepted only from the hosts listed here.
      #       If IPv6 support is enabled then '127.0.0.1', '::127.0.0.1', '::ffff:127.0.0.1' are treated equally.
      #
      # Mandatory: no
      # Default:
      # Server=
      
      Server=127.0.0.1
      ...
      

      Change the default value to the IP of your Zabbix server:

      /etc/zabbix/zabbix_agentd.conf

      ...
      Server=zabbix_server_ip_address
      ...
      

      Next, find the section that configures the secure connection to the Zabbix server and enable pre-shared key support. Find the TLSConnect section, which looks like this:

      /etc/zabbix/zabbix_agentd.conf

      ...
      ### Option: TLSConnect
      #       How the agent should connect to server or proxy. Used for active checks.
      #       Only one value can be specified:
      #               unencrypted - connect without encryption
      #               psk         - connect using TLS and a pre-shared key
      #               cert        - connect using TLS and a certificate
      #
      # Mandatory: yes, if TLS certificate or PSK parameters are defined (even for 'unencrypted' connection)
      # Default:
      # TLSConnect=unencrypted
      ...
      

      Then add this line to configure pre-shared key support:

      /etc/zabbix/zabbix_agentd.conf

      ...
      TLSConnect=psk
      ...
      

      Next, locate the TLSAccept section, which looks like this:

      /etc/zabbix/zabbix_agentd.conf

      ...
      ### Option: TLSAccept
      #       What incoming connections to accept.
      #       Multiple values can be specified, separated by comma:
      #               unencrypted - accept connections without encryption
      #               psk         - accept connections secured with TLS and a pre-shared key
      #               cert        - accept connections secured with TLS and a certificate
      #
      # Mandatory: yes, if TLS certificate or PSK parameters are defined (even for 'unencrypted' connection)
      # Default:
      # TLSAccept=unencrypted
      ...
      

      Configure incoming connections to support pre-shared keys by adding this line:

      /etc/zabbix/zabbix_agentd.conf

      ...
      TLSAccept=psk
      ...
      

      Next, find the TLSPSKIdentity section, which looks like this:

      /etc/zabbix/zabbix_agentd.conf

      ...
      ### Option: TLSPSKIdentity
      #       Unique, case sensitive string used to identify the pre-shared key.
      #
      # Mandatory: no
      # Default:
      # TLSPSKIdentity=
      ...
      

      Choose a unique name to identify your pre-shared key by adding this line:

      /etc/zabbix/zabbix_agentd.conf

      ...
      TLSPSKIdentity=PSK 001
      ...
      

      You'll use this as the PSK ID when you add your host through the Zabbix web interface.

      Then set the option that points to your previously created pre-shared key. Locate the TLSPSKFile option:

      /etc/zabbix/zabbix_agentd.conf

      ...
      ### Option: TLSPSKFile
      #       Full pathname of a file containing the pre-shared key.
      #
      # Mandatory: no
      # Default:
      # TLSPSKFile=
      ...
      

      Add this line to point the Zabbix agent to your PSK file you created:

      /etc/zabbix/zabbix_agentd.conf

      ...
      TLSPSKFile=/etc/zabbix/zabbix_agentd.psk
      ...
      
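
      Taken together, the encryption-related settings in zabbix_agentd.conf should now look like this, with your Zabbix server's IP in place of zabbix_server_ip_address:

      /etc/zabbix/zabbix_agentd.conf

      ...
      Server=zabbix_server_ip_address
      ...
      TLSConnect=psk
      TLSAccept=psk
      TLSPSKIdentity=PSK 001
      TLSPSKFile=/etc/zabbix/zabbix_agentd.psk
      ...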

      Save and close the file. Now you can restart the Zabbix agent and set it to start at boot time:

      • sudo systemctl restart zabbix-agent
      • sudo systemctl enable zabbix-agent

      For good measure, check that the Zabbix agent is running properly:

      • sudo systemctl status zabbix-agent

      You will see the following status, indicating the agent is running:

      Output

      ● zabbix-agent.service - Zabbix Agent
         Loaded: loaded (/lib/systemd/system/zabbix-agent.service; enabled; vendor preset: enabled)
         Active: active (running) since Fri 2019-04-05 09:03:04 UTC; 1s ago
      ...

      The agent will listen on port 10050 for connections from the server. Configure UFW to allow connections to this port:

      • sudo ufw allow 10050/tcp

      You can learn more about UFW in How To Set Up a Firewall with UFW on Ubuntu 18.04.
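
      To confirm that the new rule is in place, you can optionally check the firewall status:

      • sudo ufw status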

      Your agent is now ready to send data to the Zabbix server. But in order to use it, you have to link to it from the server's web console. In the next step, you will complete the configuration.

      Step 6 — Adding the New Host to the Zabbix Server

      Installing an agent on a server you want to monitor is only half of the process. Each host you want to monitor needs to be registered on the Zabbix server, which you can do through the web interface.

      Log in to the Zabbix Server web interface by navigating to the address http://zabbix_server_name/zabbix/.

      The Zabbix login screen

      When you have logged in, click on Configuration, and then Hosts in the top navigation bar. Then click the Create host button in the top right corner of the screen. This will open the host configuration page.

      Creating a host

      Adjust the Host name and IP address to reflect the host name and IP address of your second Ubuntu server, then add the host to a group. You can select an existing group, for example Linux servers, or create your own group. The host can be in multiple groups. To do this, enter the name of an existing or new group in the Groups field and select the desired value from the proposed list.

      Once you've added the group, click the Templates tab.

      Adding a template to the host

      Type Template OS Linux in the Search field and then click Add to add this template to the host.

      Next, navigate to the Encryption tab. Select PSK for both Connections to host and Connections from host. Then set PSK identity to PSK 001, which is the value of the TLSPSKIdentity setting of the Zabbix agent you configured previously. Then set PSK value to the key you generated for the Zabbix agent. It's the one stored in the file /etc/zabbix/zabbix_agentd.psk on the agent machine.

      Setting up the encryption

      Finally, click the Add button at the bottom of the form to create the host.

      You will see your new host in the list. Wait for a minute and reload the page to see green labels indicating that everything is working fine and the connection is encrypted.

      Zabbix shows your new host

      If you have additional servers you need to monitor, log in to each host, install the Zabbix agent, generate a PSK, configure the agent, and add the host to the web interface following the same steps you followed to add your first host.

      The Zabbix server is now monitoring your second Ubuntu server. Now, set up email notifications to be notified about problems.

      Step 7 — Configuring Email Notifications

      Zabbix supports several types of notifications out of the box: email, Jabber, SMS, etc. You can also use alternative notification methods, such as Telegram or Slack. You can see the full list of integrations here.

      The simplest communication method is email, and this tutorial will configure notifications for this media type.

      Click on Administration, and then Media types in the top navigation bar. You will see the list of all media types. Click on Email.

      Adjust the SMTP options according to the settings provided by your email service. This tutorial uses Gmail's SMTP capabilities to set up email notifications; if you would like more information about setting this up, see How To Use Google's SMTP Server.


      Note: If you use 2-Step Verification with Gmail, you need to generate an App Password for Zabbix. You don't need to remember this password; you'll only have to enter it once during setup. You will find instructions on how to generate this password in the Google Help Center.

      You can also choose the message format, either HTML or plain text. Finally, click the Update button at the bottom of the form to update the email parameters.

      Setting up email

      Now, create a new user. Click on Administration, and then Users in the top navigation bar. You will see the list of users. Then click the Create user button in the top right corner of the screen. This will open the user configuration page.

      Creating a user

      Enter the new username in the Alias field and set up a new password. Next, add the user to the administrators group. Type Zabbix administrators in the Groups field and select it from the proposed list.

      Once you've added the group, click the Media tab and click on the Add underlined link. You will see a pop-up window.

      Adding an email

      Enter your email address in the Send to field. You can leave the rest of the options at the default values. Click the Add button at the bottom to submit.

      Now navigate to the Permissions tab. Select Zabbix Super Admin from the User type drop-down menu.

      Finally, click the Add button at the bottom of the form to create the user.

      Now you need to enable notifications. Click on the Configuration tab, and then Actions in the top navigation bar. You will see a pre-configured action, which is responsible for sending notifications to all Zabbix administrators. You can review and change the settings by clicking on its name. For the purposes of this tutorial, use the default parameters. To enable the action, click on the red Disabled link in the Status column.

      Now you are ready to receive alerts. In the next step, you will generate one to test your notification setup.

      Step 8 — Generating a Test Alert

      In this step, you will generate a test alert to ensure everything is connected. By default, Zabbix keeps track of the amount of free disk space on your server. It automatically detects all disk mounts and adds the corresponding checks. This discovery is executed every hour, so you need to wait a while for the notification to be triggered.

      Create a temporary file that's large enough to trigger Zabbix's file system usage alert. To do this, log in to your second Ubuntu server if you're not already connected.

      • ssh sammy@second_ubuntu_server_ip_address

      Next, determine how much free space you have on the server. You can use the df command to find out:

      • df -h

      The df command reports the disk space usage of your file system, and the -h flag makes the output human-readable. You'll see output like the following:

      Output

      Filesystem      Size  Used Avail Use% Mounted on
      /dev/vda1        25G  1.2G   23G   5% /

      In this case, the free space is 23GB. Your free space may differ.

      Use the fallocate command, which allows you to pre-allocate or de-allocate space to a file, to create a file that takes up more than 80% of the available disk space. This will be enough to trigger the alert:

      • fallocate -l 20G /tmp/temp.img
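
      If you like, run df -h again to confirm that the file consumed the space; Zabbix's free disk space check will pick this up on its next run:

      • df -h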

      After around an hour, Zabbix will trigger an alert about the amount of free disk space and will run the action you configured, sending the notification message. You can check your inbox for the message from the Zabbix server. You will see a message like:

      Output

      Problem started at 10:37:54 on 2019.04.05
      Problem name: Free disk space is less than 20% on volume /
      Host: Second Ubuntu server
      Severity: Warning
      Original problem ID: 34

      You can also navigate to the Monitoring tab, and then Dashboard to see the notification and its details.

      Main dashboard

      Now that you know the alerts are working, delete the temporary file you created so you can reclaim your disk space:

      • rm -rf /tmp/temp.img

      After a minute, Zabbix will send the recovery message and the alert will disappear from the main dashboard.

      Conclusion

      In this tutorial, you learned how to set up a simple and secure monitoring solution which will help you monitor the state of your servers. It can now warn you of problems, and you have the opportunity to analyze the processes occurring in your IT infrastructure.

      To learn more about setting up monitoring infrastructure, check out How To Install Elasticsearch, Logstash, and Kibana (Elastic Stack) on Ubuntu 18.04 and How To Gather Infrastructure Metrics with Metricbeat on Ubuntu 18.04.


