
      How To Set Up a Remote Database to Optimize Site Performance with MySQL on Ubuntu 18.04


      Introduction

      As your application or website grows, there may come a point where you’ve outgrown your current server setup. If you are hosting your web server and database backend on the same machine, it may be a good idea to separate these two functions so that each can operate on its own hardware and share the load of responding to your visitors’ requests.

      In this guide, we’ll go over how to configure a remote MySQL database server that your web application can connect to. We will use WordPress as an example in order to have something to work with, but the technique is widely applicable to any application backed by MySQL.

      Prerequisites

      Before beginning this tutorial, you will need:

      • Two Ubuntu 18.04 servers. Each should have a non-root user with sudo privileges and a UFW firewall enabled, as described in our Initial Server Setup with Ubuntu 18.04 tutorial. One of these servers will host your MySQL backend, and throughout this guide we will refer to it as the database server. The other will connect to your database server remotely and act as your web server; likewise, we will refer to it as the web server over the course of this guide.
      • Nginx and PHP installed on your web server. Our tutorial How To Install Linux, Nginx, MySQL, PHP (LEMP stack) in Ubuntu 18.04 will guide you through the process, but note that you should skip Step 2 of this tutorial, which focuses on installing MySQL, as you will install MySQL on your database server.
      • MySQL installed on your database server. Follow “How To Install MySQL on Ubuntu 18.04” to set this up.
      • Optionally (but strongly recommended), TLS/SSL certificates from Let’s Encrypt installed on your web server. You’ll need to purchase a domain name and have DNS records set up for your server, but the certificates themselves are free. Our guide How To Secure Nginx with Let’s Encrypt on Ubuntu 18.04 will show you how to obtain these certificates.

      Step 1 — Configuring MySQL to Listen for Remote Connections

      Having one’s data stored on a separate server is a good way to expand gracefully after hitting the performance ceiling of a one-machine configuration. It also provides the basic structure necessary to load balance and expand your infrastructure even more at a later time. After installing MySQL by following the prerequisite tutorial, you’ll need to change some configuration values to allow connections from other computers.

      Most of the MySQL server’s configuration changes can be made in the mysqld.cnf file, which is stored in the /etc/mysql/mysql.conf.d/ directory by default. Open up this file with root privileges in your preferred editor. Here, we’ll use nano:

      • sudo nano /etc/mysql/mysql.conf.d/mysqld.cnf

      This file is divided into sections denoted by labels in square brackets ([ and ]). Find the section labeled mysqld:

      /etc/mysql/mysql.conf.d/mysqld.cnf

      . . .
      [mysqld]
      . . .
      

      Within this section, look for a parameter called bind-address. This tells the database software which network address to listen for connections on.

      By default, this is set to 127.0.0.1, meaning that MySQL is configured to only look for local connections. You need to change this to reference an external IP address where your server can be reached.

      If both of your servers are in a datacenter with private networking capabilities, use your database server’s private network IP. Otherwise, you can use its public IP address:

      /etc/mysql/mysql.conf.d/mysqld.cnf

      [mysqld]
      . . .
      bind-address = db_server_ip
      

      Because you’ll connect to your database over the internet, it’s recommended that you require encrypted connections to keep your data secure. If you don’t encrypt your MySQL connection, anybody on the network could sniff sensitive information between your web and database servers. To encrypt MySQL connections, add the following line after the bind-address line you just updated:

      /etc/mysql/mysql.conf.d/mysqld.cnf

      [mysqld]
      . . .
      require_secure_transport = on
      . . .
      

      Save and close the file when you are finished. If you’re using nano, do this by pressing CTRL+X, Y, and then ENTER.

      For SSL connections to work, you will need to create some keys and certificates. MySQL comes with a command that will automatically set these up. Run the following command, which creates the necessary files. It also makes them readable by the MySQL server by specifying the UID of the mysql user:

      • sudo mysql_ssl_rsa_setup --uid=mysql
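      If you'd like to confirm that the keys and certificates were generated, you can list them. This is an optional check, and it assumes MySQL's default data directory of /var/lib/mysql:

      • sudo find /var/lib/mysql/ -name '*.pem' -type f

      You should see files such as ca.pem, server-cert.pem, and server-key.pem in the results.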

      To force MySQL to update its configuration and read the new SSL information, restart the database:

      • sudo systemctl restart mysql

      To confirm that the server is now listening on the external interface, run the following netstat command:

      • sudo netstat -plunt | grep mysqld

      Output

      tcp 0 0 db_server_ip:3306 0.0.0.0:* LISTEN 27328/mysqld

      netstat prints statistics about your server’s networking system. This output shows us that a process called mysqld is attached to the db_server_ip at port 3306, the standard MySQL port, confirming that the server is listening on the appropriate interface.

      Next, open up that port on the firewall to allow traffic through:

      • sudo ufw allow mysql

      Those are all the configuration changes you need to make to MySQL. Next, we will go over how to set up a database and some user profiles, one of which you will use to access the server remotely.

      Step 2 — Setting Up a WordPress Database and Remote Credentials

      Even though MySQL itself is now listening on an external IP address, there are currently no remote-enabled users or databases configured. Let's create a database for WordPress, and a pair of users that can access it.

      Begin by connecting to MySQL as the root MySQL user:

      • sudo mysql

      Note: If you have password authentication enabled, as described in Step 3 of the prerequisite MySQL tutorial, you will instead need to use the following command to access the MySQL shell:

      • mysql -u root -p

      After running this command, you will be asked for your MySQL root password and, after entering it, you'll be given a new mysql> prompt.

      From the MySQL prompt, create a database that WordPress will use. It may be helpful to give this database a recognizable name so that you can easily identify it later on. Here, we will name it wordpress:

      • CREATE DATABASE wordpress;

      Now that you've created your database, you next need to create a pair of users. We will create a local-only user as well as a remote user tied to the web server’s IP address.

      First, create your local user, wordpressuser, and make this account only match local connection attempts by using localhost in the declaration:

      • CREATE USER 'wordpressuser'@'localhost' IDENTIFIED BY 'password';

      Then grant this account full access to the wordpress database:

      • GRANT ALL PRIVILEGES ON wordpress.* TO 'wordpressuser'@'localhost';

      This user can now do any operation on the database for WordPress, but this account cannot be used remotely, as it only matches connections from the local machine. With this in mind, create a companion account that will match connections exclusively from your web server. For this, you'll need your web server's IP address.

      Please note that you must use an IP address that utilizes the same network that you configured in your mysqld.cnf file. This means that if you specified a private networking IP in the mysqld.cnf file, you'll need to include the private IP of your web server in the following two commands. If you configured MySQL to use the public internet, you should match that with the web server's public IP address.

      • CREATE USER 'wordpressuser'@'web_server_ip' IDENTIFIED BY 'password';

      After creating your remote account, give it the same privileges as your local user:

      • GRANT ALL PRIVILEGES ON wordpress.* TO 'wordpressuser'@'web_server_ip';
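      If you'd like to confirm that both accounts exist and are tied to the hosts you expect, you can query MySQL's user table. This is an optional sanity check, not part of the setup itself:

      • SELECT user, host FROM mysql.user WHERE user = 'wordpressuser';

      You should see one row whose host is localhost and one whose host is your web server's IP address.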

      Lastly, flush the privileges so MySQL knows to begin using them:

      • FLUSH PRIVILEGES;

      Then exit the MySQL prompt by typing:

      • exit

      Now that you've set up a new database and a remote-enabled user, you can move on to testing whether you're able to connect to the database from your web server.

      Step 3 — Testing Remote and Local Connections

      Before continuing, it's best to verify that you can connect to your database from both the local machine — your database server — and from your web server with each of the wordpressuser accounts.

      First, test the local connection from your database server by attempting to log in with your new account:

      • mysql -u wordpressuser -p

      When prompted, enter the password that you set up for this account.

      If you are given a MySQL prompt, then the local connection was successful. You can exit out again by typing:

      • exit

      Next, log in to your web server to test remote connections.

      You'll need to install some client tools for MySQL on your web server in order to access the remote database. First, update your local package cache if you haven't done so recently:

      • sudo apt update

      Then install the MySQL client utilities:

      • sudo apt install mysql-client

      Following this, connect to your database server using the following syntax:

      • mysql -u wordpressuser -h db_server_ip -p

      Again, you must make sure that you are using the correct IP address for the database server. If you configured MySQL to listen on the private network, enter your database's private network IP. Otherwise, enter your database server's public IP address.
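      If the connection attempt hangs or is refused outright, it can help to first confirm that the MySQL port is reachable at all from the web server. One way to do this is a simple TCP probe with netcat, which is purely an optional troubleshooting step (install the netcat package if the command isn't available):

      • nc -zv db_server_ip 3306

      A message reporting a successful connection means the network path and firewall rule are fine, and any remaining issues are with MySQL's configuration or credentials.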

      You will be asked for the password for your wordpressuser account. After entering it, and if everything is working as expected, you will see the MySQL prompt. Verify that the connection is using SSL with the following command:

      • status

      If the connection is indeed using SSL, the SSL: line will indicate this, as shown here:

      Output

      --------------
      mysql  Ver 14.14 Distrib 5.7.18, for Linux (x86_64) using  EditLine wrapper

      Connection id:          52
      Current database:
      Current user:           wordpressuser@203.0.113.111
      SSL:                    Cipher in use is DHE-RSA-AES256-SHA
      Current pager:          stdout
      Using outfile:          ''
      Using delimiter:        ;
      Server version:         5.7.18-0ubuntu0.16.04.1 (Ubuntu)
      Protocol version:       10
      Connection:             203.0.113.111 via TCP/IP
      Server characterset:    latin1
      Db     characterset:    latin1
      Client characterset:    utf8
      Conn.  characterset:    utf8
      TCP port:               3306
      Uptime:                 3 hours 43 min 40 sec

      Threads: 1  Questions: 1858  Slow queries: 0  Opens: 276  Flush tables: 1  Open tables: 184  Queries per second avg: 0.138
      --------------

      After verifying that you can connect remotely, go ahead and exit the prompt:

      • exit

      With that, you've verified local access and access from the web server, but you have not verified that other connections will be refused. For an additional check, try doing the same thing from a third server for which you did not configure a specific user account in order to make sure that this other server is not granted access.

      Note that before running the following command to attempt the connection, you may have to install the MySQL client utilities as you did above:

      • mysql -u wordpressuser -h db_server_ip -p

      This should not complete successfully, and should throw back an error that looks similar to this:

      Output

      ERROR 1130 (HY000): Host '203.0.113.12' is not allowed to connect to this MySQL server

      This is expected, since you haven't created a MySQL user that's allowed to connect from this server, and also desired, since you want to be sure that your database server will deny unauthorized users access to your MySQL server.

      After successfully testing your remote connection, you can proceed to installing WordPress on your web server.

      Step 4 — Installing WordPress

      To demonstrate the capabilities of your new remote-capable MySQL server, we will go through the process of installing and configuring WordPress — the popular content management system — on your web server. This will require you to download and extract the software, configure your connection information, and then run through WordPress's web-based installation.

      On your web server, download the latest release of WordPress to your home directory:

      • cd ~
      • curl -O https://wordpress.org/latest.tar.gz

      Extract the files, which will create a directory called wordpress in your home directory:

      • tar xzvf latest.tar.gz

      WordPress includes a sample configuration file which we'll use as a starting point. Make a copy of this file, removing -sample from the filename so it will be loaded by WordPress:

      • cp ~/wordpress/wp-config-sample.php ~/wordpress/wp-config.php

      When you open the file, your first order of business will be to adjust some secret keys to provide more security to your installation. WordPress provides a secure generator for these values so that you do not have to try to come up with good values on your own. These are only used internally, so it won't hurt usability to have complex, secure values here.

      To grab secure values from the WordPress secret key generator, type:

      • curl -s https://api.wordpress.org/secret-key/1.1/salt/

      This will print some keys to your output. You will add these to your wp-config.php file momentarily:

      Warning! It is important that you request your own unique values each time. Do not copy the values shown here!

      Output

      define('AUTH_KEY',         'L4|2Yh(giOtMLHg3#] DO NOT COPY THESE VALUES %G00o|te^5YG@)');
      define('SECURE_AUTH_KEY',  'DCs-k+MwB90/-E(=!/ DO NOT COPY THESE VALUES +WBzDq:7U[#Wn9');
      define('LOGGED_IN_KEY',    '*0kP!|VS.K=;#fPMlO DO NOT COPY THESE VALUES +&[%8xF*,18c @');
      define('NONCE_KEY',        'fmFPF?UJi&(j-{8=$- DO NOT COPY THESE VALUES CCZ?Q+_~1ZU~;G');
      define('AUTH_SALT',        '@qA7f}2utTEFNdnbEa DO NOT COPY THESE VALUES t}Vw+8=K%20s=a');
      define('SECURE_AUTH_SALT', '%BW6s+d:7K?-`C%zw4 DO NOT COPY THESE VALUES 70U}PO1ejW+7|8');
      define('LOGGED_IN_SALT',   '-l>F:-dbcWof%4kKmj DO NOT COPY THESE VALUES 8Ypslin3~d|wLD');
      define('NONCE_SALT',       '4J(<`4&&F (WiK9K#] DO NOT COPY THESE VALUES ^ZikS`es#Fo:V6');

      Copy the output you received to your clipboard, then open the configuration file in your text editor:

      • nano ~/wordpress/wp-config.php

      Find the section that contains the dummy values for those settings. It will look something like this:

      /wordpress/wp-config.php

      . . .
      define('AUTH_KEY',         'put your unique phrase here');
      define('SECURE_AUTH_KEY',  'put your unique phrase here');
      define('LOGGED_IN_KEY',    'put your unique phrase here');
      define('NONCE_KEY',        'put your unique phrase here');
      define('AUTH_SALT',        'put your unique phrase here');
      define('SECURE_AUTH_SALT', 'put your unique phrase here');
      define('LOGGED_IN_SALT',   'put your unique phrase here');
      define('NONCE_SALT',       'put your unique phrase here');
      . . .
      

      Delete those lines and paste in the values you copied from the command line.

      Next, enter the connection information for your remote database. These configuration lines are at the top of the file, just above where you pasted in your keys. Remember to use the same IP address you used in your remote database test earlier:

      /wordpress/wp-config.php

      . . .
      /** The name of the database for WordPress */
      define('DB_NAME', 'wordpress');
      
      /** MySQL database username */
      define('DB_USER', 'wordpressuser');
      
      /** MySQL database password */
      define('DB_PASSWORD', 'password');
      
      /** MySQL hostname */
      define('DB_HOST', 'db_server_ip');
      . . .
      

      And finally, anywhere in the file, add the following line which tells WordPress to use an SSL connection to our MySQL database:

      /wordpress/wp-config.php

      define('MYSQL_CLIENT_FLAGS', MYSQLI_CLIENT_SSL);
      

      Save and close the file.

      Next, copy the files and directories found in your ~/wordpress directory to Nginx's document root. Note that this command includes the -a flag to make sure all the existing permissions are carried over:

      • sudo cp -a ~/wordpress/* /var/www/html

      After this, the only thing left to do is modify the file ownership. Change the ownership of all the files in the document root over to www-data, Ubuntu's default web server user:

      • sudo chown -R www-data:www-data /var/www/html

      With that, WordPress is installed and you're ready to run through its web-based setup routine.

      Step 5 — Setting Up WordPress Through the Web Interface

      WordPress has a web-based setup process. As you go through it, it will ask a few questions and install all the tables it needs in your database. Here, we will go over the initial steps of setting up WordPress, which you can use as a starting point for building your own custom website that uses a remote database backend.

      Navigate to the domain name (or public IP address) associated with your web server:

      http://example.com
      

      You will see a language selection screen for the WordPress installer. Select the appropriate language and click through to the main installation screen:

      WordPress install screen

      On the main installation screen, you'll be asked for a site title, an administrative username and password, and an email address. Once you have submitted your information, you will need to log into the WordPress admin interface using the account you just created. You will then be taken to a dashboard where you can customize your new WordPress site.

      Conclusion

      By following this tutorial, you've set up a MySQL database to accept SSL-protected connections from a remote WordPress installation. The commands and techniques used in this guide are applicable to any web application written in any programming language, but the specific implementation details will differ. Refer to your application or language's database documentation for more information.




      How To Provision and Manage Remote Docker Hosts with Docker Machine on Ubuntu 18.04


      Introduction

      Docker Machine is a tool that makes it easy to provision and manage multiple Docker hosts remotely from your personal computer. Such servers are commonly referred to as Dockerized hosts and are used to run Docker containers.

      While Docker Machine can be installed on a local or a remote system, the most common approach is to install it on your local computer (native installation or virtual machine) and use it to provision Dockerized remote servers.

      Though Docker Machine can be installed on most Linux distributions as well as macOS and Windows, in this tutorial, you’ll install it on your local machine running Ubuntu 18.04 and use it to provision Dockerized DigitalOcean Droplets. If you don’t have a local Ubuntu 18.04 machine, you can follow these instructions on any Ubuntu 18.04 server.

      Prerequisites

      To follow this tutorial, you will need the following:

      • A local machine or server running Ubuntu 18.04 with Docker installed. See How To Install and Use Docker on Ubuntu 18.04 for instructions.
      • A DigitalOcean API token. If you don’t have one, generate it using this guide. When you generate a token, be sure that it has read-write scope. That is the default, so if you do not change any options while generating it, it will have read-write capabilities.

      Step 1 — Installing Docker Machine

      In order to use Docker Machine, you must first install it locally. On Ubuntu, this means downloading a handful of scripts from the official Docker repository on GitHub.

      To download and install the Docker Machine binary, type:

      • wget https://github.com/docker/machine/releases/download/v0.15.0/docker-machine-$(uname -s)-$(uname -m)

      The name of the file should be docker-machine-Linux-x86_64. Rename it to docker-machine to make it easier to work with:

      • mv docker-machine-Linux-x86_64 docker-machine

      Make it executable:

      • chmod +x docker-machine

      Move or copy it to the /usr/local/bin directory so that it will be available as a system command:

      • sudo mv docker-machine /usr/local/bin

      Check the version, which will indicate that it's properly installed:

      • docker-machine version

      You'll see output similar to this, displaying the version number and build:

      Output

      docker-machine version 0.15.0, build b48dc28d

      Docker Machine is installed. Let's install some additional helper tools to make Docker Machine easier to work with.

      Step 2 — Installing Additional Docker Machine Scripts

      There are three Bash scripts in the Docker Machine GitHub repository you can install to make working with the docker and docker-machine commands easier. When installed, these scripts provide command completion and prompt customization.

      In this step, you'll install these three scripts into the /etc/bash_completion.d directory on your local machine by downloading them directly from the Docker Machine GitHub repository.

      Note: Before downloading and installing a script from the internet in a system-wide location, you should inspect the script's contents first by viewing the source URL in your browser.

      The first script allows you to see the active machine in your prompt. This comes in handy when you are working with and switching between multiple Dockerized machines. The script is called docker-machine-prompt.bash. Download it with the following command:

      • sudo wget https://raw.githubusercontent.com/docker/machine/master/contrib/completion/bash/docker-machine-prompt.bash -O /etc/bash_completion.d/docker-machine-prompt.bash

      To complete the installation of this file, you'll have to modify the value for the PS1 variable in your .bashrc file. The PS1 variable is a special shell variable used to modify the Bash command prompt. Open ~/.bashrc in your editor:

      • nano ~/.bashrc

      Within that file, there are three lines that begin with PS1. They should look just like these:

      ~/.bashrc

      PS1='${debian_chroot:+($debian_chroot)}\[\033[01;32m\]\u@\h\[\033[00m\]:\[\033[01;34m\]\w\[\033[00m\]\$ '

      ...

      PS1='${debian_chroot:+($debian_chroot)}\u@\h:\w\$ '

      ...

      PS1="\[\e]0;${debian_chroot:+($debian_chroot)}\u@\h: \w\a\]$PS1"
      

      For each line, insert $(__docker_machine_ps1 " [%s]") near the end, as shown in the following example:

      ~/.bashrc

      PS1='${debian_chroot:+($debian_chroot)}\[\033[01;32m\]\u@\h\[\033[00m\]:\[\033[01;34m\]\w\[\033[00m\]$(__docker_machine_ps1 " [%s]")\$ '

      ...

      PS1='${debian_chroot:+($debian_chroot)}\u@\h:\w$(__docker_machine_ps1 " [%s]")\$ '

      ...

      PS1="\[\e]0;${debian_chroot:+($debian_chroot)}\u@\h: \w\a\]$(__docker_machine_ps1 " [%s]")$PS1"
      

      Save and close the file.

      The second script is called docker-machine-wrapper.bash. It adds a use subcommand to the docker-machine command, making it significantly easier to switch between Docker hosts. To download it, type:

      • sudo wget https://raw.githubusercontent.com/docker/machine/master/contrib/completion/bash/docker-machine-wrapper.bash -O /etc/bash_completion.d/docker-machine-wrapper.bash

      The third script is called docker-machine.bash. It adds bash completion for docker-machine commands. Download it using:

      • sudo wget https://raw.githubusercontent.com/docker/machine/master/contrib/completion/bash/docker-machine.bash -O /etc/bash_completion.d/docker-machine.bash

      To apply the changes you've made so far, close, then reopen your terminal. If you're logged into the machine via SSH, exit the session and log in again, and you'll have command completion for the docker and docker-machine commands.

      Let's test things out by creating a new Docker host with Docker Machine.

      Step 3 — Provisioning a Dockerized Host Using Docker Machine

      Now that you have Docker and Docker Machine running on your local machine, you can provision a Dockerized Droplet on your DigitalOcean account using Docker Machine's docker-machine create command. If you've not done so already, assign your DigitalOcean API token to an environment variable:

      • export DOTOKEN=your-api-token

      NOTE: This tutorial uses DOTOKEN as the bash variable for the DO API token. The variable name does not have to be DOTOKEN, and it does not have to be in all caps.

      To make the variable permanent, put it in your ~/.bashrc file. This step is optional, but it is necessary if you want the value to persist across shell sessions.

      Open that file with nano:

      • nano ~/.bashrc

      Add this line to the file:

      ~/.bashrc

      export DOTOKEN=your-api-token
      

      To activate the variable in the current terminal session, type:

      • source ~/.bashrc

      To call the docker-machine create command successfully you must specify the driver you wish to use, as well as a machine name. The driver is the adapter for the infrastructure you're going to create. There are drivers for cloud infrastructure providers, as well as drivers for various virtualization platforms.

      We'll use the digitalocean driver. Depending on the driver you select, you'll need to provide additional options to create a machine. The digitalocean driver requires the API token (or the variable that evaluates to it) as its argument, along with the name for the machine you want to create.

      To create your first machine, type this command to create a DigitalOcean Droplet called docker-01:

      • docker-machine create --driver digitalocean --digitalocean-access-token $DOTOKEN docker-01

      You'll see this output as Docker Machine creates the Droplet:

      Output

      ...
      Installing Docker...
      Copying certs to the local machine directory...
      Copying certs to the remote machine...
      Setting Docker configuration on the remote daemon...
      Checking connection to Docker...
      Docker is up and running!
      To see how to connect your Docker Client to the Docker Engine running on this virtual machine, run: docker-machine env docker-01

      Docker Machine creates an SSH key pair for the new host so it can access the server remotely. The Droplet is provisioned with an operating system and Docker is installed. When the command is complete, your Docker Droplet is up and running.

      To see the newly-created machine from the command line, type:

      • docker-machine ls

      The output will be similar to this, indicating that the new Docker host is running:

      Output

      NAME        ACTIVE   DRIVER         STATE     URL                         SWARM   DOCKER        ERRORS
      docker-01   -        digitalocean   Running   tcp://209.97.155.178:2376           v18.06.1-ce

      Now let's look at how to specify the operating system when we create a machine.

      Step 4 — Specifying the Base OS and Droplet Options When Creating a Dockerized Host

      By default, the base operating system used when creating a Dockerized host with Docker Machine is supposed to be the latest Ubuntu LTS. However, at the time of this publication, the docker-machine create command is still using Ubuntu 16.04 LTS as the base operating system, even though Ubuntu 18.04 is the latest LTS edition. So if you need to run Ubuntu 18.04 on a recently-provisioned machine, you'll have to specify Ubuntu along with the desired version by passing the --digitalocean-image flag to the docker-machine create command.

      For example, to create a machine using Ubuntu 18.04, type:

      • docker-machine create --driver digitalocean --digitalocean-image ubuntu-18-04-x64 --digitalocean-access-token $DOTOKEN docker-ubuntu-1804

      You're not limited to a version of Ubuntu. You can create a machine using any operating system supported on DigitalOcean. For example, to create a machine using Debian 8, type:

      • docker-machine create --driver digitalocean --digitalocean-image debian-8-x64 --digitalocean-access-token $DOTOKEN docker-debian

      To provision a Dockerized host using CentOS 7 as the base OS, specify centos-7-0-x64 as the image name, like so:

      • docker-machine create --driver digitalocean --digitalocean-image centos-7-0-x64 --digitalocean-access-token $DOTOKEN docker-centos7

      The base operating system is not the only choice you have. You can also specify the size of the Droplet. By default, it is the smallest Droplet, which has 1 GB of RAM, a single CPU, and a 25 GB SSD.

      Find the size of the Droplet you want to use by looking up the corresponding slug in the DigitalOcean API documentation.

      For example, to provision a machine with 2 GB of RAM, two CPUs, and a 60 GB SSD, use the slug s-2vcpu-2gb:

      • docker-machine create --driver digitalocean --digitalocean-size s-2vcpu-2gb --digitalocean-access-token $DOTOKEN docker-03

      To see all the flags specific to creating a Docker Machine using the DigitalOcean driver, type:

      • docker-machine create --driver digitalocean -h

      Tip: If you refresh the Droplet page of your DigitalOcean dashboard, you will see the new machines you created using the docker-machine command.

      Now let's explore some of the other Docker Machine commands.

      Step 5 — Executing Additional Docker Machine Commands

      You've seen how to provision a Dockerized host using the create subcommand, and how to list the hosts available to Docker Machine using the ls subcommand. In this step, you'll learn a few more useful subcommands.

      To obtain detailed information about a Dockerized host, use the inspect subcommand, like so:

      • docker-machine inspect docker-01

      The output includes lines like the ones in the following output. The Image line reveals the version of the Linux distribution used and the Size line indicates the size slug:

      Output

      ... { "ConfigVersion": 3, "Driver": { "IPAddress": "203.0.113.71", "MachineName": "docker-01", "SSHUser": "root", "SSHPort": 22, ... "Image": "ubuntu-16-04-x64", "Size": "s-1vcpu-1gb", ... }, ---

      To print the connection configuration for a host, type:

      • docker-machine config docker-01

      The output will be similar to this:

      Output

      --tlsverify
      --tlscacert="/home/kamit/.docker/machine/certs/ca.pem"
      --tlscert="/home/kamit/.docker/machine/certs/cert.pem"
      --tlskey="/home/kamit/.docker/machine/certs/key.pem"
      -H=tcp://203.0.113.71:2376

      The last line in the output of the docker-machine config command reveals the IP address of the host, but you can also get that piece of information by typing:

      • docker-machine ip docker-01

      If you need to power down a remote host, you can use docker-machine to stop it:

      • docker-machine stop docker-01

      Verify that it is stopped:

      • docker-machine ls

      The output shows that the status of the machine has changed:

      Output

      NAME        ACTIVE   DRIVER         STATE     URL   SWARM   DOCKER    ERRORS
      docker-01   -        digitalocean   Stopped                 Unknown

      To start it again, use the start subcommand:

      • docker-machine start docker-01

      Then review its status again:

      • docker-machine ls

      You will see that the STATE is now set to Running for the host:

      Output

      NAME        ACTIVE   DRIVER         STATE     URL                       SWARM   DOCKER        ERRORS
      docker-01   -        digitalocean   Running   tcp://203.0.113.71:2376           v18.06.1-ce

      Next let's look at how to interact with the remote host using SSH.

      Step 6 — Executing Commands on a Dockerized Host via SSH

      At this point, you've been getting information about your machines, but you can do more than that. For example, you can execute native Linux commands on a Docker host by using the ssh subcommand of docker-machine from your local system. This section explains how to perform ssh commands via docker-machine as well as how to open an SSH session to a Dockerized host.

      Assuming that you've provisioned a machine with Ubuntu as the operating system, execute the following command from your local system to update the package database on the Docker host:

      • docker-machine ssh docker-01 apt-get update

      You can even apply available updates using:

      • docker-machine ssh docker-01 apt-get upgrade

      Not sure what kernel your remote Docker host is using? Type the following:

      • docker-machine ssh docker-01 uname -r

      Finally, you can log in to the remote host with the docker-machine ssh command:

      docker-machine ssh docker-01
      

      You'll be logged in as the root user and you'll see something similar to the following:

      Welcome to Ubuntu 16.04.5 LTS (GNU/Linux 4.4.0-131-generic x86_64)
      
       * Documentation:  https://help.ubuntu.com
       * Management:     https://landscape.canonical.com
       * Support:        https://ubuntu.com/advantage
      
        Get cloud support with Ubuntu Advantage Cloud Guest:
          http://www.ubuntu.com/business/services/cloud
      
      14 packages can be updated.
      10 updates are security updates.
      

      Log out by typing exit to return to your local machine.

      Next, we'll direct Docker's commands at our remote host.

      Step 7 — Activating a Dockerized Host

      Activating a Docker host connects your local Docker client to that system, which makes it possible to run normal docker commands on the remote system.

      First, use Docker Machine to create a new Docker host called docker-ubuntu using Ubuntu 18.04:

      • docker-machine create --driver digitalocean --digitalocean-image ubuntu-18-04-x64 --digitalocean-access-token $DOTOKEN docker-ubuntu

      To activate a Docker host, type the following command:

      • eval $(docker-machine env machine-name)
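      Behind the scenes, docker-machine env prints a set of export statements that point your Docker client at the remote host, and eval applies them to your current shell. If you run the command on its own for the docker-01 host created earlier, the output will look roughly like the following (the address and paths will differ on your system):

      export DOCKER_TLS_VERIFY="1"
      export DOCKER_HOST="tcp://203.0.113.71:2376"
      export DOCKER_CERT_PATH="/home/kamit/.docker/machine/machines/docker-01"
      export DOCKER_MACHINE_NAME="docker-01"
      # Run this command to configure your shell:
      # eval $(docker-machine env docker-01)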

      Alternatively, you can activate it by using this command:

      • docker-machine use machine-name

      Tip: When working with multiple Docker hosts, the docker-machine use command is the easiest method of switching from one to the other.

      After typing either of these commands, your prompt will change to indicate that your Docker client is pointing to the remote Docker host. It will take the following form, with the name of the host at the end of the prompt:

      username@localmachine:~ [docker-01]$
      

      Now any docker command you type at this command prompt will be executed on that remote host.

      Execute docker-machine ls again:

      • docker-machine ls

      You'll see an asterisk under the ACTIVE column for docker-01:

      Output

      NAME        ACTIVE   DRIVER         STATE     URL                       SWARM   DOCKER        ERRORS
      docker-01   *        digitalocean   Running   tcp://203.0.113.71:2376           v18.06.1-ce

      To exit from the remote Docker host, type the following:

      • docker-machine use -u

      Your prompt will no longer show the active host.

      Now let's create containers on the remote machine.

      Step 8 — Creating Docker Containers on a Remote Dockerized Host

      So far, you have provisioned a Dockerized Droplet on your DigitalOcean account and you've activated it — that is, your Docker client is pointing to it. The next logical step is to spin up containers on it. As an example, let's try running the official Nginx container.

      Use docker-machine use to select your remote machine:

      • docker-machine use docker-01

      Now execute this command to run an Nginx container on that machine:

      • docker run -d -p 8080:80 --name httpserver nginx

      In this command, we're mapping port 80 in the Nginx container to port 8080 on the Dockerized host so that we can access the default Nginx page from anywhere.

      Once the container is running, you will be able to access the default Nginx page by pointing your web browser to http://docker_machine_ip:8080.
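      If you'd rather check this from the command line than a browser, you can combine the ip subcommand shown in Step 5 with curl. This is just a quick verification; install curl first if it isn't already present on your local machine:

      • curl -I http://$(docker-machine ip docker-01):8080

      A response that begins with HTTP/1.1 200 OK indicates that Nginx is serving the default page.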

      While the Docker host is still activated (as seen by its name in the prompt), you can list the images on that host:

      • docker images

      The output includes the Nginx image you just used:

      Output

      REPOSITORY   TAG      IMAGE ID       CREATED       SIZE
      nginx        latest   71c43202b8ac   3 hours ago   109MB

      You can also list the active or running containers on the host:

      • docker ps

      If the Nginx container you ran in this step is the only active container, the output will look like this:

      Output

      CONTAINER ID   IMAGE   COMMAND                  CREATED              STATUS              PORTS                  NAMES
      d3064c237372   nginx   "nginx -g 'daemon of…"   About a minute ago   Up About a minute   0.0.0.0:8080->80/tcp   httpserver

      If you intend to create containers on a remote machine, your Docker client must be pointing to it — that is, it must be the active machine in the terminal that you're using. Otherwise you'll be creating the container on your local machine. Again, let your command prompt be your guide.
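      If you'd rather not rely on the prompt alone, docker-machine also provides an active subcommand that prints the name of the currently active machine (or an error if no machine is active):

      • docker-machine active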

      Docker Machine can create and manage remote hosts, and it can also remove them.

      Step 9 — Removing Docker Hosts

      You can use Docker Machine to remove a Docker host you've created. Use the docker-machine rm command to remove the docker-01 host you created:

      • docker-machine rm docker-01

      The Droplet is deleted along with the SSH key created for it. List the hosts again:

      • docker-machine ls

      This time, you won't see the docker-01 host listed in the output. And if you've only created one host, you won't see any output at all.

      Be sure to execute the command docker-machine use -u to point your local Docker daemon back to your local machine.

      Step 10 — Disabling Crash Reporting (Optional)

      By default, whenever an attempt to provision a Dockerized host using Docker Machine fails, or Docker Machine crashes, some diagnostic information is sent to a Docker account on Bugsnag. If you're not comfortable with this, you can disable the reporting by creating an empty file called no-error-report in your local computer's .docker/machine directory.

      To create the file, type:

      • touch ~/.docker/machine/no-error-report

      With this file in place, that diagnostic information will stay on your local machine rather than being uploaded to Bugsnag if provisioning fails or Docker Machine crashes.

      Conclusion

      You've installed Docker Machine and used it to provision multiple Docker hosts on DigitalOcean remotely from your local system. From here you should be able to provision as many Dockerized hosts on your DigitalOcean account as you need.

      For more on Docker Machine, visit the official documentation page. The three Bash scripts downloaded in this tutorial are hosted on this GitHub page.




      How to Use Node.js and Github Webhooks to Keep Remote Projects in Sync


      Introduction

      When working on a project with multiple developers, it can be frustrating when one person pushes to a repository and then another begins making changes on an outdated version of the code. Mistakes like these cost time, which makes it worthwhile to set up a script to keep your repositories in sync. You can also apply this method in a production environment to push hotfixes and other changes quickly.

      While other solutions exist to complete this specific task, writing your own script is a flexible option that leaves room for customization in the future.

      GitHub lets you configure webhooks for your repositories, which send HTTP requests to a URL you specify whenever certain events happen in the repository. For example, you can use a webhook to notify you when someone creates a pull request or pushes new code.

      In this guide you will develop a Node.js server that listens for a GitHub webhook notification whenever you or someone else pushes code to GitHub. This script will automatically update a repository on a remote server with the most recent version of the code, eliminating the need to log in to a server to pull new commits.

      Prerequisites

      To complete this tutorial, you will need:

      • One Ubuntu 16.04 server set up by following the Ubuntu 16.04 initial server setup guide, including a non-root user with sudo privileges and a firewall.
      • Git installed on your local machine. You can follow the tutorial Contributing to Open Source: Getting Started with Git to install and set up Git on your computer.
      • Node.js and npm installed on the remote server using the official PPA, as explained in How To Install Node.js on Ubuntu 16.04. Installing the distro-stable version is sufficient as it provides us with the recommended version without any additional configuration.
      • A repository on Github that contains your project code. If you don’t have a project in mind, feel free to fork this example which we’ll use in the rest of the tutorial.

      Step 1 — Setting Up a Webhook

      We’ll start by configuring a webhook for your repository. This step is important because without it, Github doesn’t know what events to send when things happen, or where to send them. We’ll create the webhook first, and then create the server that will respond to its requests.

      Sign in to your GitHub account and navigate to the repository you wish to monitor. Click on the Settings tab in the top menu bar on your repository’s page, then click Webhooks in the left navigation menu. Click Add Webhook in the right corner and enter your account password if prompted. You’ll see a page that looks like this:

      Webhooks Page

      • In the Payload URL field, enter http://your_server_ip:8080. This is the address and port of the Node.js server we’ll write soon.
      • Change the Content type to application/json. The script we will write will expect JSON data and won’t be able to understand other data types.
      • For Secret, enter a secret password for this webhook. You’ll use this secret in your Node.js server to validate requests and make sure they came from GitHub.
      • For Which events would you like to trigger this webhook, select just the push event. We only need the push event since that is when code is updated and needs to be synced to our server.
      • Select the Active checkbox.
      • Review the fields and click Add webhook to create it.

      The ping will fail at first, but rest assured your webhook is now configured. Now let’s get the repository cloned to the server.

      Step 2 — Cloning the Repository to the Server

      Our script can update a repository, but it cannot handle setting up the repository initially, so we'll do that now. Log in to your server:

      • ssh sammy@your_server_ip

      Ensure you're in your home directory. Then use Git to clone your repository. Be sure to replace sammy with your GitHub username and hello_hapi with the name of your Github project.

      • cd
      • git clone https://github.com/sammy/hello_hapi.git

      This will create a new directory containing your project. You'll use this directory in the next step.

      With your project cloned, you can create the webhook script.

      Step 3 — Creating the Webhook Script

      Let's create our server to listen for those webhook requests from GitHub. We'll write a Node.js script that launches a web server on port 8080. The server will listen for requests from the webhook, verify the secret we specified, and pull the latest version of the code from GitHub.

      Navigate to your home directory:

      • cd ~

      Create a new directory for your webhook script called NodeWebhooks:

      • mkdir ~/NodeWebhooks

      Then navigate to the new directory:

      • cd ~/NodeWebhooks

      Create a new file called webhook.js inside the NodeWebhooks directory and open it in your editor:

      • nano webhook.js

      Add these two lines to the script:

      webhook.js

      var secret = "your_secret_here";
      var repo = "/home/sammy/hello_hapi";
      

      The first line defines a variable to hold the secret you created in Step 1 which verifies that requests come from GitHub. The second line defines a variable that holds the full path to the repository you want to update on your local disk. This should point to the repository you checked out in Step 2.

      Next, add these lines, which import the http and crypto libraries into the script. We'll use these to create our web server and hash the secret so we can compare it with what we receive from GitHub:

      webhook.js

      let http = require('http');
      let crypto = require('crypto');
      

      Next, include the child_process library so you can execute shell commands from your script:

      webhook.js

      const exec = require('child_process').exec;
      

      Next, add this code to define a new web server that handles GitHub webhook requests and pulls down the new version of the code if it's an authentic request:

      webhook.js

      http.createServer(function (req, res) {
          req.on('data', function(chunk) {
              let sig = "sha1=" + crypto.createHmac('sha1', secret).update(chunk.toString()).digest('hex');
      
              if (req.headers['x-hub-signature'] == sig) {
                  exec('cd ' + repo + ' && git pull');
              }
          });
      
          res.end();
      }).listen(8080);
      

      The http.createServer() function starts a web server on port 8080 which listens for incoming requests from GitHub. For security purposes, we validate that each request really came from GitHub: GitHub computes an HMAC-SHA1 hash of the request body, using the secret you specified when creating the webhook in Step 1 as the key, and sends it in the x-hub-signature header. The script computes the same hash over the body it receives and compares the two.

      If the request is authentic, we execute a shell command to update our local repository using git pull.

      The completed script looks like this:

      webhook.js

      const secret = "your_secret_here";
      const repo = "~/your_repo_path_here/";
      
      const http = require('http');
      const crypto = require('crypto');
      const exec = require('child_process').exec;
      
      http.createServer(function (req, res) {
          req.on('data', function(chunk) {
              let sig = "sha1=" + crypto.createHmac('sha1', secret).update(chunk.toString()).digest('hex');
      
              if (req.headers['x-hub-signature'] == sig) {
                  exec('cd ' + repo + ' && git pull');
              }
          });
      
          res.end();
      }).listen(8080);
      

      If you followed the initial server setup guide, you will need to allow this web server to communicate with the outside web by allowing traffic on port 8080:

      • sudo ufw allow 8080

      Now that our script is in place, let's make sure that it is working properly.

      Step 4 — Testing the Webhook

      We can test our webhook by using node to run it in the command line. Start the script and leave the process open in your terminal:

      • cd ~/NodeWebhooks
      • nodejs webhook.js

      Return to your project's page on Github.com. Click on the Settings tab in the top menu bar on your repository's page, followed by clicking Webhooks in the left navigation menu. Click Edit next to the webhook you set up in Step 1. Scroll down until you see the Recent Deliveries section, as shown in the following image:

      Edit Webhook

      Press the three dots to the far right to reveal the Redeliver button. With the node server running, click Redeliver to send the request again. Once you confirm you want to send the request, you'll see a successful response. This is indicated by a 200 OK response code after redelivering the ping.
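      If you'd also like to exercise the signature check without involving GitHub, you can sign a dummy payload with the same secret and send it to the server yourself from a second terminal on the server. This is a rough local test: it assumes the script is still running and that your_secret_here matches the value in webhook.js, and note that a valid signature will cause the script to run git pull:

      • PAYLOAD='{"zen":"test delivery"}'
      • SIG="sha1=$(printf '%s' "$PAYLOAD" | openssl dgst -sha1 -hmac 'your_secret_here' | awk '{print $2}')"
      • curl -s -X POST -H "Content-Type: application/json" -H "X-Hub-Signature: $SIG" -d "$PAYLOAD" http://localhost:8080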

      Press CTRL+C to stop the node webhook server. We can now move on to making sure the script runs in the background and starts at boot.

      Step 5 — Installing the Webhook as a Systemd Service

      systemd is the init system and service manager that Ubuntu uses to control services. We will set up a service that will allow us to start our webhook script at boot and use systemd commands to manage it like we would with any other service.

      Start by creating a new service file:

      • sudo nano /etc/systemd/system/webhook.service

      Add the following configuration to the service file. It describes the service and tells systemd where to find the node script, which user to run it as, and when to start it.

      Make sure to replace sammy with your username.

      /etc/systemd/system/webhook.service

      [Unit]
      Description=Github webhook
      After=network.target
      
      [Service]
      Environment=NODE_PORT=8080
      Type=simple
      User=sammy
      ExecStart=/usr/bin/nodejs /home/sammy/NodeWebhooks/webhook.js
      Restart=on-failure
      
      [Install]
      WantedBy=multi-user.target
      

      Enable the new service so it starts when the system boots:

      • sudo systemctl enable webhook.service

      Now start the service:

      • sudo systemctl start webhook

      Ensure the service is started:

      • sudo systemctl status webhook

      You'll see the following output indicating that the service is active:

      Output

      ● webhook.service - Github webhook
         Loaded: loaded (/etc/systemd/system/webhook.service; enabled; vendor preset: enabled)
         Active: active (running) since Fri 2018-08-17 19:28:41 UTC; 6s ago
       Main PID: 9912 (nodejs)
          Tasks: 6
         Memory: 7.6M
            CPU: 95ms
         CGroup: /system.slice/webhook.service
                 └─9912 /usr/bin/nodejs /home/sammy/NodeWebhooks/webhook.js
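      If the status ever shows the service as failed, you can usually find the script's recent output in the systemd journal (press CTRL+C to stop following the log):

      • sudo journalctl -u webhook -f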

      You are now able to push new commits to your repository and see the changes on your server.

      From your desktop machine, clone the repository:

      • git clone https://github.com/sammy/hello_hapi.git

      Make a change to one of the files in the repository. Then commit the file and push your code to GitHub.

      • git add index.js
      • git commit -m "Update index file"
      • git push origin master

      The webhook will fire and your changes will appear on your server.
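      To confirm that the pull actually happened, you can check the most recent commit in the server's copy of the repository. This assumes the clone location from Step 2 and the sammy user used throughout this guide:

      • ssh sammy@your_server_ip "cd ~/hello_hapi && git log -1 --oneline"

      The commit hash and message should match the commit you just pushed.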

      Conclusion

      You have set up a Node.js script that automatically pulls new commits to a repository on a remote server whenever you push to GitHub. You can use this process to set up additional repositories that you'd like to monitor. You could even configure it to deploy a website or application to production when you push your repository.


