
      How To Configure a Galera Cluster with MariaDB on Ubuntu 18.04 Servers


      The author selected the Free and Open Source Fund to receive a donation as part of the Write for DOnations program.

      Introduction

      Clustering adds high availability to your database by distributing changes to different servers. In the event that one of the instances fails, others are quickly available to continue serving.

      Clusters come in two general configurations, active-passive and active-active. In active-passive clusters, all writes are done on a single active server and then copied to one or more passive servers that are poised to take over only in the event of an active server failure. Some active-passive clusters also allow SELECT operations on passive nodes. In an active-active cluster, every node is read-write and a change made to one is replicated to all.

      MariaDB is an open source relational database system that is fully compatible with the popular MySQL RDBMS. You can read the official documentation for MariaDB at this page. Galera is a database clustering solution that enables you to set up multi-master clusters using synchronous replication. Galera automatically handles keeping the data on different nodes in sync while allowing you to send read and write queries to any of the nodes in the cluster. You can learn more about Galera at the official documentation page.

      In this guide, you will configure an active-active MariaDB Galera cluster. For demonstration purposes, you will configure and test three Ubuntu 18.04 Droplets that will act as nodes in the cluster. This is the smallest configurable cluster.

      Prerequisites

      To follow along, you will need a DigitalOcean account, in addition to the following:

      • Three Ubuntu 18.04 Droplets with private networking enabled, each with a non-root user with sudo privileges.

      While the steps in this tutorial have been written for and tested against DigitalOcean Droplets, many of them should also be applicable to non-DigitalOcean servers with private networking enabled.

      Step 1 — Adding the MariaDB Repositories to All Servers

      In this step, you will add the relevant MariaDB package repositories to each of your three servers so that you will be able to install the right version of MariaDB used in this tutorial. Once the repositories are updated on all three servers, you will be ready to install MariaDB.

      One thing to note about MariaDB is that it originated as a drop-in replacement for MySQL, so in many configuration files and startup scripts, you’ll see mysql rather than mariadb. For consistency’s sake, we will use mysql in this guide where either could work.

      In this tutorial, you will use MariaDB version 10.4. Since this version isn’t included in the default Ubuntu repositories, you’ll start by adding the external Ubuntu repository maintained by the MariaDB project to all three of your servers.

      Note: MariaDB is a well-respected provider, but not all external repositories are reliable. Be sure to install only from trusted sources.

      First, you’ll add the MariaDB repository key with the apt-key command, which the APT package manager will use to verify that the package is authentic:

      • sudo apt-key adv --recv-keys --keyserver hkp://keyserver.ubuntu.com:80 0xF1656F24C74CD1D8

      Once you have the trusted key in the database, you can add the repository with the following command:

      • sudo add-apt-repository 'deb [arch=amd64] http://nyc2.mirrors.digitalocean.com/mariadb/repo/10.4/ubuntu bionic main'

      After adding the repository, run apt update in order to include package manifests from the new repository:

      • sudo apt update

      Once you have completed this step on your first server, repeat for your second and third servers.

      Now that you have successfully added the package repository on all three of your servers, you're ready to install MariaDB in the next section.

      Step 2 — Installing MariaDB on All Servers

      In this step, you will install the actual MariaDB packages on your three servers.

      Beginning with version 10.1, the MariaDB Server and MariaDB Galera Server packages are combined, so installing mariadb-server will automatically install Galera and several dependencies:

      • sudo apt install mariadb-server

      You will be asked to confirm whether you would like to proceed with the installation; enter yes to continue.

      From MariaDB version 10.4 onwards, the root MariaDB user does not have a password by default. To set a password for the root user, start by logging into MariaDB:

      • sudo mysql -u root

      Once you're inside the MariaDB shell, change the password by executing the following statement:

      • set password = password("your_password");

      You will see the following output indicating that the password was set correctly:

      Output

      Query OK, 0 rows affected (0.001 sec)

      Exit the MariaDB shell by running the following command:

      • exit

      If you would like to learn more about SQL or need a quick refresher, check out our MySQL tutorial.

      You now have all of the pieces necessary to begin configuring the cluster, but since you'll be relying on rsync in later steps, make sure it's installed:

      • sudo apt install rsync

      This will confirm that the newest version of rsync is already available or prompt you to upgrade or install it.

      Once you have installed MariaDB and set the root password on your first server, repeat these steps for your other two servers.

      Now that you have installed MariaDB successfully on each of the three servers, you can proceed to the configuration step in the next section.

      Step 3 — Configuring the First Node

      In this step you will configure your first node. Each node in the cluster needs to have a nearly identical configuration. Because of this, you will do all of the configuration on your first machine, and then copy it to the other nodes.

      By default, MariaDB is configured to check the /etc/mysql/conf.d directory to get additional configuration settings from files ending in .cnf. Create a file in this directory with all of your cluster-specific directives:

      • sudo nano /etc/mysql/conf.d/galera.cnf

      Add the following configuration into the file. The configuration specifies different cluster options, details about the current server and the other servers in the cluster, and replication-related settings. Note that the IP addresses in the configuration are the private addresses of your respective servers; replace the highlighted lines with the appropriate IP addresses.

      /etc/mysql/conf.d/galera.cnf

      [mysqld]
      binlog_format=ROW
      default-storage-engine=innodb
      innodb_autoinc_lock_mode=2
      bind-address=0.0.0.0
      
      # Galera Provider Configuration
      wsrep_on=ON
      wsrep_provider=/usr/lib/galera/libgalera_smm.so
      
      # Galera Cluster Configuration
      wsrep_cluster_name="test_cluster"
      wsrep_cluster_address="gcomm://First_Node_IP,Second_Node_IP,Third_Node_IP"
      
      # Galera Synchronization Configuration
      wsrep_sst_method=rsync
      
      # Galera Node Configuration
      wsrep_node_address="This_Node_IP"
      wsrep_node_name="This_Node_Name"
      
      • The first section modifies or re-asserts MariaDB/MySQL settings that will allow the cluster to function correctly. For example, Galera won’t work with MyISAM or similar non-transactional storage engines, and mysqld must not be bound to the IP address for localhost. You can learn about the settings in more detail on the Galera Cluster system configuration page.
      • The "Galera Provider Configuration" section configures the MariaDB components that provide a WriteSet replication API. This means Galera in your case, since Galera is a wsrep (WriteSet Replication) provider. You specify the general parameters to configure the initial replication environment. This doesn't require any customization, but you can learn more about Galera configuration options.
      • The "Galera Cluster Configuration" section defines the cluster, identifying the cluster members by IP address or resolvable domain name and creating a name for the cluster to ensure that members join the correct group. You can change the wsrep_cluster_name to something more meaningful than test_cluster or leave it as-is, but you must update wsrep_cluster_address with the private IP addresses of your three servers.
      • The "Galera Synchronization Configuration" section defines how the cluster will communicate and synchronize data between members. This is used only for the state transfer that happens when a node comes online. For your initial setup, you are using rsync, because it's commonly available and does what you'll need for now.
      • The "Galera Node Configuration" section clarifies the IP address and the name of the current server. This is helpful when trying to diagnose problems in logs and for referencing each server in multiple ways. The wsrep_node_address must match the address of the machine you're on, but you can choose any name you want in order to help you identify the node in log files.

      When you are satisfied with your cluster configuration file, copy the contents into your clipboard, save and close the file. With the nano text editor, you can do this by pressing CTRL+X, typing y, and pressing ENTER.

      Now that you have configured your first node successfully, you can move on to configuring the remaining nodes in the next section.

      Step 4 — Configuring the Remaining Nodes

      In this step, you will configure the remaining two nodes. On your second node, open the configuration file:

      • sudo nano /etc/mysql/conf.d/galera.cnf

      Paste in the configuration you copied from the first node, then update the Galera Node Configuration to use the IP address or resolvable domain name for the specific node you're setting up. Finally, update its name, which you can set to whatever helps you identify the node in your log files:

      /etc/mysql/conf.d/galera.cnf

      . . .
      # Galera Node Configuration
      wsrep_node_address="This_Node_IP"
      wsrep_node_name="This_Node_Name"
      . . .
      

      Save and exit the file.

      Once you have completed these steps, repeat them on the third node.

      You're almost ready to bring up the cluster, but before you do, make sure that the appropriate ports are open in your firewall.

      Step 5 — Opening the Firewall on Every Server

      In this step, you will configure your firewall so that the ports required for inter-node communication are open. On every server, check the status of the firewall by running:

      • sudo ufw status

      In this case, only SSH is allowed through:

      Output

      Status: active

      To                         Action      From
      --                         ------      ----
      OpenSSH                    ALLOW       Anywhere
      OpenSSH (v6)               ALLOW       Anywhere (v6)

      Since only SSH traffic is permitted in this case, you’ll need to add rules for MySQL and Galera traffic. If you tried to start the cluster, it would fail because of firewall rules.

      Galera can make use of four ports:

      • 3306: For MySQL client connections and State Snapshot Transfers that use the mysqldump method.
      • 4567: For Galera Cluster replication traffic. Multicast replication uses both UDP transport and TCP on this port.
      • 4568: For Incremental State Transfers.
      • 4444: For all other State Snapshot Transfers.

      In this example, you’ll open all four ports while you do your setup. Once you've confirmed that replication is working, you should close any ports you're not actually using and restrict traffic to just the servers in the cluster.

      Open the ports with the following command:

      • sudo ufw allow 3306,4567,4568,4444/tcp
      • sudo ufw allow 4567/udp

      Note: Depending on what else is running on your servers you might want to restrict access right away. The UFW Essentials: Common Firewall Rules and Commands guide can help with this.
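      As an illustration, once replication is confirmed, you could remove the broad rules and allow these ports only from the other cluster members. The following is only a sketch, assuming 10.0.0.2 and 10.0.0.3 are the private IP addresses of the other two nodes; adjust the addresses on each server accordingly:

      • sudo ufw delete allow 3306,4567,4568,4444/tcp
      • sudo ufw delete allow 4567/udp
      • sudo ufw allow from 10.0.0.2 to any port 3306,4567,4568,4444 proto tcp
      • sudo ufw allow from 10.0.0.2 to any port 4567 proto udp
      • sudo ufw allow from 10.0.0.3 to any port 3306,4567,4568,4444 proto tcp
      • sudo ufw allow from 10.0.0.3 to any port 4567 proto udp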

      After you have configured your firewall on the first node, create the same firewall settings on the second and third node.

      Now that you have configured the firewalls successfully, you're ready to start the cluster in the next step.

      Step 6 — Starting the Cluster

      In this step, you will start your MariaDB cluster. To begin, you need to stop the running MariaDB service so that you can bring your cluster online.

      Stop MariaDB on All Three Servers

      Use the following command on all three servers to stop MariaDB so that you can bring them back up in a cluster:

      • sudo systemctl stop mysql

      systemctl doesn't display the outcome of all service management commands, so to be sure you succeeded, use the following command:

      • sudo systemctl status mysql

      If the last line looks something like the following, the command was successful:

      Output

      . . .
      Apr 26 03:34:23 galera-node-01 systemd[1]: Stopped MariaDB 10.4.4 database server.

      Once you've shut down mysql on all of the servers, you're ready to proceed.

      Bring Up the First Node

      To bring up the first node, you'll need to use a special startup script. The way you've configured your cluster, each node that comes online tries to connect to at least one other node specified in its galera.cnf file to get its initial state. Without the galera_new_cluster script, which allows systemd to pass the --wsrep-new-cluster parameter, a normal systemctl start mysql would fail because there are no nodes running for the first node to connect with.

      Run the script on your first node:

      • sudo galera_new_cluster

      This command will not display any output on successful execution. When this script succeeds, the node is registered as part of the cluster, and you can see it with the following command:

      • mysql -u root -p -e "SHOW STATUS LIKE 'wsrep_cluster_size'"

      You will see the following output indicating that there is one node in the cluster:

      Output

      +--------------------+-------+
      | Variable_name      | Value |
      +--------------------+-------+
      | wsrep_cluster_size | 1     |
      +--------------------+-------+

      On the remaining nodes, you can start mysql normally. They will search for any member of the cluster list that is online, so when they find one, they will join the cluster.

      Bring Up the Second Node

      Now you can bring up the second node. Start mysql:

      • sudo systemctl start mysql

      No output will be displayed on successful execution. You will see your cluster size increase as each node comes online:

      • mysql -u root -p -e "SHOW STATUS LIKE 'wsrep_cluster_size'"

      You will see the following output indicating that the second node has joined the cluster and that there are two nodes in total.

      Output

      +--------------------+-------+
      | Variable_name      | Value |
      +--------------------+-------+
      | wsrep_cluster_size | 2     |
      +--------------------+-------+

      Bring Up the Third Node

      It's now time to bring up the third node. Start mysql:

      • sudo systemctl start mysql

      Run the following command to find the cluster size:

      • mysql -u root -p -e "SHOW STATUS LIKE 'wsrep_cluster_size'"

      You will see the following output, which indicates that the third node has joined the cluster and that the total number of nodes in the cluster is three.

      Output

      +--------------------+-------+
      | Variable_name      | Value |
      +--------------------+-------+
      | wsrep_cluster_size | 3     |
      +--------------------+-------+
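      If you want a check beyond the node count, you can also query wsrep_cluster_status; a value of Primary indicates that the node belongs to a cluster component with quorum:

      • mysql -u root -p -e "SHOW STATUS LIKE 'wsrep_cluster_status'"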

      At this point, the entire cluster is online and communicating successfully. Next, you can ensure the working setup by testing replication in the next section.

      Step 7 — Testing Replication

      You've gone through the steps up to this point so that your cluster can perform replication from any node to any other node, known as active-active replication. Follow the steps below to test and see if the replication is working as expected.

      Write to the First Node

      You'll start by making database changes on your first node. The following commands will create a database called playground and a table inside of this database called equipment.

      • mysql -u root -p -e 'CREATE DATABASE playground;
      • CREATE TABLE playground.equipment ( id INT NOT NULL AUTO_INCREMENT, type VARCHAR(50), quant INT, color VARCHAR(25), PRIMARY KEY(id));
      • INSERT INTO playground.equipment (type, quant, color) VALUES ("slide", 2, "blue");'

      In the previous command, the CREATE DATABASE statement creates a database named playground. The CREATE TABLE statement creates a table named equipment inside the playground database with an auto-incrementing identifier column called id, along with type, quant, and color columns that store the type, quantity, and color of the equipment respectively. The INSERT statement then adds an entry of type slide, quantity 2, and color blue.

      You now have one value in your table.

      Read and Write on the Second Node

      Next, look at the second node to verify that replication is working:

      • mysql -u root -p -e 'SELECT * FROM playground.equipment;'

      If replication is working, the data you entered on the first node will be visible here on the second:

      Output

      +----+-------+-------+-------+
      | id | type  | quant | color |
      +----+-------+-------+-------+
      |  1 | slide |     2 | blue  |
      +----+-------+-------+-------+

      From this same node, you can write data to the cluster:

      • mysql -u root -p -e 'INSERT INTO playground.equipment (type, quant, color) VALUES ("swing", 10, "yellow");'

      Read and Write on the Third Node

      From the third node, you can read all of this data by querying the table again:

      • mysql -u root -p -e 'SELECT * FROM playground.equipment;'

      You will see the following output showing the two rows:

      Output

      +----+-------+-------+--------+
      | id | type  | quant | color  |
      +----+-------+-------+--------+
      |  1 | slide |     2 | blue   |
      |  2 | swing |    10 | yellow |
      +----+-------+-------+--------+

      Again, you can add another value from this node:

      • mysql -u root -p -e 'INSERT INTO playground.equipment (type, quant, color) VALUES ("seesaw", 3, "green");'

      Read on the First Node

      Back on the first node, you can verify that your data is available everywhere:

      • mysql -u root -p -e 'SELECT * FROM playground.equipment;'

      You will see the following output which indicates that the rows are available on the first node.

      Output

      +----+--------+-------+--------+
      | id | type   | quant | color  |
      +----+--------+-------+--------+
      |  1 | slide  |     2 | blue   |
      |  2 | swing  |    10 | yellow |
      |  3 | seesaw |     3 | green  |
      +----+--------+-------+--------+

      You've verified successfully that you can write to all of the nodes and that replication is being performed properly.
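      Optionally, if you would like to remove the test data now that replication is verified, you can drop the playground database from any one node and the change will be replicated to the others:

      • mysql -u root -p -e 'DROP DATABASE playground;'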

      Conclusion

      At this point, you have a working three-node Galera test cluster configured. If you plan on using a Galera cluster in a production situation, it’s recommended that you begin with no fewer than five nodes.

      Before production use, you may want to take a look at some of the other state snapshot transfer (sst) agents like xtrabackup, which allows you to set up new nodes very quickly and without large interruptions to your active nodes. This does not affect the actual replication, but is a concern when nodes are being initialized.
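      For reference, switching the SST method is a configuration change on every node. The snippet below is only a sketch: it assumes you use mariabackup (the MariaDB-maintained equivalent of xtrabackup, provided by the mariadb-backup package) and that you have already created a dedicated database user with the privileges the backup tool requires; sst_user and sst_password are placeholders.

      /etc/mysql/conf.d/galera.cnf

      # Galera Synchronization Configuration
      wsrep_sst_method=mariabackup
      wsrep_sst_auth=sst_user:sst_password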

      You might also be interested in other clustering solutions such as MySQL cluster, in which case you can check out our tutorial How To Create a Multi-Node MySQL Cluster on Ubuntu 18.04.




      How To Deploy and Manage Your DNS using DNSControl on Ubuntu 18.04


      The author selected the Electronic Frontier Foundation Inc to receive a donation as part of the Write for DOnations program.

      Introduction

      DNSControl is an infrastructure-as-code tool that allows you to deploy and manage your DNS zones using standard software development principles, including version control, testing, and automated deployment. DNSControl was created by Stack Exchange and is written in Go.

      Using DNSControl eliminates many of the pitfalls of manual DNS management, as zone files are stored in a programmable format. This allows you to deploy zones to multiple DNS providers simultaneously, identify syntax errors, and push out your DNS configuration automatically, reducing the risk of human error. Another common usage of DNSControl is to quickly migrate your DNS to a different provider; for example, in the event of a DDoS attack or system outage.

      In this tutorial, you’ll install and configure DNSControl, create a basic DNS configuration, and begin deploying DNS records to a live provider. As part of this tutorial, we will use DigitalOcean as the example DNS provider. If you wish to use a different provider, the setup is very similar. When you’re finished, you’ll be able to manage and test your DNS configuration in a safe, offline environment, and then automatically deploy it to production.

      Prerequisites

      Before you begin this guide you’ll need the following:

      • One Ubuntu 18.04 server set up by following the Initial Server Setup with Ubuntu 18.04, including a sudo non-root user and enabled firewall to block non-essential ports. your-server-ip refers to the IP address of the server where you’re hosting your website or domain.
      • A fully registered domain name with DNS hosted by a supported provider. This tutorial will use example.com throughout and DigitalOcean as the service provider.
      • A DigitalOcean API key (Personal Access Token) with read and write permissions. To create one, visit How to Create a Personal Access Token.

      Once you have these ready, log in to your server as your non-root user to begin.

      Step 1 — Installing DNSControl

      DNSControl is written in Go, so you’ll start this step by installing Go on your server and setting your GOPATH.

      Go is available within Ubuntu’s default software repositories, making it possible to install using conventional package management tools.

      Begin by updating the local package index to reflect any new upstream changes:

      • sudo apt update

      Then, install the golang-go package:

      • sudo apt install golang-go

      After confirming the installation, apt will download and install Go and all of its required dependencies.

      Next, you'll configure the required path environment variables for Go. If you would like to know more about this, you can read this tutorial on Understanding the GOPATH. Start by editing the ~/.profile file:

      • nano ~/.profile

      Add the following lines to the very end of your file:

      ~/.profile

      ...
      export GOPATH="$HOME/go"
      export PATH="$PATH:$GOPATH/bin"
      

      Once you have added these lines to the bottom of the file, save and close it. Then reload your profile by either logging out and back in, or sourcing the file again:

      • source ~/.profile

      Now that you've installed and configured Go, you can install DNSControl.

      The go get command can be used to fetch a copy of the code, automatically compile it and install it into your Go directory:

      • go get github.com/StackExchange/dnscontrol

      Once this is complete, you can check the installed version to make sure that everything is working:

      • dnscontrol version

      Your output will look similar to the following:

      Output

      dnscontrol 0.2.8-dev

      If you see a dnscontrol: command not found error, double-check your Go path setup.

      Now that you've installed DNSControl, you can create a configuration directory and connect DNSControl to your DNS provider in order to allow it to make changes to your DNS records.

      Step 2 — Configuring DNSControl

      In this step, you'll create the required configuration directories for DNSControl, and connect it to your DNS provider so that it can begin to make live changes to your DNS records.

      Firstly, create a new directory in which you can store your DNSControl configuration, and then move into it:

      • mkdir ~/dnscontrol
      • cd ~/dnscontrol

      Note: This tutorial will focus on the initial setup of DNSControl; however, for production use it is recommended to store your DNSControl configuration in a version control system (VCS) such as Git. The advantages of this include full version control, integration with CI/CD for testing, seamless rollbacks of deployments, and so on.

      If you plan to use DNSControl to write BIND zone files, you should also create the zones directory:

      • mkdir ~/dnscontrol/zones

      BIND zone files are a raw, standardized method for storing DNS zones/records in plain text format. They were originally used for the BIND DNS server software, but are now widely adopted as the standard method for storing DNS zones. BIND zone files produced by DNSControl are useful if you want to import them to a custom or self-hosted DNS server, or for auditing purposes.

      However, if you just want to use DNSControl to push DNS changes to a managed provider, the zones directory will not be needed.

      Next, you need to configure the creds.json file, which is what will allow DNSControl to authenticate to your DNS provider and make changes. The format of creds.json differs slightly depending on the DNS provider that you are using. Please see the Service Providers list in the official DNSControl documentation to find the configuration for your own provider.

      Create the file creds.json in the ~/dnscontrol directory:

      • cd ~/dnscontrol
      • nano creds.json

      Add the sample creds.json configuration for your DNS provider to the file. If you're using DigitalOcean as your DNS provider, you can use the following:

      ~/dnscontrol/creds.json

      {
        "digitalocean": {
          "token": "your-digitalocean-oauth-token"
        }
      }
      

      This file tells DNSControl to which DNS providers you want it to connect.

      You'll need to provide some form of authentication for your DNS provider. This is usually an API key or OAuth token, but some providers require extra information, as documented in the Service Providers list in the official DNSControl documentation.

      Warning: This token will grant access to your DNS provider account, so you should protect it as you would a password. Also, ensure that if you're using a version control system, either the file containing the token is excluded (e.g. using .gitignore), or is securely encrypted in some way.

      If you're using DigitalOcean as your DNS provider, you can use the OAuth token that you generated in your DigitalOcean account settings as part of the prerequisites.

      If you have multiple different DNS providers—for example, for multiple domain names, or delegated DNS zones—you can define these all in the same creds.json file.
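      For example, a creds.json that defines two providers might look something like the following sketch. The second entry is purely illustrative: another_provider and api_key are placeholders, and the real entry name and credential fields for your provider should be taken from the Service Providers list in the official DNSControl documentation.

      ~/dnscontrol/creds.json

      {
        "digitalocean": {
          "token": "your-digitalocean-oauth-token"
        },
        "another_provider": {
          "api_key": "your-other-provider-api-key"
        }
      }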

      You've set up the initial DNSControl configuration directories, and configured creds.json to allow DNSControl to authenticate to your DNS provider and make changes. Next you'll create the configuration for your DNS zones.

      Step 3 — Creating a DNS Configuration File

      In this step, you'll create an initial DNS configuration file, which will contain the DNS records for your domain name or delegated DNS zone.

      dnsconfig.js is the main DNS configuration file for DNSControl. In this file, DNS zones and their corresponding records are defined using JavaScript syntax. This is known as a DSL, or Domain Specific Language. The JavaScript DSL page in the official DNSControl documentation provides further details.

      To begin, create the DNS configuration file in the ~/dnscontrol directory:

      • cd ~/dnscontrol
      • nano dnsconfig.js

      Then, add the following sample configuration to the file:

      ~/dnscontrol/dnsconfig.js

      // Providers:
      
      var REG_NONE = NewRegistrar('none', 'NONE');
      var DNS_DIGITALOCEAN = NewDnsProvider('digitalocean', 'DIGITALOCEAN');
      
      // Domains:
      
      D('example.com', REG_NONE, DnsProvider(DNS_DIGITALOCEAN),
          A('@', 'your-server-ip')
      );
      

      This sample file defines a domain name or DNS zone at a particular provider, which in this case is example.com hosted by DigitalOcean. An example A record is also defined for the zone root (@), pointing to the IP of the server that you're hosting your domain/website on.

      There are three main functions that make up a basic DNSControl configuration file:

      • NewRegistrar(name, type, metadata): defines the domain registrar for your domain name. DNSControl can use this to make required changes, such as modifying the authoritative nameservers. If you only want to use DNSControl to manage your DNS zones, this can generally be left as NONE.

      • NewDnsProvider(name, type, metadata): defines a DNS service provider for your domain name or delegated zone. This is where DNSControl will push the DNS changes that you make.

      • D(name, registrar, modifiers): defines a domain name or delegated DNS zone for DNSControl to manage, as well as the DNS records present in the zone.

      You should configure NewRegistrar(), NewDnsProvider(), and D() accordingly using the Service Providers list in the official DNSControl documentation.

      If you're using DigitalOcean as your DNS provider and only need to be able to make DNS changes (rather than also manage authoritative nameservers), the sample in the preceding code block is already correct.

      Once complete, save and close the file.

      In this step, you set up a DNS configuration file for DNSControl, with the relevant providers defined. Next, you'll populate the file with some useful DNS records.

      Step 4 — Populating Your DNS Configuration File

      Next, you can populate the DNS configuration file with useful DNS records for your website or service, using the DNSControl syntax.

      Unlike traditional BIND zone files, where DNS records are written in a raw, line-by-line format, DNS records within DNSControl are defined as a function parameter (domain modifier) to the D() function, as shown briefly in Step 3.

      A domain modifier exists for each of the standard DNS record types, including A, AAAA, MX, TXT, NS, CAA, and so on. A full list of available record types is available in the Domain Modifiers section of the DNSControl documentation.

      Modifiers for individual records are also available (record modifiers). Currently these are primarily used for setting the TTL (time to live) of individual records. A full list of available record modifiers is available in the Record Modifiers section of the DNSControl documentation. Record modifiers are optional, and in most basic use cases can be left out.

      The syntax for setting DNS records varies slightly for each record type. Following are some examples for the most common record types, with a short combined sketch after the list:

      • A records:

        • Purpose: To point to an IPv4 address.
        • Syntax: A('name', 'address', optional record modifiers)
        • Example: A('@', 'your-server-ip', TTL(30))
      • AAAA records:

        • Purpose: To point to an IPv6 address.
        • Syntax: AAAA('name', 'address', optional record modifiers)
        • Example: AAAA('@', '2001:db8::1') (record modifier left out, so default TTL will be used)
      • CNAME records:

        • Purpose: To make your domain/subdomain an alias of another.
        • Syntax: CNAME('name', 'target', optional record modifiers)
        • Example: CNAME('subdomain1', 'example.org.') (note that a trailing . must be included if there are any dots in the value)
      • MX records:

        • Purpose: To direct email to specific servers/addresses.
        • Syntax: MX('name', 'priority', 'target', optional record modifiers)
        • Example: MX('@', 10, 'mail.example.net') (note that a trailing . must be included if there are any dots in the value)
      • TXT records:

        • Purpose: To add arbitrary plain text, often used for configurations without their own dedicated record type.
        • Syntax: TXT('name', 'content', optional record modifiers)
        • Example: TXT('@', 'This is a TXT record.')
      • CAA records:

        • Purpose: To restrict and report on Certificate Authorities (CAs) who can issue TLS certificates for your domain/subdomains.
        • Syntax: CAA('name', 'tag', 'value', optional record modifiers)
        • Example: CAA('@', 'issue', 'letsencrypt.org')
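      Putting a few of these together, a zone that adds a per-record TTL, a CNAME, and a CAA record to the earlier example might look like the following sketch (the blog subdomain and the 300-second TTL are purely illustrative values):

      D('example.com', REG_NONE, DnsProvider(DNS_DIGITALOCEAN),
          A('@', 'your-server-ip', TTL(300)),
          CNAME('blog', 'www.example.com.'),
          CAA('@', 'issue', 'letsencrypt.org')
      );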

      In order to begin adding DNS records for your domain or delegated DNS zone, edit your DNS configuration file:

      • cd ~/dnscontrol
      • nano dnsconfig.js

      Next, you can begin populating the parameters for the existing D() function using the syntax described in the previous list, as well as the Domain Modifiers section of the official DNSControl documentation. A comma (,) must be used in-between each record.

      For reference, the code block here contains a full sample configuration for a basic, initial DNS setup:

      ~/dnscontrol/dnsconfig.js

      ...
      
      D('example.com', REG_NONE, DnsProvider(DNS_DIGITALOCEAN),
          A('@', 'your-server-ip'),
          A('www', 'your-server-ip'),
          A('mail', 'your-server-ip'),
          AAAA('@', '2001:db8::1'),
          AAAA('www', '2001:db8::1'),
          AAAA('mail', '2001:db8::1'),
          MX('@', 10, 'mail.example.com.'),
          TXT('@', 'v=spf1 -all'),
          TXT('_dmarc', 'v=DMARC1; p=reject; rua=mailto:abuse@example.com; aspf=s; adkim=s;')
      );
      

      Once you have completed your initial DNS configuration, save and close the file.

      In this step, you set up the initial DNS configuration file, containing your DNS records. Next, you will test the configuration and deploy it.

      Step 5 — Testing and Deploying Your DNS Configuration

      In this step, you will run a local syntax check on your DNS configuration, and then deploy the changes to the live DNS server/provider.

      Firstly, move into your dnscontrol directory:

      • cd ~/dnscontrol

      Next, use the preview function of DNSControl to check the syntax of your file, and output what changes it will make (without actually making them):

      • dnscontrol preview

      If the syntax of your DNS configuration file is correct, DNSControl will output an overview of the changes that it will make. This should look similar to the following:

      Output

      ******************** Domain: example.com
      ----- Getting nameservers from: digitalocean
      ----- DNS Provider: digitalocean...8 corrections
      #1: CREATE A example.com your-server-ip ttl=300
      #2: CREATE A www.example.com your-server-ip ttl=300
      #3: CREATE A mail.example.com your-server-ip ttl=300
      #4: CREATE AAAA example.com 2001:db8::1 ttl=300
      #5: CREATE TXT _dmarc.example.com "v=DMARC1; p=reject; rua=mailto:abuse@example.com; aspf=s; adkim=s;" ttl=300
      #6: CREATE AAAA www.example.com 2001:db8::1 ttl=300
      #7: CREATE AAAA mail.example.com 2001:db8::1 ttl=300
      #8: CREATE MX example.com 10 mail.example.com. ttl=300
      ----- Registrar: none...0 corrections
      Done. 8 corrections.

      If you see an error warning in your output, DNSControl will provide details on what and where the error is located within your file.

      Warning: The next command will make live changes to your DNS records and possibly other settings. Please ensure that you are prepared for this, including taking a backup of your existing DNS configuration, as well as ensuring that you have the means to roll back if needed.

      Finally, you can push out the changes to your live DNS provider:

      • dnscontrol push

      You'll see an output similar to the following:

      Output

      ******************** Domain: example.com
      ----- Getting nameservers from: digitalocean
      ----- DNS Provider: digitalocean...8 corrections
      #1: CREATE TXT _dmarc.example.com "v=DMARC1; p=reject; rua=mailto:abuse@example.com; aspf=s; adkim=s;" ttl=300
      SUCCESS!
      #2: CREATE A example.com your-server-ip ttl=300
      SUCCESS!
      #3: CREATE AAAA example.com 2001:db8::1 ttl=300
      SUCCESS!
      #4: CREATE AAAA www.example.com 2001:db8::1 ttl=300
      SUCCESS!
      #5: CREATE AAAA mail.example.com 2001:db8::1 ttl=300
      SUCCESS!
      #6: CREATE A www.example.com your-server-ip ttl=300
      SUCCESS!
      #7: CREATE A mail.example.com your-server-ip ttl=300
      SUCCESS!
      #8: CREATE MX example.com 10 mail.example.com. ttl=300
      SUCCESS!
      ----- Registrar: none...0 corrections
      Done. 8 corrections.

      Now, if you check the DNS settings for your domain in the DigitalOcean control panel, you'll see the changes.

      A screenshot of the DigitalOcean control panel, showing some of the DNS changes that DNSControl has made.

      You can also check the record creation by running a DNS query for your domain/delegated zone, for example with dig. You'll see that the records have been updated accordingly:

      • dig +short example.com

      You'll see output showing the IP address and relevant DNS record from your zone that was deployed using DNSControl. DNS records can take some time to propagate, so you may need to wait and run this command again.

      In this final step, you ran a local syntax check of the DNS configuration file, then deployed it to your live DNS provider, and tested that the changes were made successfully.

      Conclusion

      In this article you set up DNSControl and deployed a DNS configuration to a live provider. Now you can manage and test your DNS configuration changes in a safe, offline environment before deploying them to production.

      If you wish to explore this subject further, DNSControl is designed to be integrated into your CI/CD pipeline, allowing you to run in-depth tests and have more control over your deployment to production. You could also look into integrating DNSControl into your infrastructure build/deployment processes, allowing you to deploy servers and add them to DNS completely automatically.

      If you wish to go further with DNSControl, DigitalOcean's other articles on change management and infrastructure deployment workflows provide some interesting next steps for integrating DNSControl into those processes.




      How to Use Ansible to Install and Set Up Docker on Ubuntu 18.04


      Introduction

      With the popularization of containerized applications and microservices, server automation now plays an essential role in systems administration. It is also a way to establish standard procedures for new servers and reduce human error.

      This guide explains how to use Ansible to automate the steps contained in our guide on How To Install and Use Docker on Ubuntu 18.04. Docker is an application that simplifies the process of managing containers, resource-isolated processes that behave in a similar way to virtual machines, but are more portable, more resource-friendly, and depend more heavily on the host operating system.

      While you can complete this setup manually, using a configuration management tool like Ansible to automate the process will save you time and establish standard procedures that can be repeated through tens to hundreds of nodes. Ansible offers a simple architecture that doesn’t require special software to be installed on nodes, and it provides a robust set of features and built-in modules which facilitate writing automation scripts.

      Pre-Flight Check

      In order to execute the automated setup provided by the playbook discussed in this guide, you’ll need:

      • An Ansible control node: a machine with Ansible installed and configured to connect to your Ansible hosts using SSH keys.
      • One or more Ansible hosts: one or more remote Ubuntu 18.04 servers.

      Testing Connectivity to Nodes

      To make sure Ansible is able to execute commands on your nodes, run the following command from your Ansible Control Node:

      • ansible all -m ping

      This command will use Ansible's built-in ping module to run a connectivity test on all nodes from your default inventory file, connecting as the current system user. The ping module will test whether:

      • your Ansible hosts are accessible;
      • your Ansible Control Node has valid SSH credentials;
      • your hosts are able to run Ansible modules using Python.

      If you installed and configured Ansible correctly, you will get output similar to this:

      Output

      server1 | SUCCESS => {
          "changed": false,
          "ping": "pong"
      }
      server2 | SUCCESS => {
          "changed": false,
          "ping": "pong"
      }
      server3 | SUCCESS => {
          "changed": false,
          "ping": "pong"
      }

      Once you get a pong reply back from a host, it means you're ready to run Ansible commands and playbooks on that server.

      Note: If you are unable to get a successful response back from your servers, check our Ansible Cheat Sheet Guide for more information on how to run Ansible commands with custom connection options.
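      For instance, if your hosts are defined in a custom inventory file or require a specific remote user, you can pass those options explicitly. This is only a sketch; the inventory path and the sammy username are placeholders:

      • ansible all -m ping -i ~/ansible/inventory -u sammy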

      What Does this Playbook Do?

      This Ansible playbook provides an alternative to manually running through the procedure outlined in our guide on How To Install and Use Docker on Ubuntu 18.04.

      Running this playbook will perform the following actions on your Ansible hosts:

      1. Install aptitude, which is preferred by Ansible as an alternative to the apt package manager.
      2. Install the required system packages.
      3. Install the Docker GPG APT key.
      4. Add the official Docker repository to the apt sources.
      5. Install Docker.
      6. Install the Python Docker module via pip.
      7. Pull the default image specified by default_container_image from Docker Hub.
      8. Create the number of containers defined by the create_containers variable, each using the image defined by default_container_image, and execute the command defined in default_container_command in each new container.

      Once the playbook has finished running, you will have a number of containers created based on the options you defined within your configuration variables.

      How to Use this Playbook

      To get started, we'll download the contents of the playbook to your Ansible Control Node. For your convenience, the contents of the playbook are also included in the next section of this guide.

      Use curl to download this playbook from the command line:

      • curl -L https://raw.githubusercontent.com/do-community/ansible-playbooks/master/docker/ubuntu1804.yml -o docker_ubuntu.yml

      This will download the contents of the playbook to a file named docker_ubuntu.yml in your current working directory. You can examine the contents of the playbook by opening the file with your command-line editor of choice, for example nano:

      • nano docker_ubuntu.yml

      Once you've opened the playbook file, you should notice a section named vars with variables that require your attention:

      docker_ubuntu.yml

      . . .
      vars:
        create_containers: 4
        default_container_name: docker
        default_container_image: ubuntu
        default_container_command: sleep 1d
      . . .
      

      Here's what these variables mean; a customized example follows the list:

      • create_containers: The number of containers to create.
      • default_container_name: Default container name.
      • default_container_image: Default Docker image to be used when creating containers.
      • default_container_command: Default command to run on new containers.
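      For example, to create two containers from the nginx image instead of the defaults, you might edit the vars section to look like the following sketch (the names and values shown are purely illustrative):

      docker_ubuntu.yml

      . . .
      vars:
        create_containers: 2
        default_container_name: web
        default_container_image: nginx
        default_container_command: sleep 1d
      . . .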

      Once you're done updating the variables inside docker_ubuntu.yml, save and close the file. If you used nano, do so by pressing CTRL + X, Y, then ENTER.

      You're now ready to run this playbook on one or more servers. By default, most playbooks are configured to be executed on all servers in your inventory. You can use the -l flag to make sure that only a subset of servers, or a single server, is affected by the playbook. To execute the playbook only on server1, you can use the following command:

      • ansible-playbook docker_ubuntu.yml -l server1

      You will get output similar to this:

      Output

      ...

      TASK [Add Docker GPG apt Key] ********************************************************************************************************************
      changed: [server1]

      TASK [Add Docker Repository] *********************************************************************************************************************
      changed: [server1]

      TASK [Update apt and install docker-ce] **********************************************************************************************************
      changed: [server1]

      TASK [Install Docker Module for Python] **********************************************************************************************************
      changed: [server1]

      TASK [Pull default Docker image] *****************************************************************************************************************
      changed: [server1]

      TASK [Create default containers] *****************************************************************************************************************
      changed: [server1] => (item=1)
      changed: [server1] => (item=2)
      changed: [server1] => (item=3)
      changed: [server1] => (item=4)

      PLAY RECAP ***************************************************************************************************************************************
      server1                    : ok=9    changed=8    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

      Note: For more information on how to run Ansible playbooks, check our Ansible Cheat Sheet Guide.

      When the playbook is finished running, log in via SSH to the server provisioned by Ansible and run docker ps -a to check if the containers were successfully created:

      • sudo docker ps -a

      You should see output similar to this:

      Output

      CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
      a3fe9bfb89cf        ubuntu              "sleep 1d"          5 minutes ago       Created                                 docker4
      8799c16cde1e        ubuntu              "sleep 1d"          5 minutes ago       Created                                 docker3
      ad0c2123b183        ubuntu              "sleep 1d"          5 minutes ago       Created                                 docker2
      b9350916ffd8        ubuntu              "sleep 1d"          5 minutes ago       Created                                 docker1

      This means the containers defined in the playbook were created successfully. Since this was the last task in the playbook, it also confirms that the playbook was fully executed on this server.

      The Playbook Contents

      You can find the Docker playbook featured in this tutorial in the ansible-playbooks repository within the DigitalOcean Community GitHub organization. To copy or download the script contents directly, click the Raw button towards the top of the script, or click here to view the raw contents directly.

      The full contents are also included here for your convenience:

      docker_ubuntu.yml

      
      ---
      - hosts: all
        become: true
        vars:
          create_containers: 4
          default_container_name: docker
          default_container_image: ubuntu
          default_container_command: sleep 1d
      
        tasks:
          - name: Install aptitude using apt
            apt: name=aptitude state=latest update_cache=yes force_apt_get=yes
      
          - name: Install required system packages
            apt: name={{ item }} state=latest update_cache=yes
            loop: [ 'apt-transport-https', 'ca-certificates', 'curl', 'software-properties-common', 'python3-pip', 'virtualenv', 'python3-setuptools']
      
          - name: Add Docker GPG apt Key
            apt_key:
              url: https://download.docker.com/linux/ubuntu/gpg
              state: present
      
          - name: Add Docker Repository
            apt_repository:
              repo: deb https://download.docker.com/linux/ubuntu bionic stable
              state: present
      
          - name: Update apt and install docker-ce
            apt: update_cache=yes name=docker-ce state=latest
      
          - name: Install Docker Module for Python
            pip:
              name: docker
      
          # Pull image specified by the variable default_container_image from Docker Hub
          - name: Pull default Docker image
            docker_image:
              name: "{{ default_container_image }}"
              source: pull
      
          # Creates the number of containers defined by the variable create_containers, using default values
          - name: Create default containers
            docker_container:
              name: "{{ default_container_name }}{{ item }}"
              image: "{{ default_container_image }}"
              command: "{{ default_container_command }}"
              state: present
            with_sequence: count={{ create_containers }}
      
      

      Feel free to modify this playbook to best suit your individual needs within your own workflow. For example, you could use the docker_image module to push images to Docker Hub or the docker_container module to set up container networks.
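      As a sketch of the latter idea, the following two tasks could be appended to the playbook's task list to create a user-defined network and attach a new container to it. The app_net and webserver names are placeholders; both the docker_network and docker_container modules are covered in the official Ansible documentation:

          - name: Create a user-defined container network
            docker_network:
              name: app_net

          - name: Create a container attached to the network
            docker_container:
              name: webserver
              image: "{{ default_container_image }}"
              command: "{{ default_container_command }}"
              state: present
              networks:
                - name: app_net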

      Conclusion

      Automating your infrastructure setup can not only save you time, but it also helps to ensure that your servers will follow a standard configuration that can be customized to your needs. With the distributed nature of modern applications and the need for consistency between different staging environments, automation like this has become a central component in many teams' development processes.

      In this guide, we demonstrated how to use Ansible to automate the process of installing and setting up Docker on a remote server. Because each individual typically has different needs when working with containers, we encourage you to check out the official Ansible documentation for more information and use cases of the docker_container Ansible module.

      If you'd like to include other tasks in this playbook to further customize your initial server setup, please refer to our introductory Ansible guide Configuration Management 101: Writing Ansible Playbooks.


