
      How To Configure a MongoDB Replica Set on Ubuntu 20.04


      An earlier version of this tutorial was written by Justin Ellingwood.

      Introduction

      MongoDB, also known as Mongo, is an open-source document database used in many modern web applications. It is classified as a NoSQL database because it does not rely on a traditional table-based relational database structure. Instead, it uses JSON-like documents with dynamic schemas. This means that, unlike relational databases, MongoDB does not require a predefined schema before you add data to a database.

      When working with databases, it’s often useful to have multiple copies of your data. This provides redundancy in case one of the database servers fails and can improve a database’s availability and scalability, as well as reduce read latencies. The practice of synchronizing data across multiple separate databases is called replication. In MongoDB, a group of servers that maintain the same data set through replication are referred to as a replica set.

      This tutorial provides a brief overview of how replication works in MongoDB and outlines how to configure and initiate a replica set with three members. In this example configuration, each member of the replica set will be a distinct MongoDB instance running on separate Ubuntu 20.04 servers.

      Note: Be aware that the procedure outlined in this guide is intended to demonstrate how to get a replica set up and running quickly. Upon completing this tutorial you’ll have a functioning replica set, but it will not have any security features enabled. This setup is not recommended for production environments.

      The Community version of MongoDB comes with two authentication methods that can help keep your database secure, keyfile authentication and x.509 authentication. For production deployments that employ replication, the MongoDB documentation recommends using x.509 authentication, and it describes keyfiles as “bare-minimum forms of security” that are “best suited for testing or development environments.” However, the process of obtaining and configuring x.509 certificates comes with a number of caveats and decisions that must be made on a case-by-case basis, which is beyond the scope of a DigitalOcean tutorial.

      If you plan on using your replica set for testing or development, we strongly encourage you to follow our tutorial on How To Configure Keyfile Authentication for MongoDB Replica Sets on Ubuntu 20.04.

      Prerequisites

      To complete this guide, you will need:

      • Three servers, each running Ubuntu 20.04. All three of these servers should have a non-root administrative user and a firewall configured with UFW. To set this up, follow our initial server setup guide for Ubuntu 20.04.
      • MongoDB installed on each of your Ubuntu servers. To this end, follow our tutorial on How To Install MongoDB on Ubuntu 20.04, making sure to complete each step on all three of your servers.

      Please note that, for clarity, this guide will refer to the three servers as mongo0, mongo1, and mongo2. Any examples showing commands or file changes performed on mongo0 will have a blue background.

      Commands and file changes performed on mongo1 will have a pink background.

      Example actions on mongo2 will have a green background.

      Lastly, commands that must be run or file changes that must be made on every server will have a standard gray background.

      Understanding MongoDB Replica Sets

      As mentioned in the introduction, MongoDB handles replication through an implementation called replica sets. Each running instance of MongoDB that’s part of a given replica set is referred to as one of its members. Every replica set must have one primary member and at least one secondary member.

      The primary member is the main access point for transactions with the replica set and is the only member that can accept write operations. Each replica set can have only one primary member at a time, as replication happens by copying the primary’s oplog (short for “operations log”) and repeating the logged changes on the secondaries’ respective data sets. Multiple primaries accepting write operations would lead to data conflicts.

      By default, applications will only query the primary member for both read and write operations. You can configure your setup to read from one or more of the secondary members, but since data is transferred asynchronously, reads from secondary nodes can result in old data being served. Thus, such a configuration isn’t ideal for every use case.

      One feature that distinguishes MongoDB’s replica sets from other replication implementations is their automatic failover mechanism. In the event that the primary member becomes unavailable, an automated election process happens among the secondary nodes to choose a new primary. A replica set can have up to 50 members, but a maximum of 7 can vote in an election.

      If the secondary member pool contains an even number of nodes, however, it could result in an inability to elect a new primary due to a voting impasse. This would necessitate the inclusion of a third type of member in the replica set: an arbiter. An arbiter is an optional member of a replica set that votes in situations like this to ensure that the set is able to reach a decision. Be aware, though, that arbiters do not have a copy of the data set and they’re barred from becoming the replica set’s primary. If a replica set has only one secondary member, then an arbiter is required.

      There may be times when you don’t want all of your secondaries to follow the standard rules for secondary members of a replica set. MongoDB allows you to configure secondary members of a replica set to take on the following nonstandard roles (a brief configuration sketch follows this list):

      • Priority 0 Replication Members: There are some situations where the election of certain set members to the primary position could have a negative impact on your application’s performance. For instance, if you are replicating data to a remote datacenter or a certain secondary member’s hardware is inadequate for it to function as the main access point for the set, setting its priority to 0 can ensure that this member will not become a primary but can continue copying data.
      • Hidden Replication Members: Some situations require you to keep one set of members accessible and visible to your clients while hiding background members which have separate purposes and shouldn’t be used for read operations. As an example, you may need a secondary member to be the base for analytics work, which would benefit from an up-to-date dataset but would cause a strain on working members. By setting this member to hidden, it will not interfere with the general operations of the replica set. Hidden members must be set to a priority of 0 to avoid becoming the primary member, but they can vote in elections.
      • Delayed Replication Members: By setting the delay option for a secondary member, you can control how long the secondary waits to perform each action it copies from the primary’s oplog. This is useful if you would like to safeguard against accidental deletions or recover from destructive operations. For instance, if you delay a secondary by a half-day, it would not immediately perform accidental operations on its own set of data and could be used to revert changes. Delayed members cannot become primary members, but can vote in elections. In most situations, they should also be hidden to prevent application processes from reading data that is out-of-date.
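
      To illustrate how these options fit together, the following hypothetical mongo shell sketch reconfigures an existing member (the one at index 2 of the set’s members array) as a hidden, delayed secondary. It assumes a replica set is already running and that you’re connected to its primary; note also that MongoDB 5.0 renamed the slaveDelay setting to secondaryDelaySecs:

      cfg = rs.conf()                     // retrieve the current replica set configuration
      cfg.members[2].priority = 0         // bar this member from being elected primary
      cfg.members[2].hidden = true        // hide it from clients so it won't serve reads
      cfg.members[2].slaveDelay = 43200   // apply the primary's oplog 12 hours behind
      rs.reconfig(cfg)                    // submit the updated configuration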

      Step 1 — Configuring DNS Resolution

      When it comes time to initialize your replica set in Step 4, you’ll need to provide an address where each replica set member can be reached by the other two in the set. The MongoDB documentation recommends against using IP addresses when configuring a replica set, since IP addresses can change unexpectedly. Instead, MongoDB recommends using logical DNS hostnames when configuring replica sets.

      One way to do this is to configure subdomains for each replication member. Although configuring subdomains would be ideal for a production environment or another long-term solution, this tutorial will outline how to configure DNS resolution by editing each server’s respective hosts files.

      hosts is a special file that allows you to assign human-readable hostnames to numerical IP addresses. This means that if the IP address of any of your servers ever changes, you’ll only have to update the hosts file on the three servers instead of reconfiguring the replica set.

      On Linux and other Unix-like systems, hosts is stored in the /etc/ directory. On each of your three servers, edit the file with your preferred text editor. Here, we’ll use nano:
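
      • sudo nano /etc/hosts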

      After the first few lines which configure the localhost, add an entry for each member of the replica set. These entries take the form of an IP address followed by the human-readable name of your choice, as in this example:

      /etc/hosts

      IP_address   any_hostname
      

      You can configure your servers to use whatever hostname you’d like, but it can be helpful to make each hostname descriptive. In examples throughout this guide, the three servers will use these hostnames:

      • mongo0.replset.member
      • mongo1.replset.member
      • mongo2.replset.member

      Using these hostnames, your /etc/hosts files would look similar to the following highlighted lines:

      /etc/hosts

      . . .
      127.0.0.1 localhost
      
      203.0.113.0 mongo0.replset.member
      203.0.113.1 mongo1.replset.member
      203.0.113.2 mongo2.replset.member
      . . .
      

      If you don’t know your servers’ IP addresses offhand, you can run the following curl command on each server to retrieve them. icanhazip.com is a website that shows the IP address of whatever computer is used to access it. By providing its URL as an argument to the curl command, the command will print the IP address of the server from which you run it to standard output:
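
      • curl -4 icanhazip.com

      The -4 option ensures that curl reports the server’s IPv4 address.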

      If you’re using DigitalOcean Droplets, you can also find your servers’ IP addresses in the Control Panel.

      The new lines you add here should be identical on each of the three hosts in your set. Save and close the file on each of your servers. If you used nano to edit these files, do so by pressing CTRL + X, Y, and then ENTER.

      After editing, saving, and closing the hosts file on each of your servers, you’ll have finished configuring DNS resolution for your replica set. You can now move on to updating each server’s firewall rules to allow them to communicate with one another.

      Step 2 — Updating Each Server’s Firewall Configurations with UFW

      Assuming you followed the prerequisite initial server setup guide, you will have set up a firewall on each of the servers on which you’ve installed MongoDB and enabled access for the OpenSSH UFW profile. This is an important security measure, as these firewalls currently block connections to any port on your servers, save for ssh connections that present keys which align with those in each server’s respective authorized_keys file.

      However, these firewalls will also block the MongoDB instances on each server from communicating with one another, preventing you from initiating the replica set. To correct this, you’ll need to add new firewall rules to allow each server access to the port on the other two servers on which MongoDB is listening for connections.

      On mongo0, run the following ufw command to allow mongo1 access to port 27017 on mongo0:

      • sudo ufw allow from mongo1_server_ip to any port 27017

      Be sure to change mongo1_server_ip to reflect your mongo1 server’s actual IP address. Note that ufw commands will not work with hostnames configured in the hosts file, so be sure to use your servers’ actual IP addresses in this command and the following ones. Also, if you’ve updated the Mongo instance on this server to use a non-default port, be sure to change 27017 to reflect the port that your MongoDB instance is actually using.

      Then add another firewall rule to give mongo2 access to the same port:

      • sudo ufw allow from mongo2_server_ip to any port 27017

      Next, update the firewall rules for your other two servers. Run the following commands on mongo1, making sure to change the IP addresses to reflect those of mongo0 and mongo2, respectively:

      • sudo ufw allow from mongo0_server_ip to any port 27017
      • sudo ufw allow from mongo2_server_ip to any port 27017

      Lastly, run these two commands on mongo2. Again, be sure that you enter the correct IP addresses for each server:

      • sudo ufw allow from mongo0_server_ip to any port 27017
      • sudo ufw allow from mongo1_server_ip to any port 27017

      After adding these UFW rules, each of your three MongoDB servers will be allowed to access the port used by MongoDB on the other two servers. However, you won’t be able to test this yet, since the Mongo instance on each server is currently blocking any external connections. After you enable replication by updating each MongoDB instance’s configuration file in the next step, you will be able to perform this test.

      Step 3 — Enabling Replication in Each Server’s MongoDB Configuration File

      At this point, you’ve edited your servers’ /etc/hosts files to configure hostnames which will resolve to each one’s IP address. You’ve also opened up each of your servers’ firewalls to allow the other two servers access to the default MongoDB port, 27017. Now you’re ready to begin configuring the MongoDB installation on each server to enable replication.

      This step outlines how to do this by editing MongoDB’s configuration file, /etc/mongod.conf. You must complete every procedure in this step on each server, but for demonstration purposes we will use mongo0 in examples.

      On mongo0, open the MongoDB configuration file in your preferred text editor:

      • sudo nano /etc/mongod.conf

      Even though you’ve opened up each server’s firewall to allow the other servers access to port 27017, MongoDB is currently bound to 127.0.0.1, the local loopback network interface. This means that MongoDB is only able to accept connections that originate on the server where it’s installed.

      To allow remote connections, you must bind MongoDB to your servers’ publicly-routable IP addresses in addition to 127.0.0.1. This way, your MongoDB installation will be able to listen to connections made to your MongoDB server from remote machines.

      Find the network interfaces section. It will look like this by default:

      /etc/mongod.conf

      . . .
      # network interfaces
      net:
        port: 27017
        bindIp: 127.0.0.1
      . . .
      

      Append a comma to the line beginning with bindIp: followed by mongo0’s hostname or public IP address. This example uses the hostname configured in Step 1:

      /etc/mongod.conf

      . . .
      # network interfaces
      net:
        port: 27017
        bindIp: 127.0.0.1,mongo0.replset.member
      . . .
      

      Next, find the line that reads #replication: towards the bottom of the file. It will look like this:

      /etc/mongod.conf

      . . .
      #replication:
      . . . 
      

      Uncomment this line by removing the pound sign (#). Then add a replSetName directive below this line followed by a name which MongoDB will use to identify the replica set:

      /etc/mongod.conf

      . . .
      replication:
        replSetName: "rs0"
      . . . 
      

      In this example, the replSetName directive’s value is "rs0". You can provide whatever name you’d like here, but it can be helpful to use a descriptive name. Keep in mind, though, that each server’s mongod.conf file must have the same name after the replSetName directive in order for each of their MongoDB instances to become members of the same replica set.

      Note that there are two spaces before the replSetName directive and that the name is wrapped in quotation marks ("), both of which are necessary for this configuration to be read properly.

      After updating these two sections of the file, net and replication, they will look like this:

      /etc/mongod.conf

      . . .
      # network interfaces
      net:
        port: 27017
        bindIp: 127.0.0.1,mongo0.replset.member
      . . .
      replication:
        replSetName: "rs0"
      . . . 
      

      Save and close the file. Then make these same changes to the /etc/mongod.conf files on mongo1 and mongo2. After doing so, these updated sections will look like this in mongo1’s configuration file:

      /etc/mongod.conf

      . . .
      # network interfaces
      net:
        port: 27017
        bindIp: 127.0.0.1,mongo1.replset.member
      . . .
      replication:
        replSetName: "rs0"
      . . . 
      

      And here’s how these sections will look in mongo2’s configuration file:

      /etc/mongod.conf

      . . .
      # network interfaces
      net:
        port: 27017
        bindIp: 127.0.0.1,mongo2.replset.member
      . . .
      replication:
        replSetName: "rs0"
      . . . 
      

      To reiterate, the IP address or hostname you add to each server’s bindIp directive must be that of the server whose mongod.conf file you’re editing.

      After making these changes to each server’s mongod.conf file, save and close each file. Then, restart the mongod service on each server by issuing the following command:

      • sudo systemctl restart mongod

      With that, you’ve enabled replication for each server’s MongoDB instance.

      Note: At this point, you can use the nc command to test whether the firewall rules you added in Step 2 are correct. nc, short for netcat, is a utility used to establish network connections with TCP or UDP. It’s useful for testing in cases like this because it allows you to specify both an IP address and a port number when making a connection.

      The following example nc command includes the -z option, which limits the utility to only scan for a listening daemon on the target server without sending it any data. Recall from the prerequisite installation tutorial that MongoDB is running as a service daemon, making this option useful for testing connectivity. It also includes the -v option, which increases the command’s verbosity, causing it to return more information than it would otherwise.

      This example nc command shows an attempt to reach mongo1 from mongo0:

      • nc -zv mongo1.replset.member 27017

      The following output indicates that mongo0 is able to reach mongo1 on the port used by MongoDB:

      Output

      Connection to mongo1.replset.member 27017 port [tcp/*] succeeded!

      You can test the connection between each pair of servers by repeating this command on each server and specifying the appropriate hostnames or IP addresses.

      After editing each server’s mongod.conf file to enable replication and restarting the mongod service, you’re ready to initiate the replica set and add each Mongo instance as a member.

      Step 4 — Starting the Replica Set and Adding Members

      Now that you’ve configured each of your three MongoDB installations, you can open up a MongoDB shell to initiate replication and add each as a member.

      For demonstration purposes, the examples in this step will use the MongoDB instance on mongo0 to initiate the replica set. However, you can initiate replication from any server whose mongod.conf file has been appropriately configured.

      On mongo0, open up the MongoDB shell:
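
      • mongo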

      From the prompt, you can initiate a replica set by running the rs.initiate() method. However, running this method by itself would only initiate replication for the machine on which you run the method, and you’d then need to add your other Mongo instances by issuing an rs.add() method for each member.

      Recall that MongoDB stores its data in JSON-like structures known as documents. Because you’ve already edited the mongod.conf file on each of your servers to configure the three Mongo instances for replication, you can instead include a document that holds each member’s configuration details within the rs.initiate method. This will allow you to start the replica set and add each member at once, rather than having to run multiple separate methods.

      To do this, begin an rs.initiate() method by typing the following and pressing ENTER:
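
      • rs.initiate(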

      Mongo won’t register the rs.initiate method as complete until you enter a closing parenthesis. Until you do, the prompt will change from a greater than sign (>) to an ellipsis (...).

      As with objects in JSON, documents in MongoDB begin and end with curly braces ({ and }). To begin adding the replica set’s configuration document, enter an opening curly brace:
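
      • {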

      MongoDB documents are composed of any number of field-and-value pairs that take the form of field: value. The first field-and-value pair of this particular document must be an _id: field that provides a name to identify the replica set; this field’s value must be the same as the replSetName directive you set in your mongod.conf files, which is "rs0" in our examples.

      Enter this field-and-value pair, following it with a comma, and then press ENTER to begin a new line:
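
      • _id: "rs0",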

      Next, add a members: field. Instead of a single value, though, follow this members: field with an array containing multiple documents, each of which represent a replica set member to add. In MongoDB documents, arrays are always placed within a pair of square brackets ([ and ]).

      Add the members: field followed by an opening square bracket to begin the array, and then press ENTER to move to the next line:
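
      • members: [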

      Now add a document with two field-and-value pairs, separated by a comma, to represent the first member of the replica set. The first of this document’s fields is another _id: field which accepts an integer used to identify the member internally. The second is a host: field, which must be followed by a string containing a hostname that will resolve to an address where the member Mongo instance can be reached:

      • { _id: 0, host: "mongo0.replset.member" },

      Note: If any of your Mongo instances are running on a port other than MongoDB’s default — 27017 — you must follow the hostname with a colon (:) and then the port number, as in this example:

      • { _id: 0, host: "mongo0.replset.member:27018" },

      After entering the first one, enter additional documents for the other members of your replica set. Make sure to separate each document with a comma:

      • { _id: 1, host: "mongo1.replset.member" },
      • { _id: 2, host: "mongo2.replset.member" }

      Next, end the array by entering a closing square bracket:
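
      • ]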

      Lastly, end the configuration document with a closing curly brace, and then close the method with a closing parenthesis:
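
      • })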

      All together, the rs.initiate() method will look like this:

      > rs.initiate(
      ... {
      ... _id: "rs0",
      ... members: [
      ... { _id: 0, host: "mongo0.replset.member" },
      ... { _id: 1, host: "mongo1.replset.member" },
      ... { _id: 2, host: "mongo2.replset.member" }
      ... ]
      ... })
      

      Assuming that you entered all the details correctly, once you press ENTER after typing the closing parenthesis the method will run and initiate the replica set. If the method returns "ok" : 1 in the output, it means that the replica set was started correctly:

      Output

      { "ok" : 1, "$clusterTime" : { "clusterTime" : Timestamp(1612389071, 1), "signature" : { "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="), "keyId" : NumberLong(0) } }, "operationTime" : Timestamp(1612389071, 1) }

      If the replica set was initiated as expected, you’ll notice that the MongoDB client’s prompt will change from just a greater-than sign (>) to the following:
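
      rs0:SECONDARY>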

      MongoDB comes installed with a few built-in methods which you can use to manage and retrieve information about your replica set. Of these, the rs.help() method can be particularly helpful as it returns a list of these replica set methods and descriptions of what they do:
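
      • rs.help()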

      Output

      rs.status()                                     { replSetGetStatus : 1 } checks repl set status
      rs.initiate()                                   { replSetInitiate : null } initiates set with default settings
      rs.initiate(cfg)                                { replSetInitiate : cfg } initiates set with configuration cfg
      rs.conf()                                       get the current configuration object from local.system.replset
      rs.reconfig(cfg)                                updates the configuration of a running replica set with cfg (disconnects)
      rs.add(hostportstr)                             add a new member to the set with default attributes (disconnects)
      rs.add(membercfgobj)                            add a new member to the set with extra attributes (disconnects)
      rs.addArb(hostportstr)                          add a new member which is arbiterOnly:true (disconnects)
      rs.stepDown([stepdownSecs, catchUpSecs])        step down as primary (disconnects)
      rs.syncFrom(hostportstr)                        make a secondary sync from the given member
      rs.freeze(secs)                                 make a node ineligible to become primary for the time specified
      rs.remove(hostportstr)                          remove a host from the replica set (disconnects)
      rs.secondaryOk()                                allow queries on secondary nodes
      rs.printReplicationInfo()                       check oplog size and time range
      rs.printSecondaryReplicationInfo()              check replica set members and replication lag
      db.isMaster()                                   check who is primary
      db.hello()                                      check who is primary

      reconfiguration helpers disconnect from the database so the shell will display an error, even if the command succeeds.

      After running rs.help() or another one of these methods, you may see the client prompt change again to the following:
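
      rs0:PRIMARY>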

      This means that the MongoDB instance that you’re connected to was elected to serve as the primary set member.

      Be aware that if you have additional nodes that you’d like to add to the replica set in the future, you can do so with the rs.add() method after configuring them as you did the current replica set members in the previous steps:

      • rs.add( "mongo3.replset.member" )

      You can now close the MongoDB client by pressing CTRL + C or by running the exit command:
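
      • exit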

      Your replica set is now up and running, and you can begin integrating it with your application.

      Warning: When you opened up the MongoDB prompt to initiate the replica set, you may have noticed a warning message like this:

      . . .
              2021-02-03T21:45:48.379+00:00: Access control is not enabled for the database. Read and write access to data and configuration is unrestricted
      . . .
      

      This message indicates that you haven’t yet enabled access control for your database. Per the MongoDB documentation:

      MongoDB uses Role-Based Access Control (RBAC) to govern access to a MongoDB system. A user is granted one or more roles that determine the user’s access to database resources and operations.

      Because access control hasn’t been enabled on any of your MongoDB instances, anyone with access to any of the three servers in the replica set could also gain access to the Mongo instance on that server. This poses an important security risk, since this means they could also gain access to your application data.

      One way to remove this warning and add a layer of security to your replica set is by configuring keyfile authentication. As mentioned in the introduction, though, the MongoDB documentation describes keyfiles as “bare-minimum forms of security” that are “best suited for testing or development environments.”

      Be aware that, for production deployments, the MongoDB documentation instead recommends using x.509 certificates for internal member authentication. The process of obtaining and configuring x.509 certificates comes with a number of caveats and decisions that must be made on a case-by-case basis, which is beyond the scope of this tutorial.

      If you plan on using your replica set for testing or development, we strongly encourage you to follow our tutorial on How To Configure Keyfile Authentication for MongoDB Replica Sets on Ubuntu 20.04.

      Conclusion

      Database replication has found wide use as a strategy to improve performance, availability, and data security, to the point where it’s recommended that any database used in a production environment have some form of replication enabled. Replicas are also versatile, and can take on many different roles in a data architecture, like reporting or disaster recovery. The automatic failover feature found in MongoDB’s replica sets makes them particularly valuable for helping to ensure that your data remains highly available in the event of an outage.

      If you’d like to learn more about MongoDB, we encourage you to check out our entire collection of MongoDB tutorials.




      How To Configure Keyfile Authentication for MongoDB Replica Sets on Ubuntu 20.04


      Introduction

      MongoDB, also known as Mongo, is an open-source document database used in many modern web applications. It is classified as a NoSQL database because it does not rely on the relational database model. Instead, it uses JSON-like documents with dynamic schemas. This means that, unlike relational databases, MongoDB does not require a predefined schema before you add data to a database.

      When you’re working with multiple distributed MongoDB instances, as in the case of a replica set or a sharded database architecture, it’s important to ensure that the communications between them are secure. One way to do this is through keyfile authentication. This involves creating a special file that essentially functions as a shared password for each member in the cluster.

      This tutorial outlines how to update an existing replica set to use keyfile authentication. The procedure involved in this guide will also ensure that the replica set doesn’t go through any downtime, so the data within the replica set will remain available for any clients or applications that need access to it.

      Prerequisites

      To complete this tutorial, you will need:

      • Three servers, each running Ubuntu 20.04. All three of these servers should have an administrative non-root user and a firewall configured with UFW. To set this up, follow our initial server setup guide for Ubuntu 20.04.
      • MongoDB installed on each of your Ubuntu servers. Follow our tutorial on How To Install MongoDB on Ubuntu 20.04, making sure to complete each step on each of your servers.
      • All three of your MongoDB installations configured as a replica set. Follow this tutorial on How To Configure a MongoDB Replica Set on Ubuntu 20.04 to set this up.
      • SSH keys generated for each server. In addition, you should ensure that each server has the other two servers’ public keys added to its authorized_keys file. This is to ensure that each machine can communicate with one another over SSH, which will make it easier to distribute the keyfile to each of them in Step 2. To set these up, follow our guide on How To Set Up SSH Keys on Ubuntu 20.04.

      Please note that, for clarity, this guide will follow the conventions established in the prerequisite replica set tutorial and refer to the three servers as mongo0, mongo1, and mongo2. It will also assume that you’ve completed Step 1 of that guide and configured each server’s hosts file so that the following hostnames will resolve to the given server’s IP address:

      Hostname                  Resolves to
      mongo0.replset.member     mongo0
      mongo1.replset.member     mongo1
      mongo2.replset.member     mongo2

      There are a few instances in this guide in which you must run a command or update a file on only one of these servers. In such cases, this guide will default to using mongo0 in examples and will signify this by showing commands or file changes in a blue background.

      Any commands that must be run or file changes that must be made on multiple servers will have a standard gray background.

      About Keyfile Authentication

      In MongoDB, keyfile authentication relies on Salted Challenge Response Authentication Mechanism (SCRAM), the database system’s default authentication mechanism. SCRAM involves MongoDB reading and verifying credentials presented by a user against a combination of their username, password, and authentication database, all of which are known by the given MongoDB instance. This is the same mechanism used to authenticate users who supply a password when connecting to the database.

      In keyfile authentication, the keyfile acts as a shared password for each member in the cluster. A keyfile must contain between 6 and 1024 characters. Keyfiles can only contain characters from the base64 set, and note that MongoDB strips whitespace characters when reading keys. Beginning in version 4.2 of Mongo, keyfiles use YAML format, allowing you to share multiple keys in a single keyfile.

      Warning: The Community version of MongoDB comes with two authentication methods that can help keep your database secure, keyfile authentication and x.509 authentication. For production deployments that employ replication, the MongoDB documentation recommends using x.509 authentication, and it describes keyfiles as “bare-minimum forms of security” that are “best suited for testing or development environments.”

      The process of obtaining and configuring x.509 certificates comes with a number of caveats and decisions that must be made on a case-by-case basis, meaning that this procedure is beyond the scope of a DigitalOcean tutorial. If you plan on using a replica set in a production environment, we strongly encourage you to review the official MongoDB documentation on x.509 authentication.

      If you plan on using your replica set for testing or development, you can proceed with following this tutorial to add a layer of security to your cluster.

      Step 1 — Creating a User Administrator

      When you enable authentication in MongoDB, it will also enable role-based access control for the replica set. Per the MongoDB documentation:

      MongoDB uses Role-Based Access Control (RBAC) to govern access to a MongoDB system. A user is granted one or more roles that determine the user’s access to database resources and operations.

      When access control is enabled on a MongoDB instance, it means that you won’t be able to access any of the resources on the system unless you’ve authenticated as a valid MongoDB user. Even then, you must authenticate as a user with the appropriate privileges to access a given resource.

      If you don’t create a user for your MongoDB system before enabling keyfile authentication (and, consequently, access control), you will not be locked out of your replica set. You can create a MongoDB user which you can use to authenticate to the set and, if necessary, create other users through Mongo’s localhost exception. This is a special exception MongoDB makes for configurations that have enabled access control but lack users. This exception only allows you to connect to the database on the localhost and then create a user in the admin database.

      However, relying on the localhost exception to create a MongoDB user after enabling authentication means that your replica set will go through a period of downtime, since the replicas will not be able to authenticate their connection until after you create a user. This step outlines how to create a user before enabling authentication to ensure that your replica set remains available. This user will have permissions to create other users on the database, giving you the freedom to create other users with whatever permissions they need in the future. In MongoDB, a user with such permissions is known as a user administrator.

      To begin, connect to the primary member of your replica set. If you aren’t sure which of your members is the primary, you can run the rs.status() method to identify it.

      Run the following mongo command from the bash prompt of any of the Ubuntu servers hosting a MongoDB instance in your replica set. This command’s --eval option instructs mongo not to open up the shell interface environment that appears when you run mongo by itself, and to instead run the command or method, wrapped in single quotes, that follows the --eval argument:

      • mongo --eval 'rs.status()'

      rs.status() returns a lot of information, but the relevant portion of the output is the "members" : array. In the context of MongoDB, an array is a collection of documents held between a pair of square brackets ([ and ]).

      In the "members": array you’ll find a number of documents, each of which contains information about one of the members in your replica set. Within each of these member documents, find the "stateStr" field. The member whose "stateStr" value is "PRIMARY" is the primary member of your replica set. The following example shows a situation where mongo0 is the primary:

      Output

      . . . "members" : [ { "_id" : 0, "name" : "mongo0.replset.member:27017", "health" : 1, "state" : 1, "stateStr" : "PRIMARY", . . . }, . . .

      Once you know which of your replica set members is the primary, SSH into the server hosting that instance. For demonstration purposes, this guide will continue to use examples in which mongo0 is the primary:

      • ssh sammy@mongo0_ip_address

      After logging into the server, connect to MongoDB by opening up the mongo shell environment:
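
      • mongo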

      When creating a user in MongoDB, you must create them within a specific database which will be used as their authentication database. The combination of the user’s name and their authentication database serve as a unique identifier for that user.

      Certain administrative actions are only available to users whose authentication database is the admin database — a special privileged database included in every MongoDB installation — including the ability to create new users. Because the goal of this step is to create a user administrator that can create other users in the replica set, connect to the admin database so you can grant this user the appropriate privileges:
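
      • use admin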

      Output

      switched to db admin

      MongoDB comes installed with a number of JavaScript-based shell methods you can use to manage your database. One of these, the db.createUser method, is used to create new users in the database in which the method is run.

      Initiate the db.createUser method:
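
      • db.createUser(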

      Note: Mongo won’t register the db.createUser method as complete until you enter a closing parenthesis. Until you do, the prompt will change from a greater than sign (>) to an ellipsis (...).

      This method requires you to specify a username and password for the user, as well as any roles you want the user to have. Recall that MongoDB stores its data in JSON-like documents; when you create a new user, all you’re doing is creating a document to hold the appropriate user data as individual fields.

      As with objects in JSON, documents in MongoDB begin and end with curly braces ({ and }). Enter an opening curly brace to begin the user document:
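
      • {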

      Next, enter a user: field, with your desired username as the value in double quotes followed by a comma. The following example specifies the username UserAdminSammy, but you can enter whatever username you like:
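
      • user: "UserAdminSammy",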

      Next, enter a pwd field with the passwordPrompt() method as its value. When you execute the db.createUser method, the passwordPrompt() method will provide a prompt for you to enter your password. This is more secure than the alternative, which is to type out your password in cleartext as you did for your username.

      Note: The passwordPrompt() method is only compatible with MongoDB versions 4.2 and newer. If you’re using an older version of Mongo, then you will have to write out your password in cleartext, similarly to how you wrote out your username:
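
      • pwd: "password",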

      Be sure to follow this field with a comma as well:
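
      • pwd: passwordPrompt(),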

      Then enter a roles field followed by an array detailing the roles you want your administrative user to have. In MongoDB, roles define what actions the user can perform on the resources that they have access to. You can define custom roles yourself, but Mongo also comes with a number of built-in roles that grant commonly-needed permissions.

      Because you’re creating a user administrator, at a minimum you should grant them the built-in userAdminAnyDatabase role over the admin database. This will allow the user administrator to create and modify new users and roles. Because the administrative user has this role in the admin database, this will also grant it superuser access to the entire cluster:

      • roles: [ { role: "userAdminAnyDatabase", db: "admin" } ]

      Following that, enter a closing brace to signify the end of the document:
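
      • }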

      Then enter a closing parenthesis to close and execute the db.createUser method:
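
      • )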

      All together, here’s what your db.createUser method should look like:

      > db.createUser(
      ... {
      ... user: "UserAdminSammy",
      ... pwd: passwordPrompt(),
      ... roles: [ { role: "userAdminAnyDatabase", db: "admin" } ]
      ... }
      ... )
      

      If each line’s syntax is correct, the method will execute properly and you’ll be prompted to enter a password:

      Output

      Enter password:

      Enter a strong password of your choosing. Then, you’ll receive a confirmation that the user was added:

      Output

      Successfully added user: { "user" : "UserAdminSammy", "roles" : [ { "role" : "userAdminAnyDatabase", "db" : "admin" } ] }

      With that, you’ve added a MongoDB user profile which you can use to manage other users and roles on your system. You can test this out by creating another user, as outlined in the remainder of this step.

      Begin by authenticating as the user administrator you just created:

      • db.auth( "UserAdminSammy", passwordPrompt() )

      db.auth() will return 1 if authentication was successful:

      Output

      1

      Note: In the future, if you want to authenticate as the user administrator when connecting to the cluster, you can do so directly from your server prompt with a command like the following:

      • mongo -u "UserAdminSammy" -p --authenticationDatabase "admin"

      In this command, the -u option tells the shell that the following argument is the username which you want to authenticate as. The -p flag tells it to prompt you to enter a password, and the --authenticationDatabase option precedes the name of the user’s authentication database. If you enter an incorrect password or the username and authentication database do not match, you won’t be able to authenticate and you’ll have to try connecting again.

      Also, be aware that in order for you to create new users in the replica set as the user administrator, you must be connected to the set’s primary member.

      The procedure for adding another user is the same as it was for the user administrator. The following example creates a new user with the clusterAdmin role, which means they will be able to perform a number of operations related to replication and sharding. Within the context of MongoDB, a user with these privileges is known as a cluster administrator.

      Having a dedicated user to perform specific functions like this is a good security practice, as it limits the number of privileged users you have on your system. After you enable keyfile authentication later in this tutorial, any client that wants to perform any of the operations allowed by the clusterAdmin role — such as any of the rs. methods, like rs.status() or rs.conf() — must first authenticate as the cluster administrator.

      That said, you can provide whatever role you’d like to this user, and likewise provide them with a different name and authentication database. However, if you want the new user to function as a cluster administrator, then you must grant them the clusterAdmin role within the admin database.

      In addition to creating a user to serve as the cluster administrator, the following method names the user ClusterAdminSammy and uses the passwordPrompt() method to prompt you to enter a password:

      • db.createUser(
      • {
      • user: "ClusterAdminSammy",
      • pwd: passwordPrompt(),
      • roles: [ { role: "clusterAdmin", db: "admin" } ]
      • }
      • )

      Again, if you’re using a version of MongoDB that precedes version 4.2, then you will have to write out your password in cleartext instead of using the passwordPrompt() method.

      If each line’s syntax is correct, the method will execute properly and you’ll be prompted to enter a password:

      Output

      Enter password:

      Enter a strong password of your choosing. Then, you’ll receive a confirmation that the user was added:

      Output

      Successfully added user: { "user" : "ClusterAdminSammy", "roles" : [ { "role" : "clusterAdmin", "db" : "admin" } ] }

      This output confirms that your user administrator is able to create new users and grant them roles. You can now close the MongoDB shell:
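
      • exit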

      Alternatively, you can close the shell by pressing CTRL + C.

      At this point, if you have any clients or applications connected to your MongoDB cluster, it would be a good time to create one or more dedicated users with the appropriate roles which they can use to authenticate to the database. Otherwise, read on to learn how to generate a keyfile, distribute it among the members of your replica set, and then configure each one to require the replica set members to authenticate with the keyfile.

      Step 2 — Creating and Distributing an Authentication Keyfile

      Before creating a keyfile, it can be helpful to create a directory on each server where you will store the keyfile in order to keep things organized. Run the following command, which creates a directory named mongo-security in the administrative Ubuntu user’s home directory, on each of your three servers:
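
      • mkdir ~/mongo-security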

      Then generate a keyfile on one of your servers. You can do this on any one of your servers but, for illustration purposes, this guide will generate the keyfile on mongo0.

      Navigate to the mongo-security directory you just created:
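
      • cd ~/mongo-security/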

      Within that directory, create a keyfile with the following openssl command:

      • openssl rand -base64 768 > keyfile.txt

      Take note of this command’s arguments:

      • rand: instructs OpenSSL to generate pseudo-random bytes of data
      • -base64: specifies that the command should use base64 encoding to represent the pseudo-random data as printable text. This is important because, as mentioned previously, MongoDB keyfiles can only contain characters in the base64 set
      • 768: the number of bytes the command should generate. In base64 encoding, three binary bytes of data are represented as four characters. Because MongoDB keyfiles can have a maximum of 1024 characters, 768 is the maximum number of bytes you can generate for a valid keyfile

      Following this command’s 768 argument is a greater-than sign (>). This redirects the command’s output into a new file named keyfile.txt which will serve as your keyfile. Feel free to name the keyfile something other than keyfile.txt if you’d like, but be sure to change the filename whenever it appears in later commands.

      Next, modify the keyfile’s permissions so that only the owner has read access:
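
      • chmod 400 keyfile.txt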

      Following this, distribute the keyfile to the other two servers hosting the MongoDB instances in your replica set. Assuming you followed the prerequisite guide on How To Set Up SSH Keys, you can do so with the scp command:

      • scp keyfile.txt sammy@mongo1.replset.member:/home/sammy/mongo-security
      • scp keyfile.txt sammy@mongo2.replset.member:/home/sammy/mongo-security

      Notice that each of these commands copies the keyfile directly to the ~/mongo-security/ directories you created previously on mongo1 and mongo2. Be sure to change sammy to the name of the administrative Ubuntu user profile you created on each server.

      Next, change the file’s owner to the mongodb user profile. This is a special user that was created when you installed MongoDB, and it’s used to run the mongod service. This user must have access to the keyfile in order for MongoDB to use it for authentication.

      Run the following command on each of your servers to change the keyfile’s owner to the mongodb user account:

      • sudo chown mongodb:mongodb ~/mongo-security/keyfile.txt

      After changing the keyfiles’ owner on each server, you’re ready to reconfigure each of your MongoDB instances to enforce keyfile authentication.

      Step 3 — Enabling Keyfile Authentication

      Now that you’ve generated a keyfile and distributed it to each of the servers in your replica set, you can update the MongoDB configuration file on each server to enforce keyfile authentication.

      In order to avoid any downtime while configuring the members of your replica set to require authentication, this step involves reconfiguring the secondary members of the set first. Then, you’ll direct your primary member to step down and become a secondary member. This will cause the secondary members to hold an election to select a new primary, keeping your cluster available to whatever clients or applications need access to it. You’ll then reconfigure the former primary node to enable authentication.

      On each of your servers hosting a secondary member of your replica set, open up MongoDB’s configuration file with your preferred text editor:

      • sudo nano /etc/mongod.conf

      Within the file, find the security section. It will look like this by default:

      /etc/mongod.conf

      . . .
      #security:
      . . .
      

      Uncomment this line by removing the pound sign (#). Then, on the next line, add a keyFile: directive followed by the full path to the keyfile you created in the previous step:

      /etc/mongod.conf

      . . .
      security:
        keyFile: /home/sammy/mongo-security/keyfile.txt
      . . .
      

      Note that there are two spaces at the beginning of this new line. These are necessary for the configuration file to be read correctly. When you enter this line in your own configuration files, make sure that the path you provide reflects the actual path of the keyfile on each server.

      Below the keyFile directive, add a transitionToAuth directive with a value of true. When set to true, this configuration option allows the MongoDB instance to accept both authenticated and non-authenticated connections. This is useful when reconfiguring a replica set to enforce authentication, as it will ensure that your data remains available as you restart each member of the set:

      /etc/mongod.conf

      . . .
      security:
        keyFile: /home/sammy/mongo-security/keyfile.txt
        transitionToAuth: true
      . . .
      

      Again, make sure that you include two spaces before the transitionToAuth directive.

      After making those changes, save and close the file. If you used nano to edit it, you can do so by pressing CTRL + X, Y, and then ENTER.

      Then restart the mongod service on both of the secondary instances’ servers to immediately put these changes into effect:

      • sudo systemctl restart mongod

      With that, you’ve configured keyfile authentication for the secondary members of your replica set. At this point, both authenticated and non-authenticated users can access these members without restriction.

      Next, you’ll repeat this procedure on the primary member. Before doing so, though, you must step down the member so it’s no longer the primary. To do this, open up the MongoDB shell on the server hosting the primary member. For illustration purposes, this guide will again assume this is mongo0:
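
      • mongo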

      From the prompt, run the rs.stepDown() method. This will instruct the primary to become a secondary member, and will cause the current secondary members to hold an election to determine which will serve as the new primary:
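
      • rs.stepDown()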

      If the method returns "ok" : 1 in the output, it means the primary member successfully stepped down to become a secondary:

      Output

      { "ok" : 1, "$clusterTime" : { "clusterTime" : Timestamp(1614795467, 1), "signature" : { "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="), "keyId" : NumberLong(0) } }, "operationTime" : Timestamp(1614795467, 1) }

      After stepping down the primary, you can close the Mongo shell:
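
      • exit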

      Next, open up the MongoDB configuration file on this server:

      • sudo nano /etc/mongod.conf

      Find the security section and uncomment the security header by removing the pound sign. Then add the same keyFile and transitionToAuth directives you added to the other MongoDB instances. After making these changes, the security section will look like this:

      /etc/mongod.conf

      . . .
      security:
        keyFile: /home/sammy/mongo-security/keyfile.txt
        transitionToAuth: true
      . . .
      

      Again, make sure that the file path after the keyFile directive reflects the keyfile’s actual location on this server.

      When finished, save and close the file. Then restart the mongod process:

      • sudo systemctl restart mongod

      Following that, all of your MongoDB instances are able to accept both authenticated and non-authenticated connections. In the final step of this guide, you’ll configure your instances to require users to authenticate before performing privileged actions.

      Step 4 — Restarting Each Member Without transitionToAuth to Enforce Authentication

      At this point, each of your MongoDB instances is configured with the transitionToAuth directive set to true. This means that even though each server now uses the keyfile you created to authenticate connections internally, the instances are still able to accept non-authenticated connections.

      To change this and require each member to enforce authentication, reopen the mongod.conf file on each server:

      • sudo nano /etc/mongod.conf

      Find the security section and disable the transitionToAuth directive by commenting out that line, prepending it with a pound sign (#):

      /etc/mongod.conf

      . . .
      security:
        keyFile: /home/sammy/mongo-security/keyfile.txt
        #transitionToAuth: true
      . . .
      

      After disabling the transitionToAuth directive in each instance’s configuration file, save and close each file.

      Then, restart the mongod service on each server:

      • sudo systemctl restart mongod

      Following that, each of the MongoDB instances in your replica set will require you to authenticate to perform privileged actions.

      To test this, try running a MongoDB method that works when invoked by an authenticated user that has the appropriate privileges. Try running the following command from any of your Ubuntu servers’ prompts:

      • mongo --eval 'rs.status()'

      Even though you ran this method successfully in Step 1, now that you’ve enabled keyfile authentication the rs.status() method can only be run by a user that has been granted the clusterAdmin or clusterManager role. Regardless of whether you run this command on a server hosting the primary member or one of the secondary members, it will not work because you have not authenticated:

      Output

      . . .
      MongoDB server version: 4.4.4
      {
          "operationTime" : Timestamp(1616184183, 1),
          "ok" : 0,
          "errmsg" : "command replSetGetStatus requires authentication",
          "code" : 13,
          "codeName" : "Unauthorized",
          "$clusterTime" : {
              "clusterTime" : Timestamp(1616184183, 1),
              "signature" : {
                  "hash" : BinData(0,"huJUmB/lrrxpx9YfnONM4mayJwo="),
                  "keyId" : NumberLong("6941116945081040899")
              }
          }
      }

      Recall that, after enabling access control, all of the cluster administration methods (including rs. methods like rs.status()) will only work when invoked by an authenticated user that has been granted the appropriate cluster management roles. If you’ve created a cluster administrator — as outlined in Step 1 — and authenticate as that user, then this method will work as expected:

      • mongo -u "ClusterAdminSammy" -p --authenticationDatabase "admin" --eval 'rs.status()'

      After entering the user’s password when prompted, you will see the rs.status() method’s output:

      Output

      . . .
      MongoDB server version: 4.4.4
      {
          "set" : "rs0",
          "date" : ISODate("2021-03-19T20:21:45.528Z"),
          "myState" : 2,
          "term" : NumberLong(4),
          "syncSourceHost" : "mongo1.replset.member:27017",
          "syncSourceId" : 1,
          "heartbeatIntervalMillis" : NumberLong(2000),
          "majorityVoteCount" : 2,
      . . .

      This confirms that the replica set is enforcing authentication, and that you’re able to authenticate successfully.
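As an optional extra check, you can confirm which user a session is authenticated as. The following sketch assumes the ClusterAdminSammy user from the earlier example; MongoDB’s connectionStatus command reports the authenticated users for the current connection:

      • mongo -u "ClusterAdminSammy" -p --authenticationDatabase "admin" --eval 'db.runCommand({connectionStatus: 1})'

The authInfo.authenticatedUsers field in the output will list the cluster administrator, confirming that your credentials were accepted.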

      Conclusion

      By completing this tutorial, you created a keyfile with OpenSSL and then configured a MongoDB replica set to require its members to use it for internal authentication. You also created a user administrator which will allow you to manage users and roles in the future. Throughout all of this, your replica set will not have gone through any downtime and your data will have remained available to your clients and applications.

      If you’d like to learn more about MongoDB, we encourage you to check out our entire library of MongoDB content.




      How To Install and Configure LXD on Ubuntu 20.04


      The author selected the Free and Open Source Fund to receive a donation as part of the Write for DOnations program.

      Introduction

A Linux container is a set of processes that is separated from the rest of the system. To the end user, a Linux container functions as a virtual machine, but it’s much more lightweight. You don’t have the overhead of running an additional Linux kernel, and the containers don’t require any CPU hardware virtualization support. This means you can create more containers than virtual machines on the same server.

Imagine that you have a server that should run multiple websites for your customers. On the one hand, each website could be a virtual host (or server block) on the same instance of the Apache or Nginx web server. On the other hand, when using virtual machines, you would create a separate nested virtual machine for each website. Linux containers sit somewhere between virtual hosts and virtual machines.

      LXD lets you create and manage these containers. LXD provides a hypervisor service to manage the entire life cycle of containers. In this tutorial, you’ll configure LXD and use it to run Nginx in a container. You’ll then route traffic from the internet to the container to make a sample web page accessible.

      Prerequisites

      To complete this tutorial, you’ll need the following:

Note: Starting with Ubuntu 20.04, LXD is officially available as a snap package. This is a newer package format with several advantages; for example, a snap package can be installed on any Linux distribution that supports snaps. We suggest using a server with at least 2GB of RAM when running the LXD snap package. The following table summarizes the features of the LXD snap package:

Feature                                    snap package
Available LXD versions                     2.0, 3.0, 4.0, 4.x
Memory requirements                        moderate, for the snapd service; a server with 2GB RAM is suggested
Upgrade considerations                     LXD upgrades can be deferred for up to 60 days
Upgrading from the other package format    can upgrade from deb to snap

      Follow the rest of this tutorial to use LXD from the snap package in Ubuntu 20.04. If, however, you want to use the LXD deb package, see our tutorial How To Install and Use LXD on Ubuntu 18.04.
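Note: If your Ubuntu 20.04 server does not have the LXD snap preinstalled (some minimal images omit it), you can likely install it yourself with snap. This is a minimal sketch that assumes the snapd service is present, as it is on standard Ubuntu 20.04 installations:

      • sudo snap install lxd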

      Step 1 — Preparing Your Environment for LXD

      Before you configure and run LXD, you will prepare your server’s environment. This involves adding your sudo user to the lxd group and configuring your storage backend.

      Adding your non-root account to the lxd Unix group

When setting up your non-root account, add it to the lxd group. The adduser command takes the user account and the Unix group as arguments and adds the user account to that existing Unix group. This example assumes the sammy user from the prerequisite tutorials; substitute your own username:

      • sudo adduser sammy lxd

      Now apply the new membership by starting a new login shell for your user:

      • su - sammy

      Enter your password and press ENTER.

Finally, confirm that your user is now a member of the lxd group. The id -nG command prints the names of the groups your user belongs to:

      • id -nG

      The output should list lxd among your user’s groups:

      Output

      sammy sudo lxd

      Now you are ready to continue configuring LXD.

      Preparing the storage backend

      To begin, you will configure the storage backend.

      The recommended storage backend for LXD when you run it on Ubuntu is the ZFS filesystem. ZFS also works very well with DigitalOcean Block Storage. To enable ZFS support in LXD, first update your package list and then install the zfsutils-linux auxiliary package:

      • sudo apt update
      • sudo apt install -y zfsutils-linux
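
      Optionally, you can verify that the ZFS userland tools were installed correctly. The zfs version subcommand, available in the zfsutils-linux package that ships with Ubuntu 20.04, prints the installed ZFS version:

      • zfs version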

We are almost ready to run the LXD initialization script.

      Before you do, you must identify and take note of the device name for your block storage.

To do so, use ls to check the /dev/disk/by-id/ directory:

      • ls -l /dev/disk/by-id/

In this specific example, the full path of the device name is /dev/disk/by-id/scsi-0DO_Volume_volume-fra1-01:

      Output

total 0
lrwxrwxrwx 1 root root 9 Sep 16 20:30 scsi-0DO_Volume_volume-fra1-01 -> ../../sda

      Note down the full file path for your storage device. You will use it in the following step when you configure LXD.

      Step 2 — Initializing and Configuring LXD

      LXD is available as a snap package in Ubuntu 20.04. It comes pre-installed, but you must configure it.

First, verify that the LXD snap package is installed. The snap list command shows the installed snap packages:

      • snap list

      Ubuntu 20.04 preinstalls LXD 4.0.3, and it is tracking the 4.0/stable channel. LXD 4.0 is supported for five years (until the year 2025). It will only receive security updates:

      Output of the "snap list" command — Listing the installed snap packages

Name    Version   Rev    Tracking       Publisher   Notes
core18  20200724  1885   latest/stable  canonical✓  base
lxd     4.0.3     16922  4.0/stable/…   canonical✓  -
snapd   2.45.3.1  8790   latest/stable  canonical✓  snapd

      To find more information about the LXD installed snap package, run snap info lxd. You will be able to see the available versions, including when the package was last updated.

      You will now configure LXD.

      Configuring Storage Options for LXD

Start the LXD initialization process using the sudo lxd init command:

      • sudo lxd init

      First, the program will ask if you want to enable LXD clustering. For the purposes of this tutorial, press ENTER to accept the default no, or type no and then press ENTER. LXD clustering is an advanced topic that enables high availability for your LXD setup and requires at least three LXD servers running in a cluster:

      Output

      Would you like to use LXD clustering? (yes/no) [default=no]: no

      The next six prompts deal with the storage pool. Give the following responses:

      • Press ENTER to configure a new storage pool.
      • Press ENTER to accept the default storage pool name.
      • Press ENTER to accept the default zfs storage backend.
      • Press ENTER to create a new ZFS pool.
      • Type yes to use an existing block device.
      • Lastly, type the full path to the block storage device name (This is what you recorded earlier. It should be something like: /dev/disk/by-id/device_name).

      Your answers will look like the following:

      Output

Do you want to configure a new storage pool? (yes/no) [default=yes]: yes
Name of the new storage pool [default=default]: default
Name of the storage backend to use (btrfs, dir, lvm, zfs) [default=zfs]: zfs
Create a new ZFS pool? (yes/no) [default=yes]: yes
Would you like to use an existing block device? (yes/no) [default=no]: yes
Path to the existing block device: /dev/disk/by-id/scsi-0DO_Volume_volume-fra1-01

      You have now configured the storage backend for LXD. Continuing with LXD’s init script, you will now configure some networking options.

      Configuring Networking Options for LXD

LXD now asks whether you want to connect to a MAAS (Metal as a Service) server. MAAS is software that makes a bare-metal server appear as, and be handled like, a virtual machine.

      We are running LXD in standalone mode, therefore accept the default and answer no:

      Output

      Would you like to connect to a MAAS server? (yes/no) [default=no]: no

      You are then asked to configure a network bridge for LXD containers. This enables the following features:

      • Each container automatically gets a private IP address.
• Containers can communicate with each other over the private network.
      • Each container can initiate connections to the internet.
      • Each container remains inaccessible from the internet by default; you cannot initiate a connection from the internet and reach a container unless you explicitly enable it. You’ll learn how to allow access to a specific container in the next step.

      When asked to create a new local network bridge, choose yes:

      Output

      Would you like to create a new local network bridge? (yes/no) [default=yes]: yes

      Then accept the default name, lxdbr0:

      Output

      What should the new bridge be called? [default=lxdbr0]: lxdbr0

      Accept the automated selection of private IP address range for the bridge:

      Output

What IPv4 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]: auto
What IPv6 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]: auto

      Finally, LXD asks the following miscellaneous questions:

      When asked if you want to manage LXD over the network, press ENTER or answer no:

      Output

      Would you like LXD to be available over the network? (yes/no) [default=no]: no

      When asked if you want to update stale container images automatically, press ENTER or answer yes:

      Output

      Would you like stale cached images to be updated automatically? (yes/no) [default=yes] yes

When asked if you want to view and keep the YAML configuration you just created, answer yes if you do. Otherwise, press ENTER or answer no:

      Output

      Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]: no

A script will run in the background. It is normal not to receive any output.
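
      If you’d like to review what lxd init created before moving on, LXD can print the storage pools and networks it manages. These read-only commands assume the default names chosen above (the default storage pool and the lxdbr0 bridge):

      • lxc storage list
      • lxc network show lxdbr0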

      You have now configured your network and storage options for LXD. Next you will create your first LXD container.

Step 3 — Creating and Configuring an LXD Container

      Now that you have successfully configured LXD, you are ready to create and manage your first container. In LXD, you manage containers using the lxc command followed by an action, such as list, launch, start, stop and delete.

Use lxc list to view the available installed containers:

      • lxc list

      Since this is the first time that the lxc command communicates with the LXD hypervisor, it shows some information about how to launch a container. Finally, the command shows an empty list of containers. This is expected because we haven’t created any yet:

      Output of the "lxd list" command

To start your first container, try: lxc launch ubuntu:18.04

+------+-------+------+------+------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+------+-------+------+------+------+-----------+

Now create a container that runs Nginx. To do so, first use the lxc launch command to create and start an Ubuntu 20.04 container named webserver.

      Create the webserver container. The 20.04 in ubuntu:20.04 is a shortcut for Ubuntu 20.04. ubuntu: is the identifier for the preconfigured repository of LXD images. You could also use ubuntu:focal for the image name:

      • lxc launch ubuntu:20.04 webserver

Note: You can find the full list of all available Ubuntu images by running lxc image list ubuntu:, and images for other Linux distributions by running lxc image list images:. Both ubuntu: and images: are repositories of container images. For each container image, you can get more information with the command lxc image info ubuntu:20.04.

      Because this is the first time you’ve created a container, this command downloads the container image from the internet and caches it. You’ll see this output once your new container finishes downloading:

      Output

Creating webserver
Starting webserver

With the webserver container started, use the lxc list command to show information about it. The --columns ns4 flag shows only the columns for name, state, and IPv4 address. The default lxc list command shows three more columns: the IPv6 address, whether the container is persistent or ephemeral, and whether there are snapshots available for each container:

      • lxc list --columns ns4

The output shows a table with the name of each container, its current state, and its IPv4 address:

      Output

+-----------+---------+------------------------------------+
|   NAME    |  STATE  |                IPV4                |
+-----------+---------+------------------------------------+
| webserver | RUNNING | your_webserver_container_ip (eth0) |
+-----------+---------+------------------------------------+

      LXD’s DHCP server provides this IP address and in most cases it will remain the same even if the server is rebooted. However, in the following steps you will create iptables rules to forward connections from the internet to the container. Therefore, you should instruct LXD’s DHCP server to always give the same IP address to the container.

      The following set of commands will configure the container to obtain a static IP assignment. First, you will override the network configuration for the eth0 device that is inherited from the default LXD profile. This allows you to set a static IP address, which ensures proper communication of web traffic into and out of the container.

Specifically, lxc config device is the command for configuring a container’s devices. The first command below uses the override sub-action to override the eth0 device that the webserver container inherits from the default profile. The second uses the set sub-action to set the ipv4.address field of the webserver container’s eth0 device to the IP address that the DHCP server assigned in the beginning.

      Run the first config command:

      • lxc config device override webserver eth0

      You will receive an output like this:

      Output

      Device eth0 overridden for webserver

      Now set the static IP:

      • lxc config device set webserver eth0 ipv4.address your_webserver_container_ip

      If the command is successful, you will receive no output.
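
      As an optional check, you can print the container’s local device configuration to confirm that the override and the static address were recorded. The lxc config device show command lists the devices defined directly on the container:

      • lxc config device show webserver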

Restart the container:

      • lxc restart webserver

      Now check the status of the container:

      • lxc list

      You should see that the container is RUNNING and the IPV4 address is your static address.

      You are ready to install and configure Nginx inside the container.

      Step 3 — Configuring Nginx Inside an LXD Container

      In this step you will connect to the webserver container and configure the web server.

Connect to the container with the lxc shell command, which takes the name of the container and starts a shell inside it:

      • lxc shell webserver

      Once inside the container, your shell prompt will look like the following:

      root@webserver:~#

      This shell, even if it is a root shell, is limited to the container. Anything that you run in this shell stays in the container and cannot escape to the host server.
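
      For example, running hostname in this shell should print the container’s name, webserver, rather than the host server’s hostname:

      • hostname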

      Note: When getting a shell into a container, you may see a warning such as mesg: ttyname failed: No such device. This message is produced when the shell in the container tries to run the command mesg from the configuration file /root/.profile. You can safely ignore it. To avoid seeing it, you may remove the command mesg n || true from /root/.profile.

      Once inside your container, update the package list and install Nginx:

      • apt update
      • apt install nginx

      With Nginx installed, you will now edit the default Nginx web page. Specifically, you will add two lines of text so that it is clear that this site is hosted inside the webserver container.

      Using nano or your preferred editor, open the file /var/www/html/index.nginx-debian.html:

      • nano /var/www/html/index.nginx-debian.html

      Add the two highlighted phrases to the file:

      /var/www/html/index.nginx-debian.html

<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx on LXD container webserver!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx on LXD container webserver!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
...

      You have edited the file in two places and specifically added the text on LXD container webserver. Save the file and exit your text editor.

Now log out of the container:

      • logout

      Once the server’s default prompt returns, use curl to test that the web server in the container is working. To do this, you’ll need the IP address of the web container, which you found using the lxc list command earlier.

      Use curl to test your web server:

      • curl http://your_webserver_container_ip

      You will receive the Nginx default HTML welcome page as output. Note that it includes your edits:

      Output

<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx on LXD container webserver!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx on LXD container webserver!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
...

      The web server is working but you can only access it while on the host using the private IP. In the next step, you will route external requests to this container so the world can access your web site through the internet.

Step 5 — Forwarding Incoming Connections to the Nginx Container Using LXD

      Now that you have configured Nginx, it’s time to connect the webserver container to the internet. To begin, you need to set up the server to forward any connections that it may receive on port 80 to the webserver container. To do this, you’ll create an iptables rule to forward network connections. You can learn more about IPTables in our tutorials, How the IPtables Firewall Works and IPtables Essentials: Common Firewall Rules and Commands.

      This iptables command requires two IP addresses: the public IP address of the server (your_server_ip) and the private IP address of the webserver container (your_webserver_container_ip), which you can obtain using the lxc list command.

      Execute this command to create a new IPtables rule:

      • PORT=80 PUBLIC_IP=your_server_ip CONTAINER_IP=your_container_ip IFACE=eth0 sudo -E bash -c 'iptables -t nat -I PREROUTING -i $IFACE -p TCP -d $PUBLIC_IP --dport $PORT -j DNAT --to-destination $CONTAINER_IP:$PORT -m comment --comment "forward to the Nginx container"'

      Let’s study that command:

      • -t nat specifies that we’re using the nat table for address translation.
      • -I PREROUTING specifies that we’re adding the rule to the PREROUTING chain.
      • -i $IFACE specifies the interface eth0, which is the default public network interface on the host for Droplets.
      • -p TCP says we’re using the TCP protocol.
      • -d $PUBLIC_IP specifies the destination IP address for the rule.
• --dport $PORT specifies the destination port (such as 80).
      • -j DNAT says that we want to perform a jump to Destination NAT (DNAT).
      • --to-destination $CONTAINER_IP:$PORT says that we want the request to go to the IP address of the specific container and the destination port.

Note: You can reuse this command to set up additional forwarding rules; just change the values assigned to the variables PORT, PUBLIC_IP, CONTAINER_IP, and IFACE at the start of the command.
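
      For instance, if you later serve HTTPS from the same container, the rule for port 443 would reuse the command above with only the port changed. This sketch keeps the same placeholder IP addresses:

      • PORT=443 PUBLIC_IP=your_server_ip CONTAINER_IP=your_container_ip IFACE=eth0 sudo -E bash -c 'iptables -t nat -I PREROUTING -i $IFACE -p TCP -d $PUBLIC_IP --dport $PORT -j DNAT --to-destination $CONTAINER_IP:$PORT -m comment --comment "forward HTTPS to the Nginx container"'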

      Now list your IPTables rules:

      • sudo iptables -t nat -L PREROUTING

      You’ll see output like this:

      Output

Chain PREROUTING (policy ACCEPT)
target     prot opt source               destination
DNAT       tcp  --  anywhere             your_server_ip       tcp dpt:http /* forward to the Nginx container */ to:your_container_ip:80
...

Now test that the webserver is accessible from the internet.

      Use the curl command from your local machine to test the connections:

      • curl --verbose 'http://your_server_ip'

      You’ll see the headers followed by the contents of the web page you created in the container:

      Output

*   Trying your_server_ip...
* Connected to your_server_ip (your_server_ip) port 80 (#0)
> GET / HTTP/1.1
> User-Agent: curl/7.47.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Server: nginx/1.10.0 (Ubuntu)
...
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx on LXD container webserver!</title>
<style>
    body {
...

      This confirms that the requests are going to the container.

      Finally, you will save the firewall rule so that it reapplies after a reboot.

      To do so, first install the iptables-persistent package:

      • sudo apt install iptables-persistent

      When installing the package, the application will prompt you to save the current firewall rules. Accept and save all current rules.

      When you reboot your machine, the firewall rule will load. In addition, the Nginx service in your LXD container will automatically restart.

You’ve successfully configured LXD. In the final step you will learn how to stop and remove the container.

Step 6 — Stopping and Removing Containers Using LXD

      You may decide that you want to take down the container and delete it. In this step you will stop and remove your container.

First, stop the container:

      • lxc stop webserver

Use the lxc list command to verify the status:

      • lxc list

      You will see that the container’s state reads STOPPED:

      Output

+-----------+---------+------+------+------------+-----------+
|   NAME    |  STATE  | IPV4 | IPV6 |    TYPE    | SNAPSHOTS |
+-----------+---------+------+------+------------+-----------+
| webserver | STOPPED |      |      | PERSISTENT | 0         |
+-----------+---------+------+------+------------+-----------+

To remove the container, use lxc delete:

      • lxc delete webserver

      Running lxc list again shows that there’s no container running:

      • lxc list

      The command will output the following:

      +------+-------+------+------+------+-----------+
      | NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
      +------+-------+------+------+------+-----------+
      

      Use the lxc help command to see additional options.

      To remove the firewall rule that routes traffic to the container, first locate the rule in the list of rules with this command, which associates a line number with each rule:

      • sudo iptables -t nat -L PREROUTING --line-numbers

      You’ll see your rule, prefixed with a line number, like this:

      Output

Chain PREROUTING (policy ACCEPT)
num  target     prot opt source               destination
1    DNAT       tcp  --  anywhere             your_server_ip       tcp dpt:http /* forward to the Nginx container */ to:your_container_ip

      Use that line number to remove the rule:

      • sudo iptables -t nat -D PREROUTING 1

      List the rules again to ensure removal:

      • sudo iptables -t nat -L PREROUTING --line-numbers

      The rule is removed:

      Output

Chain PREROUTING (policy ACCEPT)
num  target     prot opt source               destination

      Now save the changes so that the rule doesn’t come back when you restart your server:

      • sudo netfilter-persistent save

      You can now bring up another container with your own settings and add a new firewall rule to forward traffic to it.

      Conclusion

In this tutorial, you installed and configured LXD. You then created a website using Nginx running inside an LXD container and made it publicly available using iptables.

      From here, you could configure more websites, each confined to its own container, and use a reverse proxy to direct traffic to the appropriate container. The tutorial How to Host Multiple Web Sites with Nginx and HAProxy Using LXD on Ubuntu 16.04 walks you through that setup.

      See the LXD reference documentation for more information on how to use LXD.

      To practice with LXD, you can try LXD online and follow the web-based tutorial.

      To get user support on LXD, visit the LXD discussion forum.


