      How To Install and Configure Elasticsearch on CentOS 8



      The author selected the COVID-19 Relief Fund to receive a donation as part of the Write for DOnations program.

      Introduction

      Elasticsearch is a platform for the distributed search and analysis of data in real time. Its popularity is due to its ease of use, powerful features, and scalability.

      Elasticsearch supports RESTful operations. This means that you can use HTTP methods (GET, POST, PUT, DELETE, etc.) in combination with an HTTP URI (/collection/entry) to manipulate your data. The intuitive RESTful approach is both developer and user friendly, which is one of the reasons for Elasticsearch’s popularity.

      Elasticsearch is free and open-source software with a solid company behind it — Elastic. This combination makes it suitable for many use cases, from personal testing to corporate integration.

      This article will introduce you to Elasticsearch and show you how to install, configure, and start using it.

      Prerequisites

      To follow this tutorial you will need the following:

      Step 1 — Installing Java on CentOS 8

Elasticsearch is written in the Java programming language. Your first task, then, is to install a Java Runtime Environment (JRE) on your server. You will use the native CentOS OpenJDK package for the JRE. This JRE is free, well-supported, and automatically managed through DNF, the CentOS 8 package manager.

      Install the latest version of OpenJDK 8:

      • sudo dnf install java-1.8.0-openjdk.x86_64 -y

Now verify your installation:

      • java -version

      The command will create an output like this:

      Output

openjdk version "1.8.0_262"
      OpenJDK Runtime Environment (build 1.8.0_262-b10)
      OpenJDK 64-Bit Server VM (build 25.262-b10, mixed mode)

      When you advance in using Elasticsearch and you start looking for better Java performance and compatibility, you may opt to install Oracle’s proprietary Java (Oracle JDK 8). For more information, reference our article on How To Install Java on CentOS and Fedora.

      Step 2 — Downloading and Installing Elasticsearch on CentOS 8

      You can download Elasticsearch directly from elastic.co in zip, tar.gz, deb, or rpm packages. For CentOS, it’s best to use the native rpm package, which will install everything you need to run Elasticsearch.

      At the time of this writing, the latest Elasticsearch version is 7.9.2.

      From a working directory, download the program:

      • sudo rpm -ivh https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.9.2-x86_64.rpm

Elasticsearch will install in /usr/share/elasticsearch/, with its configuration files placed in /etc/elasticsearch and its systemd service registered as elasticsearch.service.

      To make sure Elasticsearch starts and stops automatically with the server, enable its systemd service:

      • sudo systemctl daemon-reload && sudo systemctl enable elasticsearch.service

      With Elasticsearch installed, you will now configure a few important settings.

      Step 3 — Configuring Elasticsearch on CentOS 8

      Now that you have installed Elasticsearch and its Java dependency, it is time to configure Elasticsearch.

      The Elasticsearch configuration files are in the /etc/elasticsearch directory. The ones we’ll review and edit are:

      • elasticsearch.yml — Configures the Elasticsearch server settings. This is where most options are stored, which is why we are mostly interested in this file.

      • jvm.options — Provides configuration for the JVM such as memory settings.

      The first variables to customize on any Elasticsearch server are node.name and cluster.name in elasticsearch.yml. Let’s do that now.

As their names suggest, node.name specifies the name of the server (node), and cluster.name the name of the cluster to which the node belongs. If you don’t customize these variables, node.name will default to the server hostname, and cluster.name will default to the name of the default cluster.

The cluster.name value is used by the auto-discovery feature of Elasticsearch to automatically discover and associate Elasticsearch nodes to a cluster. Thus, if you don’t change the default value, unwanted nodes found on the same network might end up in your cluster.

      Let’s start editing the main elasticsearch.yml configuration file.

      Open it using nano or your preferred text editor:

      • sudo nano /etc/elasticsearch/elasticsearch.yml

      Remove the # character at the beginning of the lines for node.name and cluster.name to uncomment them, and then change their values. Your first configuration changes in the /etc/elasticsearch/elasticsearch.yml file will look like this:

      /etc/elasticsearch/elasticsearch.yml

      ...
      node.name: "My First Node"
      cluster.name: mycluster1
      ...
      

The networking settings are also found in elasticsearch.yml. By default, Elasticsearch listens on localhost on port 9200, so that only clients from the same server can connect. From a security standpoint, you should leave these settings unchanged, because the free and open-source edition of Elasticsearch doesn’t offer authentication features.

Another important setting is the node.roles property. You can set a node’s roles to master (master-eligible), data, or ingest.

      Master-eligible nodes are responsible for the cluster’s health and stability. In large deployments with many cluster nodes, it’s recommended to have more than one dedicated master-only node. Typically, a dedicated master node neither stores data nor creates indices, so there is no chance of it becoming overloaded and endangering the cluster’s health.

      The data role defines the nodes that will store the data. Even if a data node is overloaded, the cluster health shouldn’t be affected seriously, provided there are other nodes to take the additional load.

      Lastly, the ingest role allows a node to accept and process data streams. In larger setups, there should be dedicated ingest nodes in order to avoid possible overload on the master and data nodes.

Note: One node may have one or more roles, allowing scalability, redundancy, and high availability of the Elasticsearch setup. By default, all of these roles are assigned to the node. This is suitable for a single-node Elasticsearch setup, as in the example scenario described in this article. Therefore, you don’t have to change the roles. Still, if you want to change them, such as dedicating a node as a master, you can do so by changing /etc/elasticsearch/elasticsearch.yml like this:

      /etc/elasticsearch/elasticsearch.yml

      ...
      node.roles: [ master ]
      ...
      

Another setting to consider changing is path.data. This determines the path where data is stored, and the default path is /var/lib/elasticsearch. In a production environment, it’s recommended that you use a dedicated partition and mount point for storing Elasticsearch data. Ideally, this dedicated partition will be a separate storage medium that provides better performance and data isolation. You can specify a different path.data path by uncommenting the path.data line and changing its value like this:

      /etc/elasticsearch/elasticsearch.yml

      ...
      path.data: /media/different_media
      ...
      

      Now that you have made all your changes, save and close elasticsearch.yml.

      You must also edit your configurations in jvm.options.

Recall that Elasticsearch runs on the JVM; essentially, it’s a Java application. Like any Java application, it has JVM settings that can be configured in the file /etc/elasticsearch/jvm.options. Two of the most important settings, especially in regards to performance, are Xms and Xmx, which define the minimum (Xms) and maximum (Xmx) memory allocation.

By default, both are set to 1GB, but that is rarely optimal. What’s more, if your server has only 1GB of RAM, you won’t be able to start Elasticsearch with the default settings, because the operating system itself needs at least 100MB, leaving less than 1GB available to dedicate to Elasticsearch.

Unfortunately, there is no universal formula for calculating the memory settings. Naturally, the more memory you allocate, the better your performance, but make sure that enough memory is left for the rest of the processes on the server. For example, if your machine has 1GB of RAM, you could set both Xms and Xmx to 512MB, leaving the other 512MB for the rest of the processes. Note that Xms and Xmx are usually set to the same value in order to avoid the performance penalty of resizing the JVM heap at runtime.
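As a rough illustration of this sizing logic, here is a small shell sketch. The helper name, the half-of-RAM heuristic, and the ~31GB cap (beyond which the JVM loses compressed object pointers) are our own illustrative choices, not values prescribed by Elasticsearch:

```shell
#!/usr/bin/env bash
# heap_for MEM_MB — suggest a heap size (in MB) for a machine with MEM_MB of RAM:
# half the RAM for Elasticsearch, the rest for the OS and other processes,
# capped at 31744 MB (~31GB) so the JVM keeps using compressed object pointers.
heap_for() {
  local mem_mb=$1
  local heap=$(( mem_mb / 2 ))
  local cap=31744
  if (( heap > cap )); then
    heap=$cap
  fi
  echo "$heap"
}

# On a 1GB machine this suggests 512MB, matching the -Xms512m/-Xmx512m example:
heap_for 1024    # prints 512
heap_for 131072  # a hypothetical 128GB machine is capped at 31744
```

You would then plug the suggested value into the -Xms and -Xmx lines of jvm.options.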

      If your server only has 1GB of RAM, you must edit this setting.

      Open jvm.options:

      • sudo nano /etc/elasticsearch/jvm.options

      Now change the Xms and Xmx values to 512MB:

      /etc/elasticsearch/jvm.options

      ...
      -Xms512m
      -Xmx512m
      ...
      

      Save and exit the file.

      Now start Elasticsearch for the first time:

      • sudo systemctl start elasticsearch.service

      Allow at least 10 seconds for Elasticsearch to start before you attempt to use it. Otherwise, you may get a connection error.
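If you are scripting the startup, a small wait loop is more reliable than a fixed pause. The following helper is our own sketch (not part of Elasticsearch) and uses bash’s /dev/tcp feature to poll a TCP port until it accepts connections or a timeout passes:

```shell
#!/usr/bin/env bash
# wait_for_port HOST PORT TIMEOUT_SECONDS — poll once per second until the
# port accepts TCP connections; returns 0 on success, 1 if the timeout elapses.
wait_for_port() {
  local host=$1 port=$2 timeout=$3
  local waited=0
  # (exec ...) opens the TCP connection in a subshell, so nothing stays open
  until (exec 3<>"/dev/tcp/${host}/${port}") 2>/dev/null; do
    if (( waited >= timeout )); then
      return 1
    fi
    sleep 1
    waited=$(( waited + 1 ))
  done
}

# Example: wait up to 30 seconds for Elasticsearch before querying it
# wait_for_port localhost 9200 30 && curl -X GET 'http://localhost:9200'
```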

      Note: You should know that not all Elasticsearch settings are set and kept in configuration files. Instead, some settings are set via its API, like index.number_of_shards and index.number_of_replicas. The first determines into how many pieces (shards) the index will split. The second defines the number of replicas that will be distributed across the cluster. Having more shards improves the indexing performance, while having more replicas makes searching faster.

Assuming that you are still exploring and testing Elasticsearch on a single node, you can experiment with these settings and alter them by executing the following curl command:

      • curl -XPUT -H 'Content-Type: application/json' 'http://localhost:9200/_all/_settings?preserve_existing=true' -d '{
      • "index.number_of_replicas" : "0",
      • "index.number_of_shards" : "1"
      • }'

      With Elasticsearch installed and configured, you will now secure and test the server.

      Step 4 — (Optional) Securing Elasticsearch on CentOS 8

      Elasticsearch has no built-in security and anyone who can access the HTTP API can control it. This section is not a comprehensive guide to securing Elasticsearch. Take whatever measures are necessary to prevent unauthorized access to it and the server/virtual machine on which it is running.

      By default, Elasticsearch is configured to listen only on the localhost network interface, i.e. remote connections are not possible. You should leave this setting unchanged unless you have taken one or both of the following measures:

• You have limited access to TCP port 9200 to only trusted hosts with iptables.
      • You have created a VPN between your trusted hosts and will expose Elasticsearch on one of the VPN’s virtual interfaces.
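As a sketch of the first measure, the following script only prints the iptables commands so you can review them before running them as root on the server. The trusted IP address is a placeholder of our own:

```shell
#!/usr/bin/env bash
# Print (rather than run) iptables rules that allow TCP port 9200 only from
# a trusted host and drop everything else. Review the output, then run the
# commands as root on the Elasticsearch server.
TRUSTED_IP="203.0.113.10"   # placeholder — replace with your trusted host's IP

print_rules() {
  echo "iptables -A INPUT -p tcp -s ${TRUSTED_IP} --dport 9200 -j ACCEPT"
  echo "iptables -A INPUT -p tcp --dport 9200 -j DROP"
}

print_rules
```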

Only once you have done the above should you consider allowing Elasticsearch to listen on other network interfaces besides localhost. Such a change might be considered when you need to connect to Elasticsearch from another host, for example.

      To change the network exposure, open the file elasticsearch.yml:

      • sudo nano /etc/elasticsearch/elasticsearch.yml

      In this file find the line that contains network.host, uncomment it by removing the # character at the beginning of the line, and then change the value to the IP address of the secured network interface. The line will look something like this:

      /etc/elasticsearch/elasticsearch.yml

      ...
      network.host: 10.0.0.1
      ...
      

      Warning: Because Elasticsearch doesn’t have any built-in security, it is very important that you do not set this to any IP address that is accessible to any servers that you do not control or trust. Do not bind Elasticsearch to a public or shared private network IP address.

      Also, for additional security you can disable scripts that are used to evaluate custom expressions. By crafting a custom malicious expression, an attacker might be able to compromise your environment.

      To disable custom expressions, add the following line at the end of the /etc/elasticsearch/elasticsearch.yml file:

      /etc/elasticsearch/elasticsearch.yml

      ...
      script.allowed_types: none
      ...
      

      For the above changes to take effect, you will have to restart Elasticsearch.

      Restart Elasticsearch now:

• sudo systemctl restart elasticsearch.service

      In this step you took some measures to secure your Elasticsearch server. Now you are ready to test the application.

      Step 5 — Testing Elasticsearch on CentOS 8

      By now, Elasticsearch should be running on port 9200. You can test this using curl, the command-line tool for client-side URL transfers.

      To test the service, make a GET request like this:

      • curl -X GET 'http://localhost:9200'

      You will see the following response:

      Output

{
        "name" : "My First Node",
        "cluster_name" : "mycluster1",
        "cluster_uuid" : "R23U2F87Q_CdkEI2zGhLGw",
        "version" : {
          "number" : "7.9.2",
          "build_flavor" : "default",
          "build_type" : "rpm",
          "build_hash" : "d34da0ea4a966c4e49417f2da2f244e3e97b4e6e",
          "build_date" : "2020-09-23T00:45:33.626720Z",
          "build_snapshot" : false,
          "lucene_version" : "8.6.2",
          "minimum_wire_compatibility_version" : "6.8.0",
          "minimum_index_compatibility_version" : "6.0.0-beta1"
        },
        "tagline" : "You Know, for Search"
      }

      If you see a similar response, Elasticsearch is working properly. If not, recheck the installation instructions and allow some time for Elasticsearch to fully start.

      Your Elasticsearch server is now operational. In the next step you will add and retrieve some data from the application.

      Step 6 — Using Elasticsearch on CentOS 8

      In this step you will add some data to Elasticsearch and then make some manual queries.

      Use curl to add your first entry:

      • curl -H 'Content-Type: application/json' -X POST 'http://localhost:9200/tutorial/helloworld/1' -d '{ "message": "Hello World!" }'

      You will see the following output:

      Output

{"_index":"tutorial","_type":"helloworld","_id":"1","_version":1,"result":"created","_shards":{"total":2,"successful":1,"failed":0},"_seq_no":0,"_primary_term":1}

Using curl, you sent an HTTP POST request to the Elasticsearch server. The URI of the request was /tutorial/helloworld/1. Let’s take a closer look at those parameters:

      • tutorial is the index of the data in Elasticsearch.
      • helloworld is the type.
      • 1 is the id of our entry under the above index and type.

Note that you must also set the content type of all POST requests to JSON with the argument -H 'Content-Type: application/json'; otherwise, Elasticsearch will reject your request.

      Now retrieve your first entry using an HTTP GET request:

      • curl -X GET 'http://localhost:9200/tutorial/helloworld/1'

      The result will look like this:

      Output

{"_index":"tutorial","_type":"helloworld","_id":"1","_version":1,"_seq_no":0,"_primary_term":1,"found":true,"_source":{ "message": "Hello World!" }}

      To modify an existing entry you can use an HTTP PUT request like this:

      • curl -H 'Content-Type: application/json' -X PUT 'localhost:9200/tutorial/helloworld/1?pretty' -d '
      • {
      • "message": "Hello People!"
      • }'

      Elasticsearch will acknowledge successful modification like this:

      Output

{
        "_index" : "tutorial",
        "_type" : "helloworld",
        "_id" : "1",
        "_version" : 2,
        "result" : "updated",
        "_shards" : {
          "total" : 2,
          "successful" : 1,
          "failed" : 0
        },
        "_seq_no" : 1,
        "_primary_term" : 1
      }

      In the above example you modified the message of the first entry to "Hello People!". With that, the version number increased to 2.

      To make the output of your GET operations more human-readable, you can also “prettify” your results by adding the pretty argument:

      • curl -X GET 'http://localhost:9200/tutorial/helloworld/1?pretty'

      Now the response will output in a more readable format:

      Output

{
        "_index" : "tutorial",
        "_id" : "1",
        "_type" : "helloworld",
        "_version" : 2,
        "_seq_no" : 1,
        "_primary_term" : 1,
        "found" : true,
        "_source" : {
          "message" : "Hello People!"
        }
      }

      This is how you can add and query data in Elasticsearch. To learn about the other operations you can check the Elasticsearch API documentation.
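Beyond simple lookups by id, the API also supports full-text search via the _search endpoint. As a sketch (the index and field names follow this tutorial’s example; the query body is our own illustration), the following assembles and prints the curl invocation so you can inspect it, then paste or pipe it to a shell once Elasticsearch is up:

```shell
#!/usr/bin/env bash
# Build a match query against the tutorial index and print the curl command
# that would run it. Pipe the output to `sh` (or paste the printed command)
# to actually query a live Elasticsearch node on localhost:9200.
QUERY='{ "query": { "match": { "message": "Hello" } } }'

build_search() {
  echo "curl -s -H 'Content-Type: application/json'" \
       "-X GET 'http://localhost:9200/tutorial/_search?pretty'" \
       "-d '${QUERY}'"
}

build_search
```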

      Conclusion

      In this tutorial you installed, configured, and began using Elasticsearch on CentOS 8. Once you are comfortable with manual queries, your next task will be to start using the service from your applications.




      How To Configure Remote Access for MongoDB on Ubuntu 18.04


      An earlier version of this tutorial was written by Melissa Anderson.

      Introduction

      MongoDB, also known as Mongo, is an open-source document database used commonly in modern web applications. By default, it only allows connections that originate on the same server where it’s installed. If you want to manage MongoDB remotely or connect it to a separate application server, there are a few changes you’d need to make to the default configuration.

      In this tutorial, you will configure a MongoDB installation to securely allow access from a trusted remote computer. To do this, you’ll update your firewall rules to provide the remote machine access to the port on which MongoDB is listening for connections and then update its configuration file to change its IP binding setting. Then, as a final step, you’ll test that your remote machine is able to make the connection to your database successfully.

      Prerequisites

      To complete this tutorial, you’ll need:

      • A server running Ubuntu 18.04. This server should have a non-root administrative user and a firewall configured with UFW. Set this up by following our initial server setup guide for Ubuntu 18.04.
      • MongoDB installed on your server. This tutorial assumes that you have MongoDB 4.4 or newer installed. You can install this version by following our tutorial on How To Install MongoDB on Ubuntu 18.04.
      • A second computer from which you’ll access your MongoDB instance. For simplicity, this tutorial assumes that this machine is another Ubuntu 18.04 server, with a non-root administrative user and a UFW firewall configured following our initial server setup guide for Ubuntu 18.04. However, Steps 1 and 2, which describe the actual procedure for enabling remote connectivity on the database server, will work regardless of what operating system the remote machine is running.

      Lastly, while it isn’t required to complete this tutorial, we strongly recommend that you secure your MongoDB installation by creating an administrative user account for the database and enabling authentication. To do this, follow our tutorial on How To Secure MongoDB on Ubuntu 18.04.

      Step 1 — Adjusting the Firewall

      Assuming you followed the prerequisite initial server setup tutorial and enabled a UFW firewall on your server, your MongoDB installation will be inaccessible from the internet. If you intend to use MongoDB only locally with applications running on the same server, this is the recommended and secure setting. However, if you would like to be able to connect to your MongoDB server from a remote location, you have to allow incoming connections to the port where the database is listening by adding a new UFW rule.

      Start by checking which port your MongoDB installation is listening on with the lsof command. This command typically returns a list with every open file in a system, but when combined with the -i option, it lists only network-related files or data streams.

The following command will pipe the output produced by lsof -i to a grep command that searches for the string mongo:

      • sudo lsof -i | grep mongo

      This example output shows that the mongod process is listening for connections on its default port, 27017:

      Output

. . .
      mongod    82221        mongodb   11u  IPv4 913411      0t0  TCP localhost:27017 (LISTEN)
      . . .

      In most cases, MongoDB should only be accessed from certain trusted locations, such as another server hosting an application. One way to configure this is to run the following command on your MongoDB server, which opens up access on MongoDB’s default port while explicitly only allowing the IP address of the other trusted server.

      Run the following command, making sure to change trusted_server_ip to the IP address of the trusted remote machine you’ll use to access your MongoDB instance:

Note: If the previous command’s output showed your installation of MongoDB is listening on a non-default port, use that port number in place of 27017 in this command.

      • sudo ufw allow from trusted_server_ip to any port 27017

      In the future, if you ever want to access MongoDB from another machine, run this command again with the new machine’s IP address in place of trusted_server_ip.

You can verify the change in firewall settings with ufw:

      • sudo ufw status

      The output will show that traffic to port 27017 from the remote server is now allowed:

      Output

Status: active

      To                         Action      From
      --                         ------      ----
      OpenSSH                    ALLOW       Anywhere
      27017                      ALLOW       trusted_server_ip
      OpenSSH (v6)               ALLOW       Anywhere (v6)

      You can find more advanced firewall settings for restricting access to services in UFW Essentials: Common Firewall Rules and Commands.

      Next, you’ll bind MongoDB to the server’s public IP address so you can access it from your remote machine.

      Step 2 — Configuring a Public bindIP

      At this point, even though the port is open, MongoDB is currently bound to 127.0.0.1, the local loopback network interface. This means that MongoDB is only able to accept connections that originate on the server where it’s installed.

      To allow remote connections, you must edit the MongoDB configuration file — /etc/mongod.conf — to additionally bind MongoDB to your server’s publicly-routable IP address. This way, your MongoDB installation will be able to listen to connections made to your MongoDB server from remote machines.

      Open the MongoDB configuration file in your preferred text editor. The following example uses nano:

      • sudo nano /etc/mongod.conf

      Find the network interfaces section, then the bindIp value:

      /etc/mongod.conf

      . . .
      # network interfaces
      net:
        port: 27017
        bindIp: 127.0.0.1
      
      . . .
      

      Append a comma to this line followed by your MongoDB server’s public IP address:

      /etc/mongod.conf

      . . .
      # network interfaces
      net:
        port: 27017
        bindIp: 127.0.0.1,mongodb_server_ip
      
      . . .
      

      Save and close the file. If you used nano, do so by pressing CTRL + X, Y, then ENTER.
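If you manage several servers, you may prefer to make the same edit non-interactively. The following sketch uses GNU sed; the helper name and the IP 203.0.113.5 are our own placeholders, and you should back up mongod.conf first, since the exact spacing of your file may differ:

```shell
#!/usr/bin/env bash
# add_bind_ip CONF_FILE IP — append IP to the bindIp line of a mongod.conf-style
# file, skipping files where the address is already present. Requires GNU sed.
add_bind_ip() {
  local conf=$1 ip=$2
  sed -i "/bindIp:.*${ip}/!s/^\(\s*bindIp:\s*[^ ]*\)\$/\1,${ip}/" "$conf"
}

# Example against a throwaway copy (203.0.113.5 is a placeholder address):
printf 'net:\n  port: 27017\n  bindIp: 127.0.0.1\n' > /tmp/mongod-example.conf
add_bind_ip /tmp/mongod-example.conf 203.0.113.5
grep bindIp /tmp/mongod-example.conf   # bindIp: 127.0.0.1,203.0.113.5
```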

      Then, restart MongoDB to put this change into effect:

      • sudo systemctl restart mongod

      Following that, your MongoDB installation will be able to accept remote connections from whatever machines you’ve allowed to access port 27017. As a final step, you can test whether the trusted remote server you allowed through the firewall in Step 1 can reach the MongoDB instance running on your server.

      Step 3 — Testing Remote Connectivity

      Now that you configured your MongoDB installation to listen for connections that originate on its publicly-routable IP address and granted your remote machine access through your server’s firewall to Mongo’s default port, you can test that the remote machine is able to connect.

      Note: As mentioned in the Prerequisites section, this tutorial assumes that your remote machine is another server running Ubuntu 18.04. The procedure for enabling remote connections outlined in Steps 1 and 2 should work regardless of what operating system your remote machine runs, but the testing methods described in this Step do not work universally across operating systems.

      One way to test that your trusted remote server is able to connect to the MongoDB instance is to use the nc command. nc, short for netcat, is a utility used to establish network connections with TCP or UDP. It’s useful for testing in cases like this because it allows you to specify both an IP address and a port number.

      First, log into your trusted server using SSH:

      • ssh sammy@trusted_server_ip

Then run the following nc command, which includes the -z option. This limits nc to only scanning for a listening daemon on the target server, without sending it any data. Recall from the prerequisite installation tutorial that MongoDB runs as a service daemon, which makes this option useful for testing connectivity. The command also includes the -v option, which increases verbosity, causing netcat to return output it otherwise wouldn’t.

      Run the following nc command from your trusted remote server, making sure to replace mongodb_server_ip with the IP address of the server on which you installed MongoDB:

      • nc -zv mongodb_server_ip 27017

      If the trusted server can access the MongoDB daemon, its output will indicate that the connection was successful:

      Output

      Connection to mongodb_server_ip 27017 port [tcp/*] succeeded!

      Assuming you have a compatible version of the mongo shell installed on your remote server, you can at this point connect directly to the MongoDB instance installed on the host server.

      One way to connect is with a connection string URI, like this:

• mongo "mongodb://mongodb_server_ip:27017"

      Note: If you followed the recommended How To Secure MongoDB on Ubuntu 18.04 tutorial, you will have closed off access to your database to unauthenticated users. In this case, you’d need to use a URI that specifies a valid username, like this:

• mongo "mongodb://username@mongodb_server_ip:27017"

      The shell will automatically prompt you to enter the user’s password.

      With that, you’ve confirmed that your MongoDB server can accept connections from the trusted server.

      Conclusion

      You can now access your MongoDB installation from a remote server. At this point, you can manage your Mongo database remotely from the trusted server. Alternatively, you could configure an application to run on the trusted server and use the database remotely.

      If you haven’t configured an administrative user and enabled authentication, anyone who has access to your remote server can also access your MongoDB installation. If you haven’t already done so, we strongly recommend that you follow our guide on How To Secure MongoDB on Ubuntu 18.04 to add an administrative user and lock things down further.




      How To Configure Remote Access for MongoDB on CentOS 8


      An earlier version of this tutorial was written by Melissa Anderson.

      Introduction

      MongoDB, also known as Mongo, is an open-source document database used in many modern web applications. By default, it only allows connections that originate on the same server where it’s installed. If you want to manage MongoDB remotely or connect it to a separate application server, there are a few changes you’d need to make to the default configuration.

      In this tutorial, you will configure a MongoDB installation to securely allow access from a trusted remote computer. To do this, you’ll update your firewall rules to provide the remote machine access to the port on which MongoDB is listening for connections and then update Mongo’s configuration file to change its IP binding setting. Then, as a final step, you’ll test that your remote machine is able to make the connection to your database successfully.

      Prerequisites

      To complete this tutorial, you’ll need:

      • A server running CentOS 8. This server should have a non-root administrative user and a firewall configured with firewalld. Set this up by following our initial server setup guide for CentOS 8.
      • MongoDB installed on your server. This tutorial assumes that you have MongoDB 4.4 or newer installed. You can install this version by following our tutorial on How To Install MongoDB on CentOS 8.
      • A second computer from which you’ll access your MongoDB instance. For simplicity, this tutorial assumes that this machine is another CentOS 8 server. Like your MongoDB server, this machine should have a non-root administrative user and a firewall configured with firewalld as described in our initial server setup guide for CentOS 8. However, Steps 1 and 2, which describe the actual procedure for enabling remote connectivity on the database server, will work regardless of what operating system the remote machine is running.

      Lastly, while it isn’t required to complete this tutorial, we strongly recommend that you secure your MongoDB installation by creating an administrative user account for the database and enabling authentication. To do this, follow our tutorial on How To Secure MongoDB on CentOS 8.

      Step 1 — Adjusting the Firewall

      Assuming you followed the prerequisite initial server setup tutorial and set up firewalld on your server, your MongoDB installation will be inaccessible from the internet. If you intend to use MongoDB only locally with applications running on the same server, this is the recommended and secure setting. However, if you would like to be able to connect to your MongoDB server from a remote location, you have to allow incoming connections to the port where the database is listening by adding a new firewall rule.

      Start by checking which port your MongoDB installation is listening on with the netstat command. netstat is a command line utility that displays information about active TCP network connections.

The following command will pipe the output produced by sudo netstat -plunt to a grep command that searches for any lines containing the string mongo:

      • sudo netstat -plunt | grep mongo

      This example output indicates that MongoDB is listening for connections to 127.0.0.1, a special loopback address that represents localhost, on its default port, 27017:

      Output

      tcp 0 0 127.0.0.1:27017 0.0.0.0:* LISTEN 15918/mongod

      In most cases, MongoDB should only be accessed from certain trusted locations, such as another server hosting an application. One way to configure this with firewalld is to run the following firewall-cmd command on your MongoDB server, which opens up access on MongoDB’s default port while explicitly only allowing the IP address of another trusted server.

      Run the following command, making sure to change trusted_server_ip to the IP address of the trusted remote machine you’ll use to access your MongoDB instance:

Note: If the previous command’s output indicated that your installation of MongoDB is listening on a non-default port, use that port number in place of 27017 in this command.

      • sudo firewall-cmd --permanent --zone=public --add-rich-rule="rule family="ipv4" source address="trusted_server_ip" port protocol="tcp" port="27017" accept"

This command permanently adds a rich rule to the firewall’s public zone. Rich rules are a firewalld feature that gives you more granular control over who has access to your server through the use of a number of options. The rule provided in this command specifies that only the trusted_server_ip address should be allowed to make connections through the firewall, and that it may only do so using the TCP protocol to connect to port 27017.

      If the rule was added successfully, the command will return success in the output:

      Output

      success

      Reload the firewall to put the new rule into effect:

      • sudo firewall-cmd --reload

      In the future, if you ever want to access MongoDB from another machine, run this command again with the new machine’s IP address in place of trusted_server_ip.
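      If you expect to allow several trusted machines over time, a small helper can generate the rich-rule string consistently so you don’t mistype the quoting. This is an illustrative Python sketch (the function name is hypothetical); you would still pass the resulting string to firewall-cmd yourself:

```python
def rich_rule(trusted_ip, port=27017):
    """Build the firewalld rich rule string used above for a given trusted IP."""
    return (
        'rule family="ipv4" '
        f'source address="{trusted_ip}" '
        f'port protocol="tcp" port="{port}" accept'
    )

# 203.0.113.5 is a documentation-only example address
print(rich_rule("203.0.113.5"))
```

      Quote the generated string with single quotes on the shell command line, as shown earlier, so the embedded double quotes survive.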

      You can verify the change in firewall settings by running firewall-cmd with the --list-all option:

      • sudo firewall-cmd --list-all

      The output will include the new rich rule allowing traffic to port 27017 from the remote server:

      Output

      public (active)
      . . .
      rich rules:
      	rule family="ipv4" source address="157.230.58.94" port port="27017" protocol="tcp" accept

      You can learn more about firewalld in How To Set Up a Firewall Using firewalld on CentOS 8.

      Next, you’ll bind MongoDB to the server’s public IP address so you can access it from your remote machine.

      Step 2 — Configuring a Public bindIP

      At this point, even though the port is open, MongoDB is currently bound to 127.0.0.1, the local loopback network interface. This means that MongoDB is only able to accept connections that originate on the server where it’s installed.

      To allow remote connections, you must edit the MongoDB configuration file — /etc/mongod.conf — to additionally bind MongoDB to your server’s publicly-routable IP address. This way, your MongoDB installation will be able to listen to connections made to your MongoDB server from remote machines.

      Open the MongoDB configuration file in your preferred text editor. The following example uses nano:

      • sudo nano /etc/mongod.conf

      Find the network interfaces section, then the bindIp value:

      /etc/mongod.conf

      . . .
      # network interfaces
      net:
        port: 27017
        bindIp: 127.0.0.1
      
      . . .
      

      Append a comma to this line followed by your MongoDB server’s public IP address:

      /etc/mongod.conf

      . . .
      # network interfaces
      net:
        port: 27017
        bindIp: 127.0.0.1,mongodb_server_ip
      
      . . .
      

      Save and close the file. If you used nano, do so by pressing CTRL + X, Y, then ENTER.
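      If you manage many servers, you may prefer to script this edit rather than make it by hand. The following Python sketch shows the idea on the configuration file’s text; the function name is hypothetical, and it assumes the `bindIp:` line format shown above:

```python
def add_bind_ip(config_text, new_ip):
    """Append new_ip to the bindIp line of mongod.conf-style text.

    Idempotent: if new_ip is already listed, the text is returned unchanged.
    """
    lines = []
    for line in config_text.splitlines():
        stripped = line.strip()
        if stripped.startswith("bindIp:"):
            current = stripped.split(":", 1)[1].strip()
            ips = [ip.strip() for ip in current.split(",")]
            if new_ip not in ips:
                ips.append(new_ip)
            # Preserve the original indentation of the bindIp line
            indent = line[: len(line) - len(line.lstrip())]
            line = f"{indent}bindIp: {','.join(ips)}"
        lines.append(line)
    return "\n".join(lines)

conf = "net:\n  port: 27017\n  bindIp: 127.0.0.1"
print(add_bind_ip(conf, "203.0.113.10"))
```

      Whether you edit by hand or by script, the result is the same comma-separated bindIp list shown above.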

      Then, restart MongoDB to put this change into effect:

      • sudo systemctl restart mongod

      Following that, your MongoDB installation will be able to accept remote connections from whatever machines you’ve allowed to access port 27017. As a final step, you can test whether the trusted remote server you allowed through the firewall in Step 1 can reach the MongoDB instance running on your server.

      Step 3 — Testing Remote Connectivity

      Now that you’ve configured your MongoDB installation to listen for connections on its publicly-routable IP address and granted your remote machine access to Mongo’s default port through your server’s firewall, you can test whether the remote machine is able to connect.

      Note: As mentioned in the Prerequisites section, this tutorial assumes that your remote machine is another server running CentOS 8. The procedure for enabling remote connections outlined in Steps 1 and 2 should work regardless of what operating system your remote machine runs, but the testing methods described in this step do not work universally across operating systems.

      First, log into your trusted server using SSH:

      • ssh sammy@trusted_server_ip

      One way to test that your trusted remote server is able to connect to the MongoDB instance is to use the nc command. nc, short for netcat, is a utility used to establish network connections with TCP or UDP. It’s useful for testing in cases like this because it allows you to specify both an IP address and a port number.

      If you haven’t already, you may need to install nc. The version from the official CentOS repositories is actually an implementation called ncat, which was written by the Nmap Project as an update for netcat.

      Install ncat by typing:

      • sudo dnf install nmap-ncat

      Press y and then ENTER when prompted to confirm that you want to install the package.

      Then run the following nc command, which includes the -z option. This limits nc to only scanning for a listening daemon on the target server without sending it any data. Recall from the prerequisite installation tutorial that MongoDB is running as a service daemon, making this option useful for testing connectivity. The command also includes the -v option, which increases verbosity and causes ncat to return output it otherwise wouldn’t.

      Run the following nc command from your trusted remote server, making sure to replace mongodb_server_ip with the IP address of the server on which you installed MongoDB:

      • nc -zv mongodb_server_ip 27017

      If the trusted server can access the MongoDB daemon, its output will indicate that it made a connection:

      Output

      Ncat: Version 7.70 ( https://nmap.org/ncat )
      Ncat: Connected to mongodb_server_ip:27017.
      Ncat: 0 bytes sent, 0 bytes received in 0.02 seconds.
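      The same reachability check can be done without nc from any machine with Python installed. This sketch (with a hypothetical function name) attempts a plain TCP connection, just as `nc -z` does, and reports success or failure:

```python
import socket

def port_open(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds, like `nc -z`."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Replace 127.0.0.1 with your MongoDB server's IP address
print(port_open("127.0.0.1", 27017))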

      Assuming you have a compatible version of the mongo shell installed on your remote server, you can at this point connect directly to the MongoDB instance installed on the host server.

      One way to connect is with a connection string URI, like this:

      • mongo "mongodb://mongodb_server_ip:27017"

      Note: If you followed the recommended How To Secure MongoDB on CentOS 8 tutorial, you will have closed off access to your database to unauthenticated users. In this case, you’d need to use a URI that specifies a valid username, like this:

      • mongo "mongodb://username@mongodb_server_ip:27017"

      The shell will automatically prompt you to enter the user’s password.
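      One detail worth knowing when building connection string URIs: if a username (or password) contains reserved characters such as @ or :, MongoDB requires them to be percent-encoded. The following Python sketch (hypothetical helper name) builds a URI and handles that encoding:

```python
from urllib.parse import quote_plus

def mongo_uri(host, port=27017, username=None):
    """Build a MongoDB connection string, percent-encoding the username if given."""
    if username is None:
        return f"mongodb://{host}:{port}"
    # quote_plus escapes reserved characters like @ and : in the username
    return f"mongodb://{quote_plus(username)}@{host}:{port}"

# 203.0.113.10 is a documentation-only example address
print(mongo_uri("203.0.113.10"))
print(mongo_uri("203.0.113.10", username="AdminSammy"))
```

      Pass the resulting string to the mongo shell exactly as in the commands above.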

      With that, you’ve confirmed that your MongoDB server can accept connections from the trusted server.

      Conclusion

      You can now access your MongoDB installation from a remote server. At this point, you can manage your Mongo database remotely from the trusted server. Alternatively, you could configure an application to run on the trusted server and use the database remotely.

      If you haven’t configured an administrative user and enabled authentication, anyone who has access to your remote server can also access your MongoDB installation. If you haven’t already done so, we strongly recommend that you follow our guide on How To Secure MongoDB on CentOS 8 to add an administrative user and lock things down further.


