      How to Create a MERN Stack on Linux


Of all the possible technical foundations for a modern web site, MERN is among the most popular. This introduction familiarizes you with the essential tools that make up the stack and that power a large share of the web sites in production today.

      Before You Begin

      1. If you have not already done so, create a Linode account and Compute Instance. See our
        Getting Started with Linode and
        Creating a Compute Instance guides.

      2. Follow our
        Setting Up and Securing a Compute Instance guide to update your system. You may also wish to set the timezone, configure your hostname, create a limited user account, and harden SSH access.

      Note

      The steps in this guide require root privileges. Be sure to run the steps below as root or with the sudo prefix. For more information on privileges, see our
      Users and Groups guide.

      What is the MERN stack?

      MERN refers to MongoDB, Express.js, ReactJS, and Node.js, four software tools which cooperate to power millions of web sites worldwide. In broad terms:

      • MongoDB manages data, such as customer information, technical measurements, and event records.
      • Express.js is a web application framework for the “behaviors” of particular applications. For example, how data flows from catalog to shopping cart.
      • ReactJS is a library of user-interface components for managing the visual “state” of a web application.
      • Node.js is a back-end runtime environment for the server side of a web application.

      Linode has
      many articles on each of these topics, and supports thousands of
      Linode customers who have created successful applications based on these tools.

One of MERN’s important distinctions is that the JavaScript programming language is used throughout the entire stack. Certain competing stacks use PHP or Python on the back end, JavaScript on the front end, and perhaps SQL for data storage. MERN developers focus on just a single programming language, JavaScript, with all the economies that implies for training and tooling.

      Install the MERN stack

You can install a basic MERN stack on a 64-bit x86_64 Linode Ubuntu 20.04 host in under half an hour. As of this writing, parts of MERN for Ubuntu 22.04 remain experimental. While thousands of variations are possible, this section walks through one correct “on-boarding” sequence. The emphasis here is on “correct”, as many already-published tutorials embed enough subtle errors to block their use by readers starting from scratch.

      Install MongoDB

      1. Update the repository cache:

        apt update -y
        
2. Install the networking dependencies Mongo requires:

        apt install ca-certificates curl gnupg2 wget -y
        
      3. Configure access to the official MongoDB Community Edition repository with the MongoDB public GPG key:

        wget -qO - https://www.mongodb.org/static/pgp/server-5.0.asc | apt-key add -
        
      4. Create a MongoDB list file:

        echo "deb [ arch=amd64,arm64 ] https://repo.mongodb.org/apt/ubuntu focal/mongodb-org/5.0 multiverse" | tee /etc/apt/sources.list.d/mongodb-org-5.0.list
        
      5. Update the repository cache again:

        apt update -y
        
      6. Install MongoDB itself:

        apt install mongodb-org -y
        
7. Enable the MongoDB service:

        systemctl enable mongod
        
      8. Launch the MongoDB service:

        systemctl start mongod
        
      9. Verify the MongoDB service:

        systemctl status mongod
        

        You should see diagnostic information that concludes:

        … Started MongoDB Database Server.
        
10. For a stronger confirmation that the Mongo server is ready for use, connect to it directly by launching the mongo shell:

        mongo
        
      11. Now issue this command:

        db.runCommand({ connectionStatus: 1 })
        

        You should see, along with many other details, this summary of the connectionStatus:

        … MongoDB server … "ok" : 1 …
        
      12. Exit Mongo:

         exit
        

      Install Node.js

While the acronym is MERN, the true order of its dependencies is better written as “MNRE”. ReactJS and Express.js conventionally require Node.js, so the next installation steps focus on Node.js. As with MongoDB, the recommended Node.js packages come from a dedicated repository rather than the main Ubuntu repository.

1. Run this command to add the NodeSource repository:

        curl -sL https://deb.nodesource.com/setup_16.x | bash -
        
      2. Install Node.js itself:

        apt-get install nodejs -y
        
      3. Verify the installation:

        node -v
        

        You should see v16.15.1 or perhaps later.

      Install React.js

      1. Next, install React.js:

        mkdir demonstration; cd demonstration
        npx --yes create-react-app frontend
        cd frontend
        npm run build
        

      Templates for all the HTML, CSS, and JS for your model application are now present in the demonstration/frontend directory.

      Install Express.js

      1. Express.js is the final component of the basic MERN stack.

        cd ..; mkdir server; cd server
        npm init -y
        cd ..
        npm install body-parser cors express mongodb mongoose nodemon
        

      Use the MERN stack to create an example application

      The essence of a web application is to respond to a request from a web browser with an appropriate result, backed by a datastore that “remembers” crucial information from one session to the next. Any realistic full-scale application involves account management, database backup, context dependence, and other refinements. Rather than risk the distraction and loss of focus these details introduce, this section illustrates the simplest possible use of MERN to implement a
      three-tier operation typical of real-world applications.

      “Three-tier” in this context refers to the teamwork web applications embody between:

      • The presentation in the web browser of the state of an application
      • The “back end” of the application which realizes that state
      • The datastore, which persists the application’s information beyond a single front-end session and even across restarts of the back end.

      You can create a tiny application which receives a request from a web browser, creates a database record based on that request, and responds to the request. The record is visible within the Mongo datastore.

      Initial configuration of the MERN application

      1. Create demonstration/server/index.js with this content:

        const express = require('express');
        const bodyParser = require('body-parser');
        const mongoose = require('mongoose');
        const routes = require('../routes/api');
        const app = express();
        const port = 4200;
        
        // Connect to the database
        mongoose
          .connect('mongodb://127.0.0.1:27017/', { useNewUrlParser: true })
          .then(() => console.log(`Database connected successfully`))
          .catch((err) => console.log(err));
        
        // Override mongoose's deprecated Promise with Node's Promise.
        mongoose.Promise = global.Promise;
        app.use((req, res, next) => {
          res.header('Access-Control-Allow-Origin', '*');
          res.header('Access-Control-Allow-Headers', 'Origin, X-Requested-With, Content-Type, Accept');
          next();
        });
        app.use(bodyParser.json());
        app.use('/api', routes);
        app.use((err, req, res, next) => {
          console.log(err);
          next();
        });

        app.listen(port, () => {
          console.log(`Server runs on port ${port}.`);
        });
        
      2. Create demonstration/routes/api.js with this content:

        const express = require('express');
        const router = express.Router();
        
        var MongoClient = require('mongodb').MongoClient;
        var url="mongodb://127.0.0.1:27017/";
        const mongoose = require('mongoose');
        var db = mongoose.connection;
        
        router.get('/record', (req, res, next) => {
          const item = req.query.item;
          MongoClient.connect(url, function(err, db) {
            if (err) throw err;
            var dbo = db.db("mydb");
            var myobj = { name: item };
            dbo.collection("demonstration").insertOne(myobj, function(err, result) {
              if (err) throw err;
              console.log(`One item (${item}) inserted.`);
              // Send a small acknowledgement so the browser request completes.
              res.json({ inserted: item });
              db.close();
            })
          });
        })
        module.exports = router;
        
3. Create demonstration/server/server.js with this content. Note that this file references a ./routes/record router and a ./db/conn helper that are not created in this guide, and the verification steps below use server/index.js instead; treat server.js as a template for a fuller application:

        const express = require("express");
        const app = express();
        const cors = require("cors");
        require("dotenv").config({ path: "./config.env" });
        const port = process.env.PORT || 4200;
        app.use(cors());
        app.use(express.json());
        app.use(require("./routes/record"));
        const dbo = require("./db/conn");
        
        app.listen(port, () => {
          // Connect on start.
          dbo.connectToServer(function (err) {
            if (err) console.error(err);
          });
          console.log(`Server is running on port: ${port}`);
        });
        

      Verify your application

1. From the demonstration directory, launch the application server:

        node server/index.js
        
      2. In a convenient Web browser, request:

        localhost:4200/api/record?item=this-new-item
        

        At this point, your terminal should display:

        One item (this-new-item) inserted.
        
      3. Now launch an interactive shell to connect to the MongoDB datastore:

        mongo
        
      4. Within the MongoDB shell, request:

        use mydb
        db.demonstration.find({})
        

        Mongo should report that it finds a record:

        { "_id" : ObjectId("62c84fe504d6ca2aa325c36b"), "name" : "this-new-item" }
        

This demonstrates a minimal MERN action:

      • The web browser issues a request with particular data.
      • The Express application server, running on Node.js, routes that request to the matching /api handler.
      • The handler acts on the MongoDB datastore, inserting a new record. (The React front end built earlier is not involved in this bare-bones request; it comes into play once you add a browser interface on top of the same API.)

      Conclusion

      You now know how to install each of the basic components of the MERN stack on a standard Ubuntu 20.04 server, and team them together to demonstrate a possible MERN action: creation of one database record based on a browser request.

      Any real-world application involves considerably more configuration and source files. MERN enjoys abundant tooling to make the database and web connections more secure, to validate data systematically, to structure a
      complete Application Programming Interface (API), and to simplify debugging. Nearly all practical applications need to create records, update, delete, and list them. All these other refinements and extensions use the elements already present in the workflow above. You can build everything your full application needs from this starting point.
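As a small illustration of how those other operations look at the datastore level, the following sketch reuses the mongo shell from earlier in this guide, run non-interactively with --eval, to list, update, and then delete records in the demonstration collection (the item names are only examples):

  # List every record the demonstration route has created so far.
  mongo mydb --quiet --eval 'printjson(db.demonstration.find().toArray())'

  # Rename one record, then remove it.
  mongo mydb --quiet --eval 'printjson(db.demonstration.updateOne({ name: "this-new-item" }, { $set: { name: "renamed-item" } }))'
  mongo mydb --quiet --eval 'printjson(db.demonstration.deleteOne({ name: "renamed-item" }))'

A full application would normally expose these operations as additional Express routes, in the same way the /record route wraps insertOne.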


      How to Set Up TOBS, The Observability Stack, for Kubernetes Monitoring


      Introduction

      TOBS, short for The Observability Stack, is a pre-packaged distribution of monitoring tools and dashboard interfaces which can be installed into any existing Kubernetes cluster. It includes many of the most popular open-source observability tools with Prometheus and Grafana as a baseline, including Promlens, TimescaleDB, Alertmanager, and others. Together, these provide a straightforward, maintainable solution for analyzing server traffic and identifying any potential problems with a deployment up to a very large scale.

      TOBS makes use of standard Kubernetes Helm charts in order to configure and update deployments. It can be installed into any Kubernetes cluster, but it can be demonstrated more effectively if you’re running kubectl to manage your cluster from a local machine rather than a remote node. DigitalOcean’s Managed Kubernetes will provide you with a configuration like this by default.

      In this tutorial, you will install TOBS into an existing Kubernetes cluster, and learn how to update, configure, and browse its component dashboards.

      Prerequisites

To follow this tutorial, you will need an existing Kubernetes cluster, along with the kubectl command-line tool installed and configured to connect to it.

      Step 1 — Verifying your Kubernetes Configuration

In order to install TOBS, you should first have a valid Kubernetes configuration set up with kubectl from which you can reach your worker nodes. You can test this by running kubectl get nodes:

      • kubectl get nodes

      If kubectl is able to connect to your Kubernetes cluster and it’s up and running as expected, this command will return a list of nodes with the Ready status:

      Output

      NAME                   STATUS   ROLES    AGE   VERSION
      pool-uqv8a47h0-ul5a7   Ready    <none>   22m   v1.21.5
      pool-uqv8a47h0-ul5am   Ready    <none>   21m   v1.21.5
      pool-uqv8a47h0-ul5aq   Ready    <none>   21m   v1.21.5

      If this is successful, you can move on to Step 2. If not, you should review your configuration details for any issues.

By default, kubectl will look for a file at ~/.kube/config in order to understand your environment. In order to verify that this file exists and contains valid YAML syntax, you can run head on it to view its first several lines:

      • head ~/.kube/config

      Output

      apiVersion: v1
      clusters:
      - cluster:
          certificate-authority-data: …

      If the file does not exist, ensure that you are logged in as the same user that you configured Kubernetes with. ~/ paths reflect individual users’ home directories, and Kubernetes configurations are saved per-user by default.

      If you are using DigitalOcean’s Managed Kubernetes, ensure that you have run the doctl kubernetes cluster kubeconfig save command after setting up a cluster so that your local machine can authenticate to it. This will create a ~/.kube/config file:

      • doctl kubernetes cluster kubeconfig save your-cluster-name

If you are using this machine to access multiple clusters, you should review the Kubernetes documentation on using environment variables and multiple configuration files in order to avoid conflicts; a brief sketch of that approach follows below. After configuring your kubectl environment, you can move on to installing TOBS in the next step.
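As a minimal sketch of that approach (the second file name here is hypothetical and only illustrates the pattern), you can point the KUBECONFIG environment variable at several files and switch between the contexts they define:

  # Merge contexts from the default config and a second, hypothetical cluster file.
  export KUBECONFIG=~/.kube/config:~/.kube/other-cluster.yaml

  # List every context kubectl now knows about, then switch to the one you want.
  kubectl config get-contexts
  kubectl config use-context your-context-name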

      Step 2 — Installing TOBS and Testing Your Endpoints

      TOBS includes the following components:

      • Prometheus is a time series database and monitoring tool that works by polling metrics endpoints and scraping and processing the data exposed by these endpoints. It allows you to query this data using PromQL, a time series data query language.
      • Alertmanager, usually deployed alongside Prometheus, forms the alerting layer of the stack, handling alerts generated by Prometheus and deduplicating, grouping, and routing them to integrations like email or PagerDuty. To learn more about Alertmanager, consult the Prometheus documentation on alerting.
      • Grafana is a data visualization and analytics tool that allows you to build dashboards and graphs for your metrics data.
      • kube-state-metrics is an add-on agent that listens to the Kubernetes API server and generates metrics about the state of Kubernetes objects like Deployments and Pods. These metrics are served as plaintext on HTTP endpoints and consumed by Prometheus.
      • Lastly, node-exporter is a Prometheus exporter that runs on cluster nodes and provides OS and hardware metrics like CPU and memory usage to Prometheus. These metrics are also served as plaintext on HTTP endpoints and consumed by Prometheus.

      In order to install TOBS, you first need to run the TOBS installer on your control-plane. This will set up the tobs command and configuration directories. As mentioned in the prerequisites, the tobs command is only designed to work on Linux/macOS/BSD systems (like the official Kubernetes binaries), so if you have been using Windows up to now, you should be working in the Windows Subsystem for Linux environment.

      Retrieve and run the TOBS installer:

      • curl --proto '=https' --tlsv1.2 -sSLf https://tsdb.co/install-tobs-sh | sh

      Output

      tobs 0.7.0 was successfully installed 🎉
      Binary is available at /root/.local/bin/tobs.

You can now push TOBS to your Kubernetes cluster. This is done with a one-liner using your newly-installed tobs command:

      • tobs install

      This will generate several lines of output and may take a few moments. Depending on your exact version of Kubernetes, there may be several warnings in the output, but you can ignore these as long as you eventually receive the Welcome to tobs message:

      Output

      WARNING: Using a generated self-signed certificate for TLS access to TimescaleDB.
      This should only be used for development and demonstration purposes.
      To use a signed certificate, use the "--tls-timescaledb-cert" and "--tls-timescaledb-key" flags when issuing the tobs install command.
      Creating TimescaleDB tobs-certificate secret
      Creating TimescaleDB tobs-credentials secret
      skipping to create TimescaleDB s3 backup secret as backup option is disabled.
      2022/01/10 11:25:34 Transport: unhandled response frame type *http.http2UnknownFrame
      2022/01/10 11:25:35 Transport: unhandled response frame type *http.http2UnknownFrame
      2022/01/10 11:25:35 Transport: unhandled response frame type *http.http2UnknownFrame
      Installing The Observability Stack
      2022/01/10 11:25:37 Transport: unhandled response frame type *http.http2UnknownFrame
      W0110 11:25:55.438728   75479 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
      W0110 11:25:55.646392   75479 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
      …
      👋🏽 Welcome to tobs, The Observability Stack for Kubernetes
      …

      The output from this point onward will contain instructions for connecting to each of Prometheus, TimescaleDB, PromLens, and Grafana’s web endpoints in your browser. It is reproduced in full below for reference:

      Output

      ###############################################################################
      🔥 PROMETHEUS NOTES:
      ###############################################################################
      Prometheus can be accessed via port 9090 on the following DNS name from within your cluster:
      tobs-kube-prometheus-prometheus.default.svc.cluster.local

      Get the Prometheus server URL by running these commands in the same shell:
        tobs prometheus port-forward

      The Prometheus alertmanager can be accessed via port 9093 on the following DNS name from within your cluster:
      tobs-kube-prometheus-alertmanager.default.svc.cluster.local

      Get the Alertmanager URL by running these commands in the same shell:
        export POD_NAME=$(kubectl get pods --namespace default -l "app=alertmanager,alertmanager=tobs-kube-prometheus-alertmanager" -o jsonpath="{.items[0].metadata.name}")
        kubectl --namespace default port-forward $POD_NAME 9093

      WARNING! Persistence is disabled on AlertManager. You will lose your data when the AlertManager pod is terminated.

      ###############################################################################
      🐯 TIMESCALEDB NOTES:
      ###############################################################################
      TimescaleDB can be accessed via port 5432 on the following DNS name from within your cluster:
      tobs.default.svc.cluster.local

      To get your password for superuser run:
        tobs timescaledb get-password -U <user>

      To connect to your database, chose one of these options:
      1. Run a postgres pod and connect using the psql cli:
        tobs timescaledb connect -U <user>
      2. Directly execute a psql session on the master node
        tobs timescaledb connect -m

      ###############################################################################
      🧐 PROMLENS NOTES:
      ###############################################################################
      PromLens is a PromQL query builder, analyzer, and visualizer.

      You can access PromLens via a local browser by executing:
        tobs promlens port-forward

      Then you can point your browser to http://127.0.0.1:8081/.

      ###############################################################################
      📈 GRAFANA NOTES:
      ###############################################################################
      1. The Grafana server can be accessed via port 80 on the following DNS name from within your cluster:
      tobs-grafana.default.svc.cluster.local

      You can access grafana locally by executing:
        tobs grafana port-forward

      Then you can point your browser to http://127.0.0.1:8080/.

      2. The 'admin' user password can be retrieved by:
        tobs grafana get-password

      3. You can reset the admin user password with grafana-cli from inside the pod.
        tobs grafana change-password <password-you-want-to-set>

Each of these services is provided with a DNS name internal to your cluster so that they can be accessed from any of your worker nodes, e.g. tobs-kube-prometheus-prometheus.default.svc.cluster.local for Prometheus. In addition, there is a port forwarding command configured for each that allows you to access them from a local web browser.
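If you want to confirm that one of these in-cluster DNS names resolves before setting up any port forwarding, one quick sketch is to launch a throwaway pod (the pod name and busybox image here are illustrative, not part of the TOBS tooling) and query Prometheus's health endpoint from inside the cluster:

  # Run a temporary pod that fetches Prometheus's health endpoint over the cluster-internal
  # DNS name, then deletes itself; it should print a short "Healthy" message.
  kubectl run tobs-dns-test -it --rm --restart=Never --image=busybox -- \
    wget -qO- http://tobs-kube-prometheus-prometheus.default.svc.cluster.local:9090/-/healthy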

      In a new terminal, run tobs prometheus port-forward:

      • tobs prometheus port-forward

      This will occupy the terminal as long as the port forwarding process is active. You can press Ctrl+C to gracefully quit a blocking process such as this one when you want to stop forwarding the port. Next, in a web browser, go to the URL http://127.0.0.1:9090/. You should see the full Prometheus interface running and producing metrics from your cluster:

      Prometheus welcome

      You can do the same for Grafana, which is accessible at http://127.0.0.1:8080/ as long as port forwarding is active in another process. First, you’ll need to use the get-password command provided by the installer output:

      • tobs grafana get-password

      Output

      your-grafana-password

      You can then use this password to log into the Grafana interface by running its port forwarding command and opening http://127.0.0.1:8080/ in your browser.

      • tobs grafana port-forward

      Grafana welcome

      You now have a working TOBS stack running in your Kubernetes cluster. You can refer to the individual components’ documentation in order to learn their respective features. In the last step of this tutorial, you’ll learn how to make updates to the TOBS configuration itself.

      Step 3 — Editing TOBS Configurations and Upgrading

      TOBS’ configuration contains some parameters for the individual applications in the stack, as well as some parameters for the TOBS deployment itself. It is generated and stored as a Kubernetes Helm chart. You can output your current configuration by running tobs helm show-values. However, this will output the entire long configuration to your terminal, which can be difficult to read. You can instead redirect the output to a file with the .yaml extension, because Helm charts are all valid YAML syntax:

      • tobs helm show-values > values.yaml

      The file contents will look like this:

      ~/values.yaml

      2022/01/10 11:56:37 Transport: unhandled response frame type *http.http2UnknownFrame
      # Values for configuring the deployment of TimescaleDB
      # The charts README is at:
      #    https://github.com/timescale/timescaledb-kubernetes/tree/master/charts/timescaledb-single
      # Check out the various configuration options (administration guide) at:
      #    https://github.com/timescale/timescaledb-kubernetes/blob/master/charts/timescaledb-single/admin-guide.md
      cli: false
      
      # Override the deployment namespace
      namespaceOverride: ""
      …
      

You can review the additional parameters available for TOBS’ configuration by reading the TOBS documentation.

      If you ever modify this file in order to update your deployment, you can re-install TOBS over itself using the updated configuration. Just pass the -f option to the tobs install command with the YAML file as an additional argument:

      • tobs install -f values.yaml

Finally, you can upgrade TOBS with the following command:

      • tobs upgrade

      This performs the equivalent of a helm upgrade by fetching the newest upstream chart.

      Conclusion

      In this tutorial, you learned to deploy and configure TOBS, The Observability Stack, on an existing Kubernetes cluster. TOBS is particularly helpful because it eliminates the need to individually maintain configuration details for each of these apps, while providing standardized monitoring for the applications running on your cluster.

      Next, you might want to learn how to use Cert-Manager to handle HTTPS ingress to your Kubernetes cluster.




      How To Build A Security Information and Event Management (SIEM) System with Suricata and the Elastic Stack on Rocky Linux 8



      Introduction

      The previous tutorials in this series guided you through installing, configuring, and running Suricata as an Intrusion Detection (IDS) and Intrusion Prevention (IPS) system. You also learned about Suricata rules and how to create your own.

      In this tutorial you will explore how to integrate Suricata with Elasticsearch, Kibana, and Filebeat to begin creating your own Security Information and Event Management (SIEM) tool using the Elastic stack and Rocky Linux 8. SIEM tools are used to collect, aggregate, store, and analyze event data to search for security threats and suspicious activity on your networks and servers.

      The components that you will use to build your own SIEM are:

      • Elasticsearch to store, index, correlate, and search the security events that come from your Suricata server.
      • Kibana to display and navigate around the security event logs that are stored in Elasticsearch.
      • Filebeat to parse Suricata’s eve.json log file and send each event to Elasticsearch for processing.
      • Suricata to scan your network traffic for suspicious events, and either log or drop invalid packets.

      First you’ll install and configure Elasticsearch and Kibana with some specific authentication settings. Then you’ll add Filebeat to your Suricata system to send its eve.json logs to Elasticsearch.

      Finally, you’ll learn how to connect to Kibana using SSH and your web browser, and then load and interact with Kibana dashboards that show Suricata’s events and alerts.

      Prerequisites

      If you have been following this tutorial series then you should already have Suricata running on a Rocky Linux server. This server will be referred to as your Suricata server.

You will also need a second server to host Elasticsearch and Kibana. This server will be referred to as your Elasticsearch server. It should be a Rocky Linux 8 server set up like your Suricata server, with a non-root user that has sudo privileges and with firewalld enabled.

      For the purposes of this tutorial, both servers should be able to communicate using private IP addresses. You can use a VPN like WireGuard to connect your servers, or use a cloud-provider that has private networking between hosts. You can also choose to run Elasticsearch, Kibana, Filebeat, and Suricata on the same server for experimenting.

      Step 1 — Installing Elasticsearch and Kibana

      The first step in this tutorial is to install Elasticsearch and Kibana on your Elasticsearch server. To get started, add the Elastic GPG key to your server with the following command:

      • sudo rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch

Next, create an elasticsearch.repo file in your /etc/yum.repos.d directory with the following contents, using vi or your preferred editor. This ensures that the upstream Elasticsearch repositories will be used when installing new packages via yum:

      • sudo vi /etc/yum.repos.d/elasticsearch.repo

      /etc/yum.repos.d/elasticsearch.repo

      [elasticsearch]
      name=Elasticsearch repository for 7.x packages
      baseurl=https://artifacts.elastic.co/packages/7.x/yum
      gpgcheck=1
      gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
      enabled=0
      autorefresh=1
      type=rpm-md
      

      If you are using vi, when you are finished making changes, press ESC and then :x to write the changes to the file and quit.

      Now install Elasticsearch and Kibana using the dnf command. Press Y to accept any prompts about GPG key fingerprints:

      • sudo dnf install --enablerepo=elasticsearch elasticsearch kibana

      The --enablerepo option is used to override the default disabled setting in the /etc/yum.repos.d/elasticsearch.repo file. This approach ensures that the Elasticsearch and Kibana packages do not get accidentally upgraded when you install other package updates to your server.

Once you are done installing the packages, find and record your server’s private IP address using the ip address show command (the -brief flag condenses the output to one line per interface):

      • ip -brief address show

      You will receive output like the following:

      Output

      lo               UNKNOWN        127.0.0.1/8 ::1/128
      eth0             UP             159.89.122.115/20 10.20.0.8/16 2604:a880:cad:d0::e56:8001/64 fe80::b832:69ff:fe46:7e5d/64
      eth1             UP             10.137.0.5/16 fe80::b883:5bff:fe19:43f3/64

The private network interface in this output is the eth1 device, with the IPv4 address 10.137.0.5. Your device name and IP addresses will be different. Regardless of your device name and private IP address, the address will be from one of the following reserved blocks:

      • 10.0.0.0 to 10.255.255.255 (10/8 prefix)
      • 172.16.0.0 to 172.31.255.255 (172.16/12 prefix)
      • 192.168.0.0 to 192.168.255.255 (192.168/16 prefix)

If you would like to learn more about how these blocks are allocated, visit the RFC 1918 specification.

      Record the private IP address for your Elasticsearch server (in this case 10.137.0.5). This address will be referred to as your_private_ip in the remainder of this tutorial. Also note the name of the network interface, in this case eth1. In the next part of this tutorial you will configure Elasticsearch and Kibana to listen for connections on the private IP address coming from your Suricata server.

      Step 2 — Configuring Elasticsearch

      Elasticsearch is configured to only accept local connections by default. Additionally, it does not have any authentication enabled, so tools like Filebeat will not be able to send logs to it. In this section of the tutorial you will configure the network settings for Elasticsearch and then enable Elasticsearch’s built-in xpack security module.

      Configuring Elasticsearch Networking

Since your Elasticsearch and Suricata servers are separate, you will need to configure Elasticsearch to listen for connections on its private network interface.

      Open the /etc/elasticsearch/elasticsearch.yml file using vi or your preferred editor:

      • sudo vi /etc/elasticsearch/elasticsearch.yml

      Find the commented out #network.host: 192.168.0.1 line between lines 50–60 and add a new line after it that configures the network.bind_host setting, as highlighted below:

      # By default Elasticsearch is only accessible on localhost. Set a different
      # address here to expose this node on the network:
      #
      #network.host: 192.168.0.1
      network.bind_host: ["127.0.0.1", "your_private_ip"]
      #
      # By default Elasticsearch listens for HTTP traffic on the first free port it
      # finds starting at 9200. Set a specific HTTP port here:
      

      Substitute your private IP in place of the your_private_ip address. This line will ensure that Elasticsearch is still available on its local address so that Kibana can reach it, as well as on the private IP address for your server.

      Next, go to the end of the file using the vi shortcut SHIFT+G.

      Add the following highlighted lines to the end of the file:

      . . .
      discovery.type: single-node
      xpack.security.enabled: true
      

      The discovery.type setting allows Elasticsearch to run as a single node, as opposed to in a cluster of other Elasticsearch servers. The xpack.security.enabled setting turns on some of the security features that are included with Elasticsearch.

      Save and close the file when you are done editing it.

      Finally, add firewall rules to ensure your Elasticsearch server is reachable on its private network interface. If you followed the prerequisite tutorials and are using firewalld, run the following commands:

      • sudo firewall-cmd --permanent --zone=internal --change-interface=eth1
      • sudo firewall-cmd --permanent --zone=internal --add-service=elasticsearch
      • sudo firewall-cmd --permanent --zone=internal --add-service=kibana
      • sudo systemctl reload firewalld.service

      Substitute your private network interface name in place of eth1 in the first command if yours is different. That command changes the interface rules to use the internal Firewalld zone, which is more permissive than the default public zone.

The next commands add rules to allow Elasticsearch traffic on ports 9200 and 9300, along with Kibana traffic on port 5601.

      The final command reloads the Firewalld service with the new permanent rules in place.

      Next you will start the Elasticsearch daemon and then configure passwords for use with the xpack security module.

      Starting Elasticsearch

      Now that you have configured networking and the xpack security settings for Elasticsearch, you need to start it for the changes to take effect.

      Run the following systemctl command to start Elasticsearch:

      • sudo systemctl start elasticsearch.service

      Once Elasticsearch finishes starting, you can continue to the next section of this tutorial where you will generate passwords for the default users that are built-in to Elasticsearch.

      Configuring Elasticsearch Passwords

      Now that you have enabled the xpack.security.enabled setting, you need to generate passwords for the default Elasticsearch users. Elasticsearch includes a utility in the /usr/share/elasticsearch/bin directory that can automatically generate random passwords for these users.

      Run the following command to cd to the directory and then generate random passwords for all the default users:

      • cd /usr/share/elasticsearch/bin
      • sudo ./elasticsearch-setup-passwords auto

      You will receive output like the following. When prompted to continue, press y and then RETURN or ENTER:

      Initiating the setup of passwords for reserved users elastic,apm_system,kibana,kibana_system,logstash_system,beats_system,remote_monitoring_user.
      The passwords will be randomly generated and printed to the console.
      Please confirm that you would like to continue [y/N]y
      
      
      Changed password for user apm_system
      PASSWORD apm_system = eWqzd0asAmxZ0gcJpOvn
      
      Changed password for user kibana_system
      PASSWORD kibana_system = 1HLVxfqZMd7aFQS6Uabl
      
      Changed password for user kibana
      PASSWORD kibana = 1HLVxfqZMd7aFQS6Uabl
      
      Changed password for user logstash_system
      PASSWORD logstash_system = wUjY59H91WGvGaN8uFLc
      
      Changed password for user beats_system
      PASSWORD beats_system = 2p81hIdAzWKknhzA992m
      
      Changed password for user remote_monitoring_user
      PASSWORD remote_monitoring_user = 85HF85Fl6cPslJlA8wPG
      
      Changed password for user elastic
      PASSWORD elastic = 6kNbsxQGYZ2EQJiqJpgl
      

      You will not be able to run the utility again, so make sure to record these passwords somewhere secure. You will need to use the kibana_system user’s password in the next section of this tutorial, and the elastic user’s password in the Configuring Filebeat step of this tutorial.
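If you want to confirm that the new passwords work and that Elasticsearch is answering on both of its configured addresses, a quick check with curl might look like the following (substitute the elastic password you just recorded and your own private IP):

  # Query the cluster health endpoint as the elastic superuser on the local address...
  curl -u elastic:your-elastic-password "http://127.0.0.1:9200/_cluster/health?pretty"

  # ...and on the private address that Filebeat will use later in this tutorial.
  curl -u elastic:your-elastic-password "http://your_private_ip:9200/_cluster/health?pretty"

Both requests should return a small JSON document that includes a "status" field; an authentication error instead suggests a mistyped password.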

      At this point in the tutorial you are finished configuring Elasticsearch. The next section explains how to configure Kibana’s network settings and its xpack security module.

      Step 3 — Configuring Kibana

In the previous section of this tutorial, you configured Elasticsearch to listen for connections on your Elasticsearch server’s private IP address. You will need to do the same for Kibana so that Filebeat on your Suricata server can reach it.

      First you’ll enable Kibana’s xpack security functionality by generating some secrets that Kibana will use to store data in Elasticsearch. Then you’ll configure Kibana’s network setting and authentication details to connect to Elasticsearch.

      Enabling xpack.security in Kibana

      To get started with xpack security settings in Kibana, you need to generate some encryption keys. Kibana uses these keys to store session data (like cookies), as well as various saved dashboards and views of data in Elasticsearch.

      You can generate the required encryption keys using the kibana-encryption-keys utility that is included in the /usr/share/kibana/bin directory. Run the following to cd to the directory and then generate the keys:

      • cd /usr/share/kibana/bin/
      • sudo ./kibana-encryption-keys generate -q --force

      The -q flag suppresses the tool’s instructions, and the --force flag will ensure that you create new keys. You should receive output like the following:

      Output

      xpack.encryptedSavedObjects.encryptionKey: 66fbd85ceb3cba51c0e939fb2526f585 xpack.reporting.encryptionKey: 9358f4bc7189ae0ade1b8deeec7f38ef xpack.security.encryptionKey: 8f847a594e4a813c4187fa93c884e92b

      Copy these three keys somewhere secure. You will now add them to Kibana’s /etc/kibana/kibana.yml configuration file.

      Open the file using vi or your preferred editor:

      • sudo vi /etc/kibana/kibana.yml

      Go to the end of the file using the vi shortcut SHIFT+G. Paste the three xpack lines that you copied to the end of the file:

      /etc/kibana/kibana.yml

      . . .
      
      # Specifies locale to be used for all localizable strings, dates and number formats.
      # Supported languages are the following: English - en , by default , Chinese - zh-CN .
      #i18n.locale: "en"
      
      xpack.encryptedSavedObjects.encryptionKey: 66fbd85ceb3cba51c0e939fb2526f585
      xpack.reporting.encryptionKey: 9358f4bc7189ae0ade1b8deeec7f38ef
      xpack.security.encryptionKey: 8f847a594e4a813c4187fa93c884e92b
      

      Keep the file open and proceed to the next section where you will configure Kibana’s network settings.

      Configuring Kibana Networking

      To configure Kibana’s networking so that it is available on your Elasticsearch server’s private IP address, find the commented out #server.host: "localhost" line in /etc/kibana/kibana.yml. The line is near the beginning of the file. Add a new line after it with your server’s private IP address, as highlighted below:

      /etc/kibana/kibana.yml

      # Kibana is served by a back end server. This setting specifies the port to use.
      #server.port: 5601
      
      # Specifies the address to which the Kibana server will bind. IP addresses and host names are both valid values.
      # The default is 'localhost', which usually means remote machines will not be able to connect.
      # To allow connections from remote users, set this parameter to a non-loopback address.
      #server.host: "localhost"
      server.host: "your_private_ip"
      

      Substitute your private IP in place of the your_private_ip address.

      Save and close the file when you are done editing it. Next, you’ll need to configure the username and password that Kibana uses to connect to Elasticsearch.

      Configuring Kibana Credentials

      There are two ways to set the username and password that Kibana uses to authenticate to Elasticsearch. The first is to edit the /etc/kibana/kibana.yml configuration file and add the values there. The second method is to store the values in Kibana’s keystore, which is an obfuscated file that Kibana can use to store secrets.

      We’ll use the keystore method in this tutorial since it avoids editing Kibana’s configuration file directly.

      If you prefer to edit the file instead, the settings to configure in it are elasticsearch.username and elasticsearch.password.

      If you choose to edit the configuration file, skip the rest of the steps in this section.

To add a secret to the keystore using the kibana-keystore utility, first cd to the /usr/share/kibana/bin directory. Next, run the following command to set the username for Kibana:

      • cd /usr/share/kibana/bin
      • sudo ./kibana-keystore add elasticsearch.username

      You will receive a prompt like the following:

      Username Entry

      Enter value for elasticsearch.username: *************
      

      Enter kibana_system when prompted, either by copying and pasting, or typing the username carefully. Each character that you type will be masked with an * asterisk character. Press ENTER or RETURN when you are done entering the username.

      Now repeat the process, this time to save the password. Be sure to copy the password for the kibana_system user that you generated in the previous section of this tutorial. For reference, in this tutorial the example password is 1HLVxfqZMd7aFQS6Uabl.

      Run the following command to set the password:

      • sudo ./kibana-keystore add elasticsearch.password

      When prompted, paste the password to avoid any transcription errors:

      Password Entry

      Enter value for elasticsearch.password: ********************
      

      Starting Kibana

      Now that you have configured networking and the xpack security settings for Kibana, as well as added credentials to the keystore, you need to start it for the changes to take effect.

Run the following systemctl command to start Kibana:

      • sudo systemctl start kibana.service

      Once Kibana starts, you can continue to the next section of this tutorial where you will configure Filebeat on your Suricata server to send its logs to Elasticsearch.

      Step 4 — Installing Filebeat

      Now that your Elasticsearch and Kibana processes are configured with the correct network and authentication settings, the next step is to install and set up Filebeat on your Suricata server.

      To get started installing Filebeat, add the Elastic GPG key to your Suricata server with the following command:

      • sudo rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch

Next, create an elasticsearch.repo file in your /etc/yum.repos.d directory with the following contents, using vi or your preferred editor:

      • sudo vi /etc/yum.repos.d/elasticsearch.repo

      /etc/yum.repos.d/elasticsearch.repo

      [elasticsearch]
      name=Elasticsearch repository for 7.x packages
      baseurl=https://artifacts.elastic.co/packages/7.x/yum
      gpgcheck=1
      gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
      enabled=0
      autorefresh=1
      type=rpm-md
      

When you are finished making changes, save and exit the file. Now install the Filebeat package using the dnf command:

      • sudo dnf install --enablerepo=elasticsearch filebeat

      Next you’ll need to configure Filebeat to connect to both Elasticsearch and Kibana. Open the /etc/filebeat/filebeat.yml configuration file using vi or your preferred editor:

      • sudo vi /etc/filebeat/filebeat.yml

      Find the Kibana section of the file around line 100. Add a line after the commented out #host: "localhost:5601" line that points to your Kibana instance’s private IP address and port:

      /etc/filebeat/filebeat.yml

      . . .
      # Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
      # This requires a Kibana endpoint configuration.
      setup.kibana:
      
        # Kibana Host
        # Scheme and port can be left out and will be set to the default (http and 5601)
        # In case you specify and additional path, the scheme is required: http://localhost:5601/path
        # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
        #host: "localhost:5601"
        host: "your_private_ip:5601"
      
      . . .
      

      This change will ensure that Filebeat can connect to Kibana in order to create the various SIEM indices, dashboards, and processing pipelines in Elasticsearch to handle your Suricata logs.

      Next, find the Elasticsearch Output section of the file around line 130 and edit the hosts, username, and password settings to match the values for your Elasticsearch server:

      output.elasticsearch:
        # Array of hosts to connect to.
        hosts: ["your_private_ip:9200"]
      
        # Protocol - either `http` (default) or `https`.
        #protocol: "https"
      
        # Authentication credentials - either API key or username/password.
        #api_key: "id:api_key"
        username: "elastic"
        password: "6kNbsxQGYZ2EQJiqJpgl"
      
      . . .
      

      Substitute in your Elasticsearch server’s private IP address on the hosts line. Uncomment the username field and leave it set to the elastic user. Change the password field from changeme to the password for the elastic user that you generated in the Configuring Elasticsearch Passwords section of this tutorial.

Save and close the file when you are done editing it. Next, enable Filebeat’s built-in Suricata module with the following command:

      • sudo filebeat modules enable suricata

      Now that Filebeat is configured to connect to Elasticsearch and Kibana, with the Suricata module enabled, the next step is to load the SIEM dashboards and pipelines into Elasticsearch.
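Before loading them, you can optionally confirm that the configuration file parses cleanly and that Filebeat can reach your Elasticsearch server, using Filebeat's built-in test subcommands:

  # Check the syntax of /etc/filebeat/filebeat.yml.
  sudo filebeat test config

  # Attempt a connection to the Elasticsearch output you configured above.
  sudo filebeat test output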

Run the filebeat setup command. It may take a few minutes to load everything:

      • sudo filebeat setup

      Once the command finishes you should receive output like the following:

      Output

      Overwriting ILM policy is disabled. Set `setup.ilm.overwrite: true` for enabling.
      Index setup finished.
      Loading dashboards (Kibana must be running and reachable)
      Loaded dashboards
      Setting up ML using setup --machine-learning is going to be removed in 8.0.0. Please use the ML app instead.
      See more: https://www.elastic.co/guide/en/machine-learning/current/index.html
      It is not possble to load ML jobs into an Elasticsearch 8.0.0 or newer using the Beat.
      Loaded machine learning job configurations
      Loaded Ingest pipelines

      If there are no errors, use the systemctl command to start Filebeat. It will begin sending events from Suricata’s eve.json log to Elasticsearch once it is running.

      • sudo systemctl start filebeat.service

      Now that you have Filebeat, Kibana, and Elasticsearch configured to process your Suricata logs, the last step in this tutorial is to connect to Kibana and explore the SIEM dashboards.

      Step 5 — Navigating Kibana’s SIEM Dashboards

      Kibana is the graphical component of the Elastic stack. You will use Kibana with your browser to explore Suricata’s event and alert data. Since you configured Kibana to only be available via your Elasticsearch server’s private IP address, you will need to use an SSH tunnel to connect to Kibana.

      Connecting to Kibana with SSH

      SSH has an option -L that lets you forward network traffic on a local port over its connection to a remote IP address and port on a server. You will use this option to forward traffic from your browser to your Kibana instance.

      On Linux, macOS, and updated versions of Windows 10 and higher, you can use the built-in SSH client to create the tunnel. You will use this command each time you want to connect to Kibana. You can close this connection at any time and then run the SSH command again to re-establish the tunnel.

      Run the following command in a terminal on your local desktop or laptop computer to create the SSH tunnel to Kibana:

      • ssh -L 5601:your_private_ip:5601 sammy@203.0.113.5 -N

      The various arguments to SSH are:

      • The -L flag forwards traffic to your local system on port 5601 to the remote server.
      • The your_private_ip:5601 portion of the command specifies the service on your Elasticsearch server to which your traffic will be forwarded. In this case that service is Kibana. Be sure to substitute your Elasticsearch server’s private IP address in place of your_private_ip.
      • The 203.0.113.5 address is the public IP address that you use to connect to and administer your server. Substitute your Elasticsearch server’s public IP address in its place.
      • The -N flag instructs SSH to not run a command like an interactive /bin/bash shell, and instead just hold the connection open. It is generally used when forwarding ports like in this example.

      If you would like to close the tunnel at any time, press CTRL+C.

      On Windows your terminal should resemble the following screenshot:

      Note: You may be prompted to enter a password if you are not using an SSH key. Type or paste it into the prompt and press ENTER or RETURN.

      Screenshot of Windows Command Prompt Showing SSH Command to Port Forward to Kibana

      On macOS and Linux your terminal will be similar to the following screenshot:

Screenshot of a macOS or Linux Terminal Showing the SSH Command to Port Forward to Kibana

      Once you have connected to your Elasticsearch server over SSH with the port forward in place, open your browser and visit http://127.0.0.1:5601. You will be redirected to Kibana’s login page:

      Screenshot of a Browser on Kibana's Login Page

      If your browser cannot connect to Kibana you will receive a message like the following in your terminal:

      Output

      channel 3: open failed: connect failed: No route to host

      This error indicates that your SSH tunnel is unable to reach the Kibana service on your server. Ensure that you have specified the correct private IP address for your Elasticsearch server and reload the page in your browser.
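If the error persists, one way to narrow down the problem is to confirm, on the Elasticsearch server itself, that Kibana is listening on the private address you configured. A quick sketch using the ss utility that ships with Rocky Linux 8:

  # Look for a LISTEN socket on port 5601 bound to your private IP.
  sudo ss -tlnp | grep 5601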

Log in to your Kibana server using elastic for the Username, and the elastic user’s password that you copied earlier in this tutorial.

      Browsing Kibana SIEM Dashboards

      Once you are logged into Kibana you can explore the Suricata dashboards that Filebeat configured for you.

      In the search field at the top of the Kibana Welcome page, input the search terms type:dashboard suricata. This search will return two results: the Suricata Events and Suricata Alerts dashboards per the following screenshot:

      Screenshot of a Browser Using Kibana's Global Search Box to Locate Suricata Dashboards

      Click the [Filebeat Suricata] Events Overview result to visit the Kibana dashboard that shows an overview of all logged Suricata events:

      Screenshot of a Browser on Kibana's Suricata Events Dashboard

      To visit the Suricata Alerts dashboard, repeat the search or click the Alerts link that is included in the Events dashboard. Your page should resemble the following screenshot:

      Screenshot of a Browser on Kibana's Suricata Alerts Dashboard

      If you would like to inspect the events and alerts that each dashboard displays, scroll to the bottom of the page where you will find a table that lists each event and alert. You can expand each entry to view the original log entry from Suricata, and examine in detail the various fields like source and destination IPs for an alert, the attack type, Suricata signature ID, and others.

      Kibana also has a built-in set of Security dashboards that you can access using the menu on the left side of the browser window. Navigate to the Network dashboard for an overview of events displayed on a map, as well as aggregate data about events on your network. Your dashboard should resemble the following screenshot:

      Screenshot of a Browser on Kibana's Security -> Network Dashboard

You can scroll to the bottom of the Network dashboard for a table that lists all of the events that match your specified search timeframe. You can also examine each event in detail, or select an event to generate a Kibana timeline that you can then use to investigate specific traffic flows, alerts, or community IDs.

      Conclusion

      In this tutorial you installed and configured Elasticsearch and Kibana on a standalone server. You configured both tools to be available on a private IP address. You also configured Elasticsearch and Kibana’s authentication settings using the xpack security module that is included with each tool.

      After completing the Elasticsearch and Kibana configuration steps, you also installed and configured Filebeat on your Suricata server. You used Filebeat to populate Kibana’s dashboards and start sending Suricata logs to Elasticsearch.

      Finally, you created an SSH tunnel to your Elasticsearch server and logged into Kibana. You located the new Suricata Events and Alerts dashboards, as well as the Network dashboard.

      The last tutorial in this series will guide you through using Kibana’s SIEM functionality to process your Suricata alerts. In it you will explore how to create cases to track specific alerts, timelines to correlate network flows, and rules to match specific Suricata events that you would like to track or analyze in more detail.


