      Secrets Management with Ansible


Ansible stands out for its capabilities in automating server provisioning and management. Its playbooks, its ability to group and organize resources, and numerous other features make it a great asset for administering servers.

      However, Ansible’s operations often necessitate that your playbooks leverage secrets like server passwords, access tokens, and API keys.

      To bring security to the convenience of your Ansible setup, you should use a secrets management process. Secrets management continues to let Ansible automate your server tasks, with all the access it needs. At the same time, secrets management keeps your secrets safely out of plain text files and other vulnerable locations.

      In this tutorial, learn the most useful methods for implementing secrets management with your Ansible setup. The tutorial covers a range of methods, from simple to scalable, and helps you choose the right fit.

      Before You Begin

      1. If you have not already done so, create a Linode account. See our Getting Started with Linode guide.

      2. Follow our guide on Getting Started With Ansible: Basic Installation and Setup. Specifically, follow the sections on setting up a control node and managed nodes, configuring Ansible, and creating an Ansible inventory.

      3. Refer to our guide Automate Server Configuration with Ansible Playbooks for an overview of Ansible playbooks and their operations.

      Secrets in Ansible

      A secret refers to a key or other credential that allows access to a resource or system. Secrets include things like access tokens, API keys, and database & system passwords.

      When managing nodes with Ansible, you often need to provide it with secrets. Typically, you can provide these secrets within Ansible playbooks, but doing so exposes them to possible interception and exploitation.

      To secure your secrets, you should implement secrets management with your Ansible playbooks. Secrets management refers to the ways in which secrets are stored safely, with different methods balancing between accessibility and security.

      Managing Secrets in Ansible

      Several options exist for managing secrets with your Ansible playbooks. The option that fits your needs depends on your particular setup. How accessible you need your secrets to be and how secure you want to make them determine which solutions work best for you.

      The upcoming sections outline some of the most useful options for managing secrets with Ansible. These attempt to cover a range of use cases, from interactive and manual, to automated and integrated.

      All of the examples that follow use an Ansible setup with one control node and two managed nodes. The managed nodes are given the example IP addresses 192.0.2.1 and 192.0.2.2 throughout, and are listed in an ansiblenodes group in the control node’s Ansible inventory.
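
For reference, a minimal inventory matching that layout might look like the following (the group name and addresses are this tutorial's examples; your inventory may also define connection variables):

[ansiblenodes]
192.0.2.1
192.0.2.2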

      Using Prompts to Manually Enter Secrets

Ansible playbooks include the option to prompt users for variables. That same mechanism doubles as a simple way to manage secrets within your Ansible setup.

With this option, you configure your Ansible playbook to prompt users to manually input secrets. The secrets never need to be persisted on the system, so they never sit in plain text files waiting to be exposed. This method is the easiest of the options covered here.

      Of course, this option comes with some significant drawbacks. By not storing the secrets, you also prevent Ansible from accessing them automatically, reducing the ability to integrate your playbooks into automated processes. Additionally, leaving the secrets to manual entry introduces its own risks, as users can mishandle secrets.

      Here is an example Ansible playbook from our Automate Server Configuration with Ansible Playbooks guide. This playbook adds a new non-root user to the managed nodes.

      The playbook uses the vars_prompt option to prompt the user to input a password for the new user. Ansible then hashes the password and deploys the new user to each of the managed nodes.

      Note

      This playbook assumes you have an SSH public key on your control node. The public key allows for secure passwordless connections to the new user in the future. Learn more in our guide Using SSH Public Key Authentication.

      This tutorial also assumes that your control node’s SSH key is secured by a password, and hence uses the --ask-pass option in some of the Ansible playbook commands below. If your SSH key is not secured by a password, remove the --ask-pass option from the Ansible playbook commands shown in this tutorial.

      File: add_limited_user.yml
      ---
      - hosts: ansiblenodes
        remote_user: root
        vars:
          limited_user_name: 'example-user'
        vars_prompt:
          - name: limited_user_password
            prompt: Enter a password for the new non-root user
        tasks:
          - name: "Create a non-root user"
            user: name={{ limited_user_name }}
            password={{ limited_user_password | password_hash('sha512') }}
                  shell=/bin/bash
          - name: Add an authorized key for passwordless logins
            authorized_key: user={{ limited_user_name }} key="{{ lookup('file', '~/.ssh/id_rsa.pub') }}"
          - name: Add the new user to the sudoers list
            lineinfile: dest=/etc/sudoers
                        regexp="{{ limited_user_name }} ALL"
                        line="{{ limited_user_name }} ALL=(ALL) ALL"
                        state=present

      To run the playbook, first make sure you’re in the same directory as the playbook, then execute the following command:

      Ansible Control Node

      ansible-playbook --ask-pass add_limited_user.yml

      Ansible prompts for the SSH password first, then for a password for the new user. The output should resemble what is shown below:

      SSH password:
      Enter a password for the new non-root user:
      
      PLAY [ansiblenodes] ************************************************************
      
      TASK [Gathering Facts] *********************************************************
      ok: [192.0.2.2]
      ok: [192.0.2.1]
      
      TASK [Create a non-root user] **************************************************
      changed: [192.0.2.1]
      changed: [192.0.2.2]
      
TASK [Add an authorized key for passwordless logins] ***************************
      ok: [192.0.2.1]
      ok: [192.0.2.2]
      
TASK [Add the new user to the sudoers list] ************************************
      ok: [192.0.2.1]
      ok: [192.0.2.2]
      
      PLAY RECAP *********************************************************************
      192.0.2.1              : ok=4    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
      192.0.2.2              : ok=4    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

      Using the Ansible Vault to Manage Secrets

      Ansible has a tool, Ansible Vault, that can facilitate secrets management. The Vault encrypts information, which you can then use within your Ansible playbooks.

      With some setup, Ansible Vault can make secrets both secure and accessible. Secrets are encrypted, meaning that no one can get to them without your password. The secrets are, at the same time, made accessible to Ansible. A password file can give Ansible everything it needs to run in an automated setup.
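
Beyond encrypt, the ansible-vault command includes subcommands for working with encrypted files afterward. For example, run against the secrets file created later in this section:

ansible-vault view s3_secrets.enc     # print the decrypted contents
ansible-vault edit s3_secrets.enc     # decrypt, open in your editor, re-encrypt on save
ansible-vault rekey s3_secrets.enc    # change the vault password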

      The vault password can either be entered manually or automatically through a password file. You can even use an external password manager, and implement a script or other solution to retrieve the password.
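
One way to wire in an external password manager: if the file you pass to --vault-password-file is executable, Ansible runs it and reads the vault password from its standard output. Below is a minimal sketch, assuming a hypothetical script named vault-pass.sh and the pass password manager holding an entry called ansible/vault:

#!/usr/bin/env bash
# Hypothetical vault password client. Ansible executes this file and
# captures whatever it prints on stdout as the vault password.
# Assumes the password lives in the "pass" manager under "ansible/vault".
pass show ansible/vault

Mark the script executable with chmod +x vault-pass.sh, then supply it as --vault-password-file vault-pass.sh in place of a plain text password file.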

      This example of Ansible Vault deploys rclone to the managed nodes and configures it to connect to a Linode Object Storage instance. The secrets are the access keys for the object storage instance.

      To follow along, you need to set up a Linode Object Storage instance with access keys and at least one bucket. You can learn how to do so in our guide Object Storage – Get Started.

1. Create a file with the access keys for your Linode Object Storage instance. You can do so with the following commands, replacing the text in angle brackets with your corresponding object storage keys:

        Ansible Control Node

        echo "s3_access_token: <S3_ACCESS_TOKEN>" > s3_secrets.enc
        echo "s3_secret_token: <S3_SECRET_TOKEN>" >> s3_secrets.enc
        ansible-vault encrypt s3_secrets.enc

        Ansible Vault prompts you to create a vault password before encrypting the file’s contents.

        New Vault password:
        Confirm New Vault password:
        Encryption successful
2. Create a password file in the same directory where you intend to create the Ansible playbook. The file needs to contain only the password for your encrypted secrets file. The example in this next command assumes your password is examplepassword:

        Ansible Control Node

        echo "examplepassword" > example.pwd
      3. Create a new Ansible playbook with the following contents. This playbook connects to the non-root users created using the playbook in the previous section of this tutorial. The playbook then installs rclone and creates a configuration file for it. The playbook also inserts the access keys from the s3_secrets.enc file into the configuration file.

        File: set_up_rclone.yml
        ---
        - hosts: ansiblenodes
          remote_user: 'example-user'
          become: yes
          become_method: sudo
          vars:
            s3_region: 'us-southeast-1'
          tasks:
            - name: "Install rclone"
              apt:
                pkg:
                  - rclone
                state: present
                update_cache: yes
            - name: "Create the directory for the rclone configuration"
              file:
                path: "/home/example-user/.config/rclone"
                state: directory
            - name: "Create the rclone configuration file"
              copy:
                dest: "/home/example-user/.config/rclone/rclone.conf"
                content: |
                  [linodes3]
                  type = s3
                  env_auth = false
                  acl = private
                  access_key_id = {{ s3_access_token }}
                  secret_access_key = {{ s3_secret_token }}
                  region = {{ s3_region }}
                  endpoint = {{ s3_region }}.linodeobjects.com          
4. Run the Ansible playbook. The command here adds the variables from the secrets file using the -e option and reads the password for decrypting them from the file named by --vault-password-file. The --ask-become-pass option has Ansible prompt for the limited user's sudo password.

        Ansible Control Node

        ansible-playbook -e @s3_secrets.enc --vault-password-file example.pwd --ask-pass --ask-become-pass set_up_rclone.yml

        The result should resemble:

        SSH password:
        BECOME password[defaults to SSH password]:
        
        PLAY [ansiblenodes] ************************************************************
        
        TASK [Gathering Facts] *********************************************************
        ok: [192.0.2.2]
        ok: [192.0.2.1]
        
        TASK [Install rclone] **********************************************************
        changed: [192.0.2.1]
        changed: [192.0.2.2]
        
        TASK [Create the directory for the rclone configuration] ***********************
        changed: [192.0.2.2]
        changed: [192.0.2.1]
        
        TASK [Create the rclone configuration file] ************************************
        changed: [192.0.2.2]
        changed: [192.0.2.1]
        
        PLAY RECAP *********************************************************************
        192.0.2.1              : ok=4    changed=3    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
        192.0.2.2              : ok=4    changed=3    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
      5. To verify that everything is working as expected, log into either of the managed nodes as the non-root user. Then use the following command to list the buckets on your Linode Object Storage instance:

        Ansible Managed Node

  rclone lsd linodes3:

        You should see something like the following for each bucket, where ansible-test-bucket is the name of the bucket:

        -1 2022-12-08 00:00:00        -1 ansible-test-bucket

      Using a Secrets Manager

      Dedicated solutions exist for managing secrets, and many password managers are capable of doing so for your Ansible playbooks. In terms of their underlying methods, many of these tools function similarly to Ansible Vault. Despite being external tools, several are supported by official or community plugins for Ansible.

The primary advantage of an external secrets management solution is the ability to use a tool your team or organization has already adopted. Ansible Vault integrates with Ansible by default, but it is unlikely to be the tool your organization uses for wider password management.

      One of the more popular solutions for secret management is HashiCorp’s Vault. HashiCorp’s Vault is a centralized secrets management system with a dynamic infrastructure to keep passwords, keys, and other secrets secure.

      Ansible maintains a plugin for interacting with HashiCorp’s Vault, the hashi_vault plugin.

      The following steps walk you through an example using HashiCorp’s Vault with Ansible. The example accomplishes the same ends as the example in the previous section, so you can more easily compare the two.

      1. Follow along with our guide on Setting Up and Using a Vault Server. By the end, you should have HashiCorp’s Vault installed, a vault server running and unsealed, and be logged into the vault.

      2. Ensure that the key-value (kv) engine is enabled for the secret path:

        Vault Server

        vault secrets enable -path=secret/ kv
        Success! Enabled the kv secrets engine at: secret/
3. Add the access keys for your Linode Object Storage instance to the secret/s3 path in the vault. Replace the text in angle brackets below with your corresponding keys:

        Vault Server

        vault kv put secret/s3 s3_access_token=<S3_ACCESS_TOKEN> s3_secret_token=<S3_SECRET_TOKEN>
        Success! Data written to: secret/s3
      4. On your Ansible control node, install hvac via pip in order to use the hashi_vault plugin referenced in the Ansible playbook below.

        Ansible Control Node

  pip install hvac

      5. Create a new Ansible playbook with the contents shown below. This parallels the playbook built in the previous section, which installs and configures rclone to connect to a Linode Object Storage instance. This version simply fetches the secrets from a HashiCorp vault instead of an Ansible vault:

        Replace both instances of <HASHI_VAULT_IP> below with the IP address for your HashiCorp Vault server. Similarly, replace both instances of <HASHI_VAULT_TOKEN> with your login token for the HashiCorp Vault server.

        File: another_rclone_setup.yml
        ---
        - hosts: ansiblenodes
          remote_user: 'example-user'
          become: yes
          become_method: sudo
          vars:
            s3_region: 'us-southeast-1'
          tasks:
            - name: "Install rclone"
              apt:
                pkg:
                  - rclone
                state: present
                update_cache: yes
            - name: "Create the directory for the rclone configuration"
              file:
                path: "/home/example-user/.config/rclone"
                state: directory
            - name: "Create the rclone configuration file"
              copy:
                dest: "/home/example-user/.config/rclone/rclone.conf"
                content: |
                  [linodes3]
                  type = s3
                  env_auth = false
                  acl = private
                  access_key_id = {{ lookup('hashi_vault', 'secret=secret/s3:s3_access_token token=<HASHI_VAULT_TOKEN> url=http://<HASHI_VAULT_IP>:8200')}}
                  secret_access_key = {{ lookup('hashi_vault', 'secret=secret/s3:s3_secret_token token=<HASHI_VAULT_TOKEN> url=http://<HASHI_VAULT_IP>:8200')}}
                  region = {{ s3_region }}
                  endpoint = {{ s3_region }}.linodeobjects.com          
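
  Hard-coding the vault token in the playbook keeps this example self-contained, but note that the hashi_vault lookup can also read its connection details from the VAULT_ADDR and VAULT_TOKEN environment variables on the control node. Under that assumption, each lookup shrinks to something like:

  access_key_id = {{ lookup('hashi_vault', 'secret=secret/s3:s3_access_token') }}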
      6. Run the Ansible playbook, providing the appropriate passwords when prompted:

        Ansible Control Node

        ansible-playbook --ask-pass --ask-become-pass another_rclone_setup.yml

        The result should resemble:

        SSH password:
        BECOME password[defaults to SSH password]:
        
        PLAY [ansiblenodes] ********************************************************
        
        TASK [Gathering Facts] *****************************************************
        ok: [192.0.2.2]
        ok: [192.0.2.1]
        
        TASK [Install rclone] ******************************************************
        changed: [192.0.2.2]
        changed: [192.0.2.1]
        
        TASK [Create the directory for the rclone configuration] *******************
        changed: [192.0.2.2]
        changed: [192.0.2.1]
        
        TASK [Create the rclone configuration file] ********************************
        changed: [192.0.2.1]
        changed: [192.0.2.2]
        
        PLAY RECAP *****************************************************************
        192.0.2.1              : ok=4    changed=3    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
        192.0.2.2              : ok=4    changed=3    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
7. Just as in the previous section, you can verify the setup by logging into one of the managed nodes and running an rclone listing command, such as rclone lsd linodes3:.

      Conclusion

      You now have some options to ensure that your Ansible setup has secure secrets. Choosing between these options comes down to scale and accessibility. Manual entry is simple to start with, but only suits smaller projects and teams. Ansible Vault is in many ways ideal, but an external solution may better fit your team and organization.

      To keep learning about Ansible and efficiently automating your server tasks, read more of our guides on Ansible.


      SQL Security and User Management


User management and permissions are essential to SQL database security. Typically, SQL database security schemes consist of one or more users, their authentication, and permissions. The database engine validates a user's permissions when they attempt to perform an operation against a SQL object (for example, a table, an index, or a stored procedure). The basic premise behind the assignment of SQL roles and permissions is to provide users of the database access to only what is necessary to perform their job. In this guide, you learn how to create and assign roles and permissions to users of relational database systems.

      Users and Groups

      In order to grant access rights and permissions, a relational database management system requires user identities.
      These rights and permissions can be assigned to either an individual user, or a group of users. If you have more than one user with similar access requirements and restrictions, you can define a group. Then, you add the collective set of users as members of the appropriate group. In this way, the authentication and validation process for a given SQL object is applied against the group instead of the user. This assumes that no restrictions have been established for individual users. In the case where a user and the user’s group both have access restrictions on a given SQL object, the database applies the most restrictive access rights of either the user or the user’s group.

      Roles

      Users of relational database systems are typically assigned roles. Different users might need to perform different tasks on the same database. For example, one user might be in charge of data entry, another user might be the database administrator, and an end-user may only need to retrieve data from the database. Typically, users that have the same type of role in an organization require the same type of database access. Each database role can have its own data access permission levels. Once the role is created and the appropriate permissions are applied, you can add individual users to that role. All users assigned to a particular role inherit its permissions.

      Permissions

      There are two different types of permissions that can be assigned to roles, users, and groups: statement permissions and object permissions. Statement permissions grant access to execute specific statements against a database. For example, a user could be granted access to create a stored procedure, but not be granted the right to create tables. Object permissions, on the other hand, grant the user the right to access a database object such as a table, a view, or to execute a stored procedure.
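
To make the distinction concrete, here is a hypothetical pair of GRANT statements in the Microsoft SQL Server syntax used later in this guide; the first assigns a statement permission and the second an object permission (SomeRole and SomeTable are placeholder names):

-- Statement permission: the role may execute CREATE TABLE statements
GRANT CREATE TABLE TO SomeRole;
-- Object permission: the role may read rows from one specific table
GRANT SELECT ON SomeTable TO SomeRole;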

      Implementation of Users, Groups, Roles, and Permissions

      When it comes to the management of users, groups, roles, and permissions, the concepts stated in the previous sections are quite uniform across SQL-based database management systems. What may differ are the names of commands and the syntax used by different SQL database implementations.

      Note

      The examples below use Microsoft SQL Server syntax. All commands should be executed from the command line. The examples also assume that all server security hardening has already been implemented.

      To demonstrate SQL security principles, this guide uses an example database that is used by a school. The school’s database has tables for students and courses taken by each student. The definition of the Student table contains columns for the student’s SSNumber, Firstname, and Lastname, and the definition of the CourseTaken table contains columns for SSNumber, CourseId, NumericGrade, and YearTaken.

      The example further assumes that four employees in the school administer the school database. Their respective roles are defined as follows:

Name    Database Role
Tom     Database Administrator
John    Database Data Entry
Mary    Database Query and Reports
Joan    Database Query and Reports

      In the example below, assume that Tom, the database administrator (DBA), has created the school database via the CREATE DATABASE command:

      CREATE DATABASE School;
      

      Next, Tom creates database user login definitions for all four employees (including themselves) via the CREATE USER command:

USE School;
CREATE USER Tom WITH PASSWORD = 'Tompassword';
CREATE USER John WITH PASSWORD = 'Johnpassword';
CREATE USER Mary WITH PASSWORD = 'Marypassword';
CREATE USER Joan WITH PASSWORD = 'Joanpassword';

The exact CREATE USER syntax varies between database systems. In MySQL, for example, the first statement above would instead be written with the IDENTIFIED BY clause:

CREATE USER Tom IDENTIFIED BY 'TomPassword';
      

      After creating user login definitions, Tom creates generic roles that will later be assigned to each employee, by using the CREATE ROLE command:

      USE School;
      CREATE ROLE DBAdmin;
      CREATE ROLE DataEntry;
      CREATE ROLE QueryReports;
      

      Now that the roles exist, Tom assigns the roles to the appropriate users with the ALTER ROLE command as follows:

      USE School
      ALTER ROLE DBAdmin ADD MEMBER Tom;
      ALTER ROLE DataEntry ADD MEMBER John;
      ALTER ROLE QueryReports ADD MEMBER Mary;
      ALTER ROLE QueryReports ADD MEMBER Joan;
      

      The workflow demonstrated in this section reflects the user management steps a DBA might need to take when configuring a newly created database.

      Granting Permissions

      The GRANT statement is used to assign permissions to a user or to a role. You can also use the GRANT statement to assign specific statement permissions to a user or to a role. Some of the statement permissions that can be granted are: CREATE DATABASE, CREATE DEFAULT, CREATE PROCEDURE, CREATE RULE, CREATE TABLE, CREATE VIEW, DUMP DATABASE, and DUMP TRANSACTION.

      For example, to grant the CREATE PROCEDURE statement permission to a user or a role, use the following command:

      GRANT CREATE PROCEDURE TO <User or Role>;
      

      Continuing along with this guide’s school database example, you can grant various permissions to the database roles you created in the previous section. Tom first grants required privileges to the DBAdmin Role (Tom’s role), via the GRANT command, as follows:

      USE School;
      GRANT CREATE DATABASE TO DBAdmin;
      GRANT CREATE RULE TO DBAdmin;
      GRANT CREATE TABLE TO DBAdmin;
      GRANT CREATE VIEW TO DBAdmin;
      GRANT DUMP DATABASE TO DBAdmin;
      GRANT DUMP TRANSACTION TO DBAdmin;
      

      Now, Tom can create the two tables in the school’s database as follows:

      USE School;
      CREATE TABLE Student (
        SSNumber CHAR(9) NOT NULL,
        LastName VARCHAR(30) NOT NULL,
        FirstName VARCHAR(20) NOT NULL
      );
      
      CREATE TABLE CourseTaken (
        SSNumber CHAR(9) NOT NULL,
        CourseId CHAR(6) NOT NULL,
        NumericGrade TINYINT NOT NULL,
        YearTaken SMALLINT NOT NULL
      );
      

Tom grants the necessary data entry permissions (INSERT, UPDATE, DELETE) on both database tables to employee John, via the DataEntry role, as follows:

      USE School;
GRANT INSERT, UPDATE, DELETE ON Student TO DataEntry;
GRANT INSERT, UPDATE, DELETE ON CourseTaken TO DataEntry;
      

      Note

After executing the above GRANT commands, John is permitted to INSERT, UPDATE, and DELETE data in the two database tables, but is not permitted to read (SELECT) from them.

Tom grants the necessary database read permission (SELECT) on both database tables to employees Mary and Joan, via the QueryReports role, as follows:

      USE School;
      GRANT SELECT ON Student TO QueryReports;
      GRANT SELECT ON CourseTaken TO QueryReports;
      

      Note

      After executing the above GRANT commands, Mary and Joan can only read the database tables (via the SELECT statement), but cannot manipulate the data (via the INSERT, UPDATE, or DELETE statements).

      Revoking Permissions

      Revoking permissions is the converse of granting permissions on database objects. You can revoke permissions from a table, view, table-valued function, stored procedure, and many other types of database objects.

Continuing with the school database example, assume that John switches his role at the school from performing data entry to querying reports. Due to this change, John should no longer have the ability to manipulate data (INSERT, UPDATE, DELETE) in the school tables. John should also be granted the ability to read data from the tables (via SELECT). Tom, the database administrator, needs to execute the following commands to revoke and grant the appropriate permissions for John:

      USE School;
REVOKE INSERT, UPDATE, DELETE ON Student FROM John;
      REVOKE INSERT, UPDATE, DELETE ON CourseTaken FROM John;
      GRANT SELECT ON Student TO John;
      GRANT SELECT ON CourseTaken TO John;
      

Alternatively, a simpler approach is to remove John from the DataEntry role and add him to the QueryReports role:

      USE School;
ALTER ROLE DataEntry DROP MEMBER John;
      ALTER ROLE QueryReports ADD MEMBER John;
      

      Conclusion

      User management, permissions, and roles are essential to SQL database security. Create a new group and add users to that group if they require the same database access and permissions. To control access by the tasks users should be allowed to perform against a database, use database roles.

      In SQL databases, every action must pass through a validity check that determines if the database action can be completed by a particular user. The appropriate permissions are required to access SQL database objects and execute statements. The integrity of a SQL database relies on secure and well-designed user management.

Now that you are familiar with SQL user management, you can learn about other aspects of the SQL language, like joins, data types, and grouping and totaling.




      How To Build A Security Information and Event Management (SIEM) System with Suricata and the Elastic Stack on Rocky Linux 8



      Introduction

The previous tutorials in this series guided you through installing, configuring, and running Suricata as an Intrusion Detection System (IDS) and Intrusion Prevention System (IPS). You also learned about Suricata rules and how to create your own.

      In this tutorial you will explore how to integrate Suricata with Elasticsearch, Kibana, and Filebeat to begin creating your own Security Information and Event Management (SIEM) tool using the Elastic stack and Rocky Linux 8. SIEM tools are used to collect, aggregate, store, and analyze event data to search for security threats and suspicious activity on your networks and servers.

      The components that you will use to build your own SIEM are:

      • Elasticsearch to store, index, correlate, and search the security events that come from your Suricata server.
      • Kibana to display and navigate around the security event logs that are stored in Elasticsearch.
      • Filebeat to parse Suricata’s eve.json log file and send each event to Elasticsearch for processing.
      • Suricata to scan your network traffic for suspicious events, and either log or drop invalid packets.

      First you’ll install and configure Elasticsearch and Kibana with some specific authentication settings. Then you’ll add Filebeat to your Suricata system to send its eve.json logs to Elasticsearch.

      Finally, you’ll learn how to connect to Kibana using SSH and your web browser, and then load and interact with Kibana dashboards that show Suricata’s events and alerts.

      Prerequisites

      If you have been following this tutorial series then you should already have Suricata running on a Rocky Linux server. This server will be referred to as your Suricata server.

You will also need a second server to host Elasticsearch and Kibana. This server will be referred to as your Elasticsearch server. It should be a Rocky Linux 8 server with a non-root sudo user and a firewall configured, as described in the prerequisite tutorials.

For the purposes of this tutorial, both servers should be able to communicate using private IP addresses. You can use a VPN like WireGuard to connect your servers, or use a cloud provider that has private networking between hosts. You can also choose to run Elasticsearch, Kibana, Filebeat, and Suricata on the same server for experimenting.

      Step 1 — Installing Elasticsearch and Kibana

      The first step in this tutorial is to install Elasticsearch and Kibana on your Elasticsearch server. To get started, add the Elastic GPG key to your server with the following command:

      • sudo rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch

Next, create an elasticsearch.repo file in your /etc/yum.repos.d directory with the following contents, using vi or your preferred editor. This ensures that the upstream Elasticsearch repositories will be used when installing new packages via yum:

      • sudo vi /etc/yum.repos.d/elasticsearch.repo

      /etc/yum.repos.d/elasticsearch.repo

      [elasticsearch]
      name=Elasticsearch repository for 7.x packages
      baseurl=https://artifacts.elastic.co/packages/7.x/yum
      gpgcheck=1
      gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
      enabled=0
      autorefresh=1
      type=rpm-md
      

      If you are using vi, when you are finished making changes, press ESC and then :x to write the changes to the file and quit.

      Now install Elasticsearch and Kibana using the dnf command. Press Y to accept any prompts about GPG key fingerprints:

      • sudo dnf install --enablerepo=elasticsearch elasticsearch kibana

      The --enablerepo option is used to override the default disabled setting in the /etc/yum.repos.d/elasticsearch.repo file. This approach ensures that the Elasticsearch and Kibana packages do not get accidentally upgraded when you install other package updates to your server.
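
If you want to confirm that the repository stays disabled for day-to-day updates, you can list disabled repositories; the elasticsearch repository ID should appear in the output:

• sudo dnf repolist --disabled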

Once you are done installing the packages, find and record your server's private IP address using the ip command in its brief output mode:

• ip -brief address show

      You will receive output like the following:

      Output

lo               UNKNOWN        127.0.0.1/8 ::1/128
eth0             UP             159.89.122.115/20 10.20.0.8/16 2604:a880:cad:d0::e56:8001/64 fe80::b832:69ff:fe46:7e5d/64
eth1             UP             10.137.0.5/16 fe80::b883:5bff:fe19:43f3/64

The private network interface in this output is the eth1 device, with the IPv4 address 10.137.0.5. Your device names and IP addresses will be different. Regardless of your device name and private IP address, the address will be from one of the following reserved blocks:

      • 10.0.0.0 to 10.255.255.255 (10/8 prefix)
      • 172.16.0.0 to 172.31.255.255 (172.16/12 prefix)
      • 192.168.0.0 to 192.168.255.255 (192.168/16 prefix)

If you would like to learn more about how these blocks are allocated, see the RFC 1918 specification.

      Record the private IP address for your Elasticsearch server (in this case 10.137.0.5). This address will be referred to as your_private_ip in the remainder of this tutorial. Also note the name of the network interface, in this case eth1. In the next part of this tutorial you will configure Elasticsearch and Kibana to listen for connections on the private IP address coming from your Suricata server.

      Step 2 — Configuring Elasticsearch

      Elasticsearch is configured to only accept local connections by default. Additionally, it does not have any authentication enabled, so tools like Filebeat will not be able to send logs to it. In this section of the tutorial you will configure the network settings for Elasticsearch and then enable Elasticsearch’s built-in xpack security module.

      Configuring Elasticsearch Networking

Since your Elasticsearch and Suricata servers are separate, you will need to configure Elasticsearch to listen for connections on its private network interface.

      Open the /etc/elasticsearch/elasticsearch.yml file using vi or your preferred editor:

      • sudo vi /etc/elasticsearch/elasticsearch.yml

      Find the commented out #network.host: 192.168.0.1 line between lines 50–60 and add a new line after it that configures the network.bind_host setting, as highlighted below:

      # By default Elasticsearch is only accessible on localhost. Set a different
      # address here to expose this node on the network:
      #
      #network.host: 192.168.0.1
      network.bind_host: ["127.0.0.1", "your_private_ip"]
      #
      # By default Elasticsearch listens for HTTP traffic on the first free port it
      # finds starting at 9200. Set a specific HTTP port here:
      

      Substitute your private IP in place of the your_private_ip address. This line will ensure that Elasticsearch is still available on its local address so that Kibana can reach it, as well as on the private IP address for your server.

      Next, go to the end of the file using the vi shortcut SHIFT+G.

      Add the following highlighted lines to the end of the file:

      . . .
      discovery.type: single-node
      xpack.security.enabled: true
      

      The discovery.type setting allows Elasticsearch to run as a single node, as opposed to in a cluster of other Elasticsearch servers. The xpack.security.enabled setting turns on some of the security features that are included with Elasticsearch.

      Save and close the file when you are done editing it.

      Finally, add firewall rules to ensure your Elasticsearch server is reachable on its private network interface. If you followed the prerequisite tutorials and are using firewalld, run the following commands:

      • sudo firewall-cmd --permanent --zone=internal --change-interface=eth1
      • sudo firewall-cmd --permanent --zone=internal --add-service=elasticsearch
      • sudo firewall-cmd --permanent --zone=internal --add-service=kibana
      • sudo systemctl reload firewalld.service

      Substitute your private network interface name in place of eth1 in the first command if yours is different. That command changes the interface rules to use the internal Firewalld zone, which is more permissive than the default public zone.

The next commands add rules to allow Elasticsearch traffic on ports 9200 and 9300, along with Kibana traffic on port 5601.

      The final command reloads the Firewalld service with the new permanent rules in place.
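
To double-check the new rules, you can print everything attached to the internal zone; the interface and both services should be listed:

• sudo firewall-cmd --zone=internal --list-all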

      Next you will start the Elasticsearch daemon and then configure passwords for use with the xpack security module.

      Starting Elasticsearch

      Now that you have configured networking and the xpack security settings for Elasticsearch, you need to start it for the changes to take effect.

      Run the following systemctl command to start Elasticsearch:

      • sudo systemctl start elasticsearch.service

      Once Elasticsearch finishes starting, you can continue to the next section of this tutorial where you will generate passwords for the default users that are built-in to Elasticsearch.

      Configuring Elasticsearch Passwords

      Now that you have enabled the xpack.security.enabled setting, you need to generate passwords for the default Elasticsearch users. Elasticsearch includes a utility in the /usr/share/elasticsearch/bin directory that can automatically generate random passwords for these users.

      Run the following command to cd to the directory and then generate random passwords for all the default users:

      • cd /usr/share/elasticsearch/bin
      • sudo ./elasticsearch-setup-passwords auto

      You will receive output like the following. When prompted to continue, press y and then RETURN or ENTER:

      Initiating the setup of passwords for reserved users elastic,apm_system,kibana,kibana_system,logstash_system,beats_system,remote_monitoring_user.
      The passwords will be randomly generated and printed to the console.
      Please confirm that you would like to continue [y/N]y
      
      
      Changed password for user apm_system
      PASSWORD apm_system = eWqzd0asAmxZ0gcJpOvn
      
      Changed password for user kibana_system
      PASSWORD kibana_system = 1HLVxfqZMd7aFQS6Uabl
      
      Changed password for user kibana
      PASSWORD kibana = 1HLVxfqZMd7aFQS6Uabl
      
      Changed password for user logstash_system
      PASSWORD logstash_system = wUjY59H91WGvGaN8uFLc
      
      Changed password for user beats_system
      PASSWORD beats_system = 2p81hIdAzWKknhzA992m
      
      Changed password for user remote_monitoring_user
      PASSWORD remote_monitoring_user = 85HF85Fl6cPslJlA8wPG
      
      Changed password for user elastic
      PASSWORD elastic = 6kNbsxQGYZ2EQJiqJpgl
      

      You will not be able to run the utility again, so make sure to record these passwords somewhere secure. You will need to use the kibana_system user’s password in the next section of this tutorial, and the elastic user’s password in the Configuring Filebeat step of this tutorial.
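
As an optional sanity check, you can confirm that Elasticsearch is up and enforcing authentication by querying it with curl, substituting the elastic user's generated password for the example value shown:

• curl -u elastic:6kNbsxQGYZ2EQJiqJpgl http://127.0.0.1:9200

A JSON document describing the node indicates that Elasticsearch is running and that the credentials work.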

      At this point in the tutorial you are finished configuring Elasticsearch. The next section explains how to configure Kibana’s network settings and its xpack security module.

      Step 3 — Configuring Kibana

In the previous section of this tutorial, you configured Elasticsearch to listen for connections on your Elasticsearch server's private IP address. You will need to do the same for Kibana so that Filebeat on your Suricata server can reach it.

      First you’ll enable Kibana’s xpack security functionality by generating some secrets that Kibana will use to store data in Elasticsearch. Then you’ll configure Kibana’s network setting and authentication details to connect to Elasticsearch.

      Enabling xpack.security in Kibana

      To get started with xpack security settings in Kibana, you need to generate some encryption keys. Kibana uses these keys to store session data (like cookies), as well as various saved dashboards and views of data in Elasticsearch.

      You can generate the required encryption keys using the kibana-encryption-keys utility that is included in the /usr/share/kibana/bin directory. Run the following to cd to the directory and then generate the keys:

      • cd /usr/share/kibana/bin/
      • sudo ./kibana-encryption-keys generate -q --force

      The -q flag suppresses the tool’s instructions, and the --force flag will ensure that you create new keys. You should receive output like the following:

      Output

xpack.encryptedSavedObjects.encryptionKey: 66fbd85ceb3cba51c0e939fb2526f585
xpack.reporting.encryptionKey: 9358f4bc7189ae0ade1b8deeec7f38ef
xpack.security.encryptionKey: 8f847a594e4a813c4187fa93c884e92b

      Copy these three keys somewhere secure. You will now add them to Kibana’s /etc/kibana/kibana.yml configuration file.

      Open the file using vi or your preferred editor:

      • sudo vi /etc/kibana/kibana.yml

      Go to the end of the file using the vi shortcut SHIFT+G. Paste the three xpack lines that you copied to the end of the file:

      /etc/kibana/kibana.yml

      . . .
      
      # Specifies locale to be used for all localizable strings, dates and number formats.
      # Supported languages are the following: English - en , by default , Chinese - zh-CN .
      #i18n.locale: "en"
      
      xpack.encryptedSavedObjects.encryptionKey: 66fbd85ceb3cba51c0e939fb2526f585
      xpack.reporting.encryptionKey: 9358f4bc7189ae0ade1b8deeec7f38ef
      xpack.security.encryptionKey: 8f847a594e4a813c4187fa93c884e92b
      

      Keep the file open and proceed to the next section where you will configure Kibana’s network settings.

      Configuring Kibana Networking

      To configure Kibana’s networking so that it is available on your Elasticsearch server’s private IP address, find the commented out #server.host: "localhost" line in /etc/kibana/kibana.yml. The line is near the beginning of the file. Add a new line after it with your server’s private IP address, as highlighted below:

      /etc/kibana/kibana.yml

      # Kibana is served by a back end server. This setting specifies the port to use.
      #server.port: 5601
      
      # Specifies the address to which the Kibana server will bind. IP addresses and host names are both valid values.
      # The default is 'localhost', which usually means remote machines will not be able to connect.
      # To allow connections from remote users, set this parameter to a non-loopback address.
      #server.host: "localhost"
      server.host: "your_private_ip"
      

      Substitute your private IP in place of the your_private_ip address.

      Save and close the file when you are done editing it. Next, you’ll need to configure the username and password that Kibana uses to connect to Elasticsearch.

      Configuring Kibana Credentials

      There are two ways to set the username and password that Kibana uses to authenticate to Elasticsearch. The first is to edit the /etc/kibana/kibana.yml configuration file and add the values there. The second method is to store the values in Kibana’s keystore, which is an obfuscated file that Kibana can use to store secrets.

      We’ll use the keystore method in this tutorial since it avoids editing Kibana’s configuration file directly.

      If you prefer to edit the file instead, the settings to configure in it are elasticsearch.username and elasticsearch.password.
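
For reference, if you did edit the file directly, those two lines would look like the following (the password shown is this tutorial's example kibana_system password):

elasticsearch.username: "kibana_system"
elasticsearch.password: "1HLVxfqZMd7aFQS6Uabl"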

      If you choose to edit the configuration file, skip the rest of the steps in this section.

To add a secret to the keystore using the kibana-keystore utility, first cd to the /usr/share/kibana/bin directory. Next, run the following command to set the username for Kibana:

      • cd /usr/share/kibana/bin
      • sudo ./kibana-keystore add elasticsearch.username

      You will receive a prompt like the following:

      Username Entry

      Enter value for elasticsearch.username: *************
      

      Enter kibana_system when prompted, either by copying and pasting, or typing the username carefully. Each character that you type will be masked with an * asterisk character. Press ENTER or RETURN when you are done entering the username.

      Now repeat the process, this time to save the password. Be sure to copy the password for the kibana_system user that you generated in the previous section of this tutorial. For reference, in this tutorial the example password is 1HLVxfqZMd7aFQS6Uabl.

      Run the following command to set the password:

      • sudo ./kibana-keystore add elasticsearch.password

      When prompted, paste the password to avoid any transcription errors:

      Password Entry

      Enter value for elasticsearch.password: ********************
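
To verify that both secrets were stored, you can list the keystore's contents. The list subcommand prints only the setting names, never the values:

• sudo ./kibana-keystore list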
      

      Starting Kibana

      Now that you have configured networking and the xpack security settings for Kibana, as well as added credentials to the keystore, you need to start it for the changes to take effect.

Run the following systemctl command to start Kibana:

      • sudo systemctl start kibana.service

      Once Kibana starts, you can continue to the next section of this tutorial where you will configure Filebeat on your Suricata server to send its logs to Elasticsearch.

      Step 4 — Installing Filebeat

      Now that your Elasticsearch and Kibana processes are configured with the correct network and authentication settings, the next step is to install and set up Filebeat on your Suricata server.

      To get started installing Filebeat, add the Elastic GPG key to your Suricata server with the following command:

      • sudo rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch

Next, create an elasticsearch.repo file in your /etc/yum.repos.d directory with the following contents, using vi or your preferred editor:

      • sudo vi /etc/yum.repos.d/elasticsearch.repo

      /etc/yum.repos.d/elasticsearch.repo

      [elasticsearch]
      name=Elasticsearch repository for 7.x packages
      baseurl=https://artifacts.elastic.co/packages/7.x/yum
      gpgcheck=1
      gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
      enabled=0
      autorefresh=1
      type=rpm-md
      

      When you are finished making changes save and exit the file. Now install the Filebeat package using the dnf command:

      • sudo dnf install --enablerepo=elasticsearch filebeat

      Next you’ll need to configure Filebeat to connect to both Elasticsearch and Kibana. Open the /etc/filebeat/filebeat.yml configuration file using vi or your preferred editor:

      • sudo vi /etc/filebeat/filebeat.yml

      Find the Kibana section of the file around line 100. Add a line after the commented out #host: "localhost:5601" line that points to your Kibana instance’s private IP address and port:

      /etc/filebeat/filebeat.yml

      . . .
      # Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
      # This requires a Kibana endpoint configuration.
      setup.kibana:
      
        # Kibana Host
        # Scheme and port can be left out and will be set to the default (http and 5601)
        # In case you specify and additional path, the scheme is required: http://localhost:5601/path
        # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
        #host: "localhost:5601"
        host: "your_private_ip:5601"
      
      . . .
      

      This change will ensure that Filebeat can connect to Kibana in order to create the various SIEM indices, dashboards, and processing pipelines in Elasticsearch to handle your Suricata logs.

      Next, find the Elasticsearch Output section of the file around line 130 and edit the hosts, username, and password settings to match the values for your Elasticsearch server:

      output.elasticsearch:
        # Array of hosts to connect to.
        hosts: ["your_private_ip:9200"]
      
        # Protocol - either `http` (default) or `https`.
        #protocol: "https"
      
        # Authentication credentials - either API key or username/password.
        #api_key: "id:api_key"
        username: "elastic"
        password: "6kNbsxQGYZ2EQJiqJpgl"
      
      . . .
      

      Substitute in your Elasticsearch server’s private IP address on the hosts line. Uncomment the username field and leave it set to the elastic user. Change the password field from changeme to the password for the elastic user that you generated in the Configuring Elasticsearch Passwords section of this tutorial.

Save and close the file when you are done editing it. Next, enable Filebeat's built-in Suricata module with the following command:

      • sudo filebeat modules enable suricata
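
To confirm, you can list Filebeat's modules; suricata should now appear under the Enabled section of the output:

• sudo filebeat modules list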

      Now that Filebeat is configured to connect to Elasticsearch and Kibana, with the Suricata module enabled, the next step is to load the SIEM dashboards and pipelines into Elasticsearch.

      Run the filebeat setup command. It may take a few minutes to load everything:

• sudo filebeat setup

      Once the command finishes you should receive output like the following:

      Output

Overwriting ILM policy is disabled. Set `setup.ilm.overwrite: true` for enabling.

Index setup finished.
Loading dashboards (Kibana must be running and reachable)
Loaded dashboards
Setting up ML using setup --machine-learning is going to be removed in 8.0.0. Please use the ML app instead.
See more: https://www.elastic.co/guide/en/machine-learning/current/index.html
It is not possble to load ML jobs into an Elasticsearch 8.0.0 or newer using the Beat.
Loaded machine learning job configurations
Loaded Ingest pipelines

      If there are no errors, use the systemctl command to start Filebeat. It will begin sending events from Suricata’s eve.json log to Elasticsearch once it is running.

      • sudo systemctl start filebeat.service

      Now that you have Filebeat, Kibana, and Elasticsearch configured to process your Suricata logs, the last step in this tutorial is to connect to Kibana and explore the SIEM dashboards.

      Step 5 — Navigating Kibana’s SIEM Dashboards

      Kibana is the graphical component of the Elastic stack. You will use Kibana with your browser to explore Suricata’s event and alert data. Since you configured Kibana to only be available via your Elasticsearch server’s private IP address, you will need to use an SSH tunnel to connect to Kibana.

      Connecting to Kibana with SSH

      SSH has an option -L that lets you forward network traffic on a local port over its connection to a remote IP address and port on a server. You will use this option to forward traffic from your browser to your Kibana instance.

      On Linux, macOS, and updated versions of Windows 10 and higher, you can use the built-in SSH client to create the tunnel. You will use this command each time you want to connect to Kibana. You can close this connection at any time and then run the SSH command again to re-establish the tunnel.

      Run the following command in a terminal on your local desktop or laptop computer to create the SSH tunnel to Kibana:

      • ssh -L 5601:your_private_ip:5601 sammy@203.0.113.5 -N

      The various arguments to SSH are:

      • The -L flag forwards traffic to your local system on port 5601 to the remote server.
• The your_private_ip:5601 portion of the command specifies the service on your Elasticsearch server where your traffic will be forwarded to. In this case that service is Kibana. Be sure to substitute your Elasticsearch server’s private IP address in place of your_private_ip.
      • The 203.0.113.5 address is the public IP address that you use to connect to and administer your server. Substitute your Elasticsearch server’s public IP address in its place.
      • The -N flag instructs SSH to not run a command like an interactive /bin/bash shell, and instead just hold the connection open. It is generally used when forwarding ports like in this example.

      If you would like to close the tunnel at any time, press CTRL+C.

      On Windows your terminal should resemble the following screenshot:

      Note: You may be prompted to enter a password if you are not using an SSH key. Type or paste it into the prompt and press ENTER or RETURN.

      Screenshot of Windows Command Prompt Showing SSH Command to Port Forward to Kibana

      On macOS and Linux your terminal will be similar to the following screenshot:

Screenshot of a macOS or Linux Terminal Showing the SSH Command to Port Forward to Kibana

      Once you have connected to your Elasticsearch server over SSH with the port forward in place, open your browser and visit http://127.0.0.1:5601. You will be redirected to Kibana’s login page:

      Screenshot of a Browser on Kibana's Login Page

      If your browser cannot connect to Kibana you will receive a message like the following in your terminal:

      Output

      channel 3: open failed: connect failed: No route to host

      This error indicates that your SSH tunnel is unable to reach the Kibana service on your server. Ensure that you have specified the correct private IP address for your Elasticsearch server and reload the page in your browser.

      Log in to your Kibana server using elastic for the Username, and the password that you copied earlier in this tutorial for the user.

      Browsing Kibana SIEM Dashboards

      Once you are logged into Kibana you can explore the Suricata dashboards that Filebeat configured for you.

In the search field at the top of the Kibana Welcome page, enter the search terms type:dashboard suricata. This search returns two results, the Suricata Events and Suricata Alerts dashboards, as shown in the following screenshot:

      Screenshot of a Browser Using Kibana's Global Search Box to Locate Suricata Dashboards

      Click the [Filebeat Suricata] Events Overview result to visit the Kibana dashboard that shows an overview of all logged Suricata events:

      Screenshot of a Browser on Kibana's Suricata Events Dashboard

      To visit the Suricata Alerts dashboard, repeat the search or click the Alerts link that is included in the Events dashboard. Your page should resemble the following screenshot:

      Screenshot of a Browser on Kibana's Suricata Alerts Dashboard

      If you would like to inspect the events and alerts that each dashboard displays, scroll to the bottom of the page where you will find a table that lists each event and alert. You can expand each entry to view the original log entry from Suricata, and examine in detail the various fields like source and destination IPs for an alert, the attack type, Suricata signature ID, and others.

      Kibana also has a built-in set of Security dashboards that you can access using the menu on the left side of the browser window. Navigate to the Network dashboard for an overview of events displayed on a map, as well as aggregate data about events on your network. Your dashboard should resemble the following screenshot:

      Screenshot of a Browser on Kibana's Security -> Network Dashboard

You can scroll to the bottom of the Network dashboard for a table that lists all of the events that match your specified search timeframe. You can also examine each event in detail, or select an event to generate a Kibana timeline that you can then use to investigate specific traffic flows, alerts, or community IDs.

      Conclusion

      In this tutorial you installed and configured Elasticsearch and Kibana on a standalone server. You configured both tools to be available on a private IP address. You also configured Elasticsearch and Kibana’s authentication settings using the xpack security module that is included with each tool.

      After completing the Elasticsearch and Kibana configuration steps, you also installed and configured Filebeat on your Suricata server. You used Filebeat to populate Kibana’s dashboards and start sending Suricata logs to Elasticsearch.

      Finally, you created an SSH tunnel to your Elasticsearch server and logged into Kibana. You located the new Suricata Events and Alerts dashboards, as well as the Network dashboard.

      The last tutorial in this series will guide you through using Kibana’s SIEM functionality to process your Suricata alerts. In it you will explore how to create cases to track specific alerts, timelines to correlate network flows, and rules to match specific Suricata events that you would like to track or analyze in more detail.


