

      How to Use APT to Manage Packages in Debian and Ubuntu


      Advanced Package Tool, more commonly known as
      APT, is a package management system for Ubuntu, Debian, Kali Linux, and other Debian-based Linux distributions. It acts as a front-end to the lower-level
      dpkg package manager, which is used for installing, managing, and providing information on .deb packages. In addition to these functions, APT interfaces with repositories to obtain packages and also provides very efficient dependency management.

      Most distributions that use APT also include a collection of command-line tools that can be used to interface with APT. These tools include apt-get, apt-cache, and the newer apt, which essentially combines both of the previous tools with some modified functionality. Other package managers and tools also exist for interacting with APT or dpkg. A popular one is called
      Aptitude. Aptitude includes both a command-line interface as well as an interactive user interface. While it does offer advanced functionality, it is not commonly installed by default and is not covered in this guide.

This guide aims to walk you through using APT and its command-line tools to perform common package management functions. The commands and examples throughout this guide default to the apt command. Many of the commands are interchangeable with apt-get or apt-cache, though there may be breaking differences.

      Before You Begin

      Before running the commands within this guide, you will need:

      1. A system running on Debian or Ubuntu. Other Linux distributions that employ the APT package manager can also be used. Review the
        Creating a Compute Instance guide if you do not yet have a compatible system.

      2. Login credentials to the system for either the root user (not recommended) or a standard user account (belonging to the sudo group) and the ability to access the system through
        SSH or
        Lish. Review the
        Setting Up and Securing a Compute Instance guide for assistance on creating and securing a standard user account.

      Note

Some commands in this guide require elevated privileges and are prefixed with the sudo command. If you are logged in as the root user (not recommended), you can omit the sudo prefix. If you’re not familiar with the sudo command, see the
      Linux Users and Groups guide.

      What’s the difference between apt and apt-get/apt-cache?

While there are more similarities than differences, there are a few important points to consider when deciding which command to use.

      • apt: A newer end-user tool that consolidates the functionality of both apt-get and apt-cache. Compared to the others, the apt tool is more straightforward and user-friendly. It also has some extra features, such as a status bar and the ability to list packages. Both Ubuntu and Debian recommend the apt command over apt-get and apt-cache. See
apt Ubuntu man pages.
      • apt-get and apt-cache: The apt-get command manages the installation, upgrades, and removal of packages (and their dependencies). The apt-cache command is used to search for packages and retrieve details about a package. Updates to these commands are designed to never introduce breaking changes, even at the expense of the user experience. The output works well for machine readability and these commands are best limited to use within scripts. See
        apt-get Ubuntu man pages and
        apt-cache Ubuntu man pages.

      In short, apt is a single tool that encompasses most of the functionality of other APT-specific tooling. It is designed primarily for interacting with APT as an end-user and its default functionality may change to include new features or best practices. If you prefer not to risk breaking compatibility and/or prefer to interact with plainer output, apt-get and apt-cache can be used instead, though the exact commands may vary.
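For example, the interactive commands used throughout this guide map to more script-friendly equivalents, based on the relationships described above (shown here only for illustration):

  sudo apt update            # script-friendly: sudo apt-get update
  sudo apt upgrade           # script-friendly: sudo apt-get upgrade --with-new-pkgs
  apt search [string]        # script-friendly: apt-cache search [string]
  apt show [package]         # script-friendly: apt-cache show --no-all-versions [package]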

      Installing Packages

      Installs the specified package and all required dependencies. Replace [package] with the name of the package you wish to install. The apt install command is interchangeable with apt-get install.

      sudo apt install [package]
      

Before installing packages, it’s highly recommended to obtain updated package version and dependency information and to upgrade existing packages and dependencies to their latest versions. See
      Updating Package Information and
      Upgrading Packages for more details. These actions can be performed quickly by running the following sequence of commands:

      sudo apt update && sudo apt upgrade
      

      Additional options, commands, and notes:

      • Install a specific version by adding an equal sign after the package, followed by the version number you’d like to install.

        sudo apt install [package]=[version]
        
      • Reinstall a package and any dependencies by running the following command. This is useful if an installation for a package becomes corrupt or dependencies were somehow removed.

        sudo apt reinstall [package]
        

      Updating Package Information

Downloads package information from all the sources/repositories configured on your system (within /etc/apt/sources.list and the /etc/apt/sources.list.d/ directory). This command obtains details about the latest available version of each package as well as its dependencies. It should be the first step before installing or upgrading packages on your system.

      sudo apt update
      

      This command is equivalent to apt-get update.

      Upgrading Packages

Upgrades all packages to their latest versions, including upgrading existing dependencies and installing new ones. It’s important to note that currently installed packages are never removed by this command.

      sudo apt upgrade
      

      This command is equivalent to apt-get upgrade --with-new-pkgs. Without the --with-new-pkgs option, the apt-get upgrade command only upgrades existing packages/dependencies and ignores any packages that require new dependencies to be installed.

      Before upgrading packages, it’s highly recommended to obtain updated package version and dependency information. See
      Updating Package Information for more details. These two actions can be performed together through the following sequence of commands:

      sudo apt update && sudo apt upgrade
      

      Additional options, commands, and notes:

      • To view a list of all available upgrades, use the list command with the --upgradable option.

apt list --upgradable
        
      • To upgrade a specific package, use the install command and append the package name. If the package is already installed, it will be upgraded to the latest version your system knows about. To only upgrade (not install) a package, use the --only-upgrade option. In the below command, replace [package] with the name of the package you wish to upgrade.

        sudo apt install --only-upgrade [package]
        
      • The apt full-upgrade command (equivalent to apt-get dist-upgrade) can remove packages as well as upgrade and install them. In most cases, it is not recommended to routinely run these commands. To remove unneeded packages (including kernels), use apt autoremove instead.

      Uninstalling Packages

      Removes the specified package from the system, but retains any packages that were installed to satisfy dependencies as well as some configuration files. Replace [package] with the name of the package you’d like to remove.

      sudo apt remove [package]
      

      To remove the package as well as any configuration files, run the following command. This can also be used to just remove configuration files for previously removed packages.

      sudo apt purge [package]
      

These commands are equivalent to apt-get remove and apt-get purge, respectively.

      • To remove any unused dependencies, run apt autoremove (apt-get autoremove). This is commonly done after uninstalling a package or after upgrading packages and can sometimes help in reducing disk space (and clutter).

        sudo apt autoremove
        

      Common Command Options

The following options are available for most of the commands discussed in this guide.

• You can act on multiple packages at once by separating their names with spaces. For example:

        sudo apt install [package1] [package2]
        
• Automatically accept prompts by adding the -y or --yes option. This is useful in scripts, where no user should need to confirm the action on the specified packages.

        sudo apt install [package] -y
        

      Listing Packages

      The apt list command lists all available, installed, or upgradeable packages. This can be incredibly useful for locating specific packages – especially when combined with grep or less. There is no direct equivalent command within apt-cache.

      • List all packages that are installed

        apt list --installed
        
      • List all packages that have an upgrade available

apt list --upgradable
        
      • List all versions of all available packages

        apt list --all-versions
        

      Additional options, commands, and notes:

      • Use
        grep to quickly search through the list for specific package names or other strings. Replace [string] with the package name or other term you wish to search for.

        apt list --installed | grep [string]
        
      • Use a content viewer like
        less to interact with the output, which may help you view or search for your desired information.

        apt list --installed | less
        

      Searching for Available Packages

      Searches through all available packages for the specified term or regex string.

      apt search [string]
      

      The command apt-cache search is similar, though the output for apt search is more user-friendly.

      Additional options, commands, and notes:

      • Use the --full option to see the full description/summary for each package.

        apt search --full [string]
        
      • To find packages whose titles or short/long descriptions contain multiple terms, delimit each string with a space.

        apt search [string1] [string2]
        

      Viewing Information About Packages

      Displays information about an installed or available package. The following command is similar to apt-cache show --no-all-versions [package].

      apt show [package]
      

      The information in the output includes:

      • Package: The name of the package.
      • Version: The version of the package.
      • Installed-Size: The amount of space this package consumes on the disk, not including any dependencies.
      • Depends: A list of dependencies.
• APT-Manual-Installed: Indicates whether the package was installed manually or automatically (for instance, as a dependency of another package). This field is visible within apt (not apt-cache).
      • APT-Sources: The repository where the package information was stored. This is visible within apt (not apt-cache).
      • Description: A long description of the package.

      Adding Repositories

      A repository is a collection of packages (typically for a specific Linux distribution and version) that are stored on a remote system. This enables software distributors to store a package (including new versions) in one place and enable users to quickly install that package onto their system. In most cases, we obtain packages from a repository – as opposed to manually downloading package files.

Information about the repositories that are configured on your system is stored within /etc/apt/sources.list or the directory /etc/apt/sources.list.d/. Repositories can be added manually by editing (or adding) a sources.list configuration file, though most repositories also require adding the GPG public key to APT’s keyring. To automate this process, it’s recommended to use the
      add-apt-repository utility.

      sudo add-apt-repository [repository]
      

Replace [repository] with the URL of the repository or, in the case of a PPA (Personal Package Archive), the reference to that PPA.

      Once a repository has been added, you can update your package list and install the package. See
      Updating Package Information and
      Installing Packages.
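Putting these steps together, the full sequence might look like the following (the PPA reference and package name are placeholders):

  sudo add-apt-repository ppa:[user]/[ppa-name]
  sudo apt update
  sudo apt install [package]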

      Cloning Packages to Another System

      If you wish to replicate the currently installed packages to another system without actually copying over any other data, consider using the
      apt-clone utility. This software is compatible with Debian-based systems and is available through Ubuntu’s official repository.

      1. Install apt-clone.

        sudo apt install apt-clone
        
2. Create a backup containing a list of all installed packages, replacing [name] with the name of the backup (such as my-preferred-packages):

        apt-clone clone [name]
        

This command creates a new file using the name you provided, with .apt-clone.tar.gz appended.

      3. Copy the file to your new system. See the
        Download Files from Your Linode guide or the
        File Transfer section for more information.

      4. Install apt-clone on the new system (see Step 1).

5. Using apt-clone, run the following command to restore the packages. Replace [name] with the name you used when creating the backup (or with whatever the file is called). If the file is located in a different directory than your current directory, adjust the command to include its path.

        sudo apt-clone restore [name].apt-clone.tar.gz
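
As an optional check that is not part of the steps above, the apt-clone info subcommand summarizes what a backup file contains, which can be useful before restoring it:

  apt-clone info [name].apt-clone.tar.gz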




      How To Build A Security Information and Event Management (SIEM) System with Suricata and the Elastic Stack on Debian 11



      Introduction

      The previous tutorials in this series guided you through installing, configuring, and running Suricata as an Intrusion Detection (IDS) and Intrusion Prevention (IPS) system. You also learned about Suricata rules and how to create your own.

      In this tutorial you will explore how to integrate Suricata with Elasticsearch, Kibana, and Filebeat to begin creating your own Security Information and Event Management (SIEM) tool using the Elastic stack and Debian 11. SIEM tools are used to collect, aggregate, store, and analyze event data to search for security threats and suspicious activity on your networks and servers.

      The components that you will use to build your own SIEM tool are:

      • Elasticsearch to store, index, correlate, and search the security events that come from your Suricata server.
      • Kibana to display and navigate around the security event logs that are stored in Elasticsearch.
      • Filebeat to parse Suricata’s eve.json log file and send each event to Elasticsearch for processing.
      • Suricata to scan your network traffic for suspicious events, and either log or drop invalid packets.

      First you’ll install and configure Elasticsearch and Kibana with some specific authentication settings. Then you’ll add Filebeat to your Suricata system to send its eve.json logs to Elasticsearch.

      Finally, you’ll learn how to connect to Kibana using SSH and your web browser, and then load and interact with Kibana dashboards that show Suricata’s events and alerts.

      Prerequisites

If you have been following this tutorial series then you should already have Suricata running on a Debian 11 server. This server will be referred to as your Suricata server.

You will also need a second server to host Elasticsearch and Kibana. This server will be referred to as your Elasticsearch server and should also be running Debian 11.

For the purposes of this tutorial, both servers should be able to communicate using private IP addresses. You can use a VPN like WireGuard to connect your servers, or use a cloud provider that has private networking between hosts. You can also choose to run Elasticsearch, Kibana, Filebeat, and Suricata on the same server for experimenting.

      Step 1 — Installing Elasticsearch and Kibana

      The first step in this tutorial is to install Elasticsearch and Kibana on your Elasticsearch server. To get started, add the Elastic GPG key to your server with the following command:

      • curl -fsSL https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -

      Next, add the Elastic source list to the sources.list.d directory, where apt will search for new sources:

      • echo "deb https://artifacts.elastic.co/packages/7.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-7.x.list

      Now update your server’s package index and install Elasticsearch and Kibana:

      • sudo apt update
      • sudo apt install elasticsearch kibana

Once you are done installing the packages, find and record your server’s private IP address using the ip command (the -brief flag condenses the output):

• ip -brief address show

      You will receive output like the following:

      Output

lo               UNKNOWN        127.0.0.1/8 ::1/128
eth0             UP             159.89.122.115/20 10.20.0.8/16 2604:a880:cad:d0::e56:8001/64 fe80::b832:69ff:fe46:7e5d/64
eth1             UP             10.137.0.5/16 fe80::b883:5bff:fe19:43f3/64

The private network interface in this output is the eth1 device, with the IPv4 address 10.137.0.5/16. Your device name and IP addresses will be different. However, the address will come from the following reserved blocks of addresses:

      • 10.0.0.0 to 10.255.255.255 (10/8 prefix)
      • 172.16.0.0 to 172.31.255.255 (172.16/12 prefix)
      • 192.168.0.0 to 192.168.255.255 (192.168/16 prefix)

If you would like to learn more about how these blocks are allocated, visit the RFC 1918 specification.

      Record the private IP address for your Elasticsearch server (in this case 10.137.0.5). This address will be referred to as your_private_ip in the remainder of this tutorial. Also note the name of the network interface, in this case eth1. In the next part of this tutorial you will configure Elasticsearch and Kibana to listen for connections on the private IP address coming from your Suricata server.

      Step 2 — Configuring Elasticsearch

      Elasticsearch is configured to only accept local connections by default. Additionally, it does not have any authentication enabled, so tools like Filebeat will not be able to send logs to it. In this section of the tutorial you will configure the network settings for Elasticsearch and then enable Elasticsearch’s built-in xpack security module.

      Configuring Elasticsearch Networking

Since your Elasticsearch and Suricata servers are separate, you will need to configure Elasticsearch to listen for connections on its private network interface. You will also need to configure your firewall rules to allow access to Elasticsearch on your private network interface.

      Open the /etc/elasticsearch/elasticsearch.yml file using nano or your preferred editor:

      • sudo nano /etc/elasticsearch/elasticsearch.yml

      Find the commented out #network.host: 192.168.0.1 line between lines 50–60 and add a new line after it that configures the network.bind_host setting, as highlighted below:

      /etc/elasticsearch/elasticsearch.yml

      # By default Elasticsearch is only accessible on localhost. Set a different
      # address here to expose this node on the network:
      #
      #network.host: 192.168.0.1
      network.bind_host: ["127.0.0.1", "your_private_ip"]
      #
      # By default Elasticsearch listens for HTTP traffic on the first free port it
      # finds starting at 9200. Set a specific HTTP port here:
      

      Substitute your private IP in place of the your_private_ip address. This line will ensure that Elasticsearch is still available on its local address so that Kibana can reach it, as well as on the private IP address for your server.

Next, go to the end of the file (in nano, press CTRL+V repeatedly to page down until you reach it).

      Add the following highlighted lines to the end of the file:

      /etc/elasticsearch/elasticsearch.yml

      . . .
      discovery.type: single-node
      xpack.security.enabled: true
      

      The discovery.type setting allows Elasticsearch to run as a single node, as opposed to in a cluster of other Elasticsearch servers. The xpack.security.enabled setting turns on some of the security features that are included with Elasticsearch.

      Save and close the file when you are done editing it. If you are using nano, you can do so with CTRL+X, then Y and ENTER to confirm.

      Finally, add firewall rules to ensure your Elasticsearch server is reachable on its private network interface. If you followed the prerequisite tutorials and are using the Uncomplicated Firewall (ufw), run the following commands:

      • sudo ufw allow in on eth1
      • sudo ufw allow out on eth1

      Substitute your private network interface in place of eth1 if it uses a different name.

      Next you will start the Elasticsearch daemon and then configure passwords for use with the xpack security module.

      Starting Elasticsearch

      Now that you have configured networking and the xpack security settings for Elasticsearch, you need to start it for the changes to take effect.

      Run the following systemctl command to start Elasticsearch:

      • sudo systemctl start elasticsearch.service

      Once Elasticsearch finishes starting, you can continue to the next section of this tutorial where you will generate passwords for the default users that are built-in to Elasticsearch.
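Optionally, you can confirm that Elasticsearch is listening on both the loopback and private addresses before proceeding. This quick check with the ss utility is an extra step that is not part of the original procedure:

• sudo ss -tlnp | grep 9200

You should see listening entries for both 127.0.0.1:9200 and your_private_ip:9200 in the output.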

      Configuring Elasticsearch Passwords

      Now that you have enabled the xpack.security.enabled setting, you need to generate passwords for the default Elasticsearch users. Elasticsearch includes a utility in the /usr/share/elasticsearch/bin directory that can automatically generate random passwords for these users.

      Run the following command to cd to the directory and then generate random passwords for all the default users:

      • cd /usr/share/elasticsearch/bin
      • sudo ./elasticsearch-setup-passwords auto

      You will receive output like the following. When prompted to continue, press y and then RETURN or ENTER:

      Initiating the setup of passwords for reserved users elastic,apm_system,kibana,kibana_system,logstash_system,beats_system,remote_monitoring_user.
      The passwords will be randomly generated and printed to the console.
      Please confirm that you would like to continue [y/N]y
      
      
      Changed password for user apm_system
      PASSWORD apm_system = eWqzd0asAmxZ0gcJpOvn
      
      Changed password for user kibana_system
      PASSWORD kibana_system = 1HLVxfqZMd7aFQS6Uabl
      
      Changed password for user kibana
      PASSWORD kibana = 1HLVxfqZMd7aFQS6Uabl
      
      Changed password for user logstash_system
      PASSWORD logstash_system = wUjY59H91WGvGaN8uFLc
      
      Changed password for user beats_system
      PASSWORD beats_system = 2p81hIdAzWKknhzA992m
      
      Changed password for user remote_monitoring_user
      PASSWORD remote_monitoring_user = 85HF85Fl6cPslJlA8wPG
      
      Changed password for user elastic
      PASSWORD elastic = 6kNbsxQGYZ2EQJiqJpgl
      

      You will not be able to run the utility again, so make sure to record these passwords somewhere secure. You will need to use the kibana_system user’s password in the next section of this tutorial, and the elastic user’s password in the Configuring Filebeat step of this tutorial.
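As an optional sanity check (not part of the original steps), you can verify that authentication is now enforced and that the elastic user’s password works by querying the cluster health API locally. curl will prompt for the password:

• curl -u elastic 'http://127.0.0.1:9200/_cluster/health?pretty'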

      At this point in the tutorial you are finished configuring Elasticsearch. The next section explains how to configure Kibana’s network settings and its xpack security module.

      Step 3 — Configuring Kibana

In the previous section of this tutorial, you configured Elasticsearch to listen for connections on your Elasticsearch server’s private IP address. You will need to do the same for Kibana so that Filebeat on your Suricata server can reach it.

      First you’ll enable Kibana’s xpack security functionality by generating some secrets that Kibana will use to store data in Elasticsearch. Then you’ll configure Kibana’s network setting and authentication details to connect to Elasticsearch.

      Enabling xpack.security in Kibana

      To get started with xpack security settings in Kibana, you need to generate some encryption keys. Kibana uses these keys to store session data (like cookies), as well as various saved dashboards and views of data in Elasticsearch.

      You can generate the required encryption keys using the kibana-encryption-keys utility that is included in the /usr/share/kibana/bin directory. Run the following to cd to the directory and then generate the keys:

      • cd /usr/share/kibana/bin/
      • sudo ./kibana-encryption-keys generate -q

      The -q flag suppresses the tool’s instructions so that you only receive output like the following:

      Output

xpack.encryptedSavedObjects.encryptionKey: 66fbd85ceb3cba51c0e939fb2526f585
xpack.reporting.encryptionKey: 9358f4bc7189ae0ade1b8deeec7f38ef
xpack.security.encryptionKey: 8f847a594e4a813c4187fa93c884e92b

Copy these three keys somewhere secure. You will now add them to Kibana’s /etc/kibana/kibana.yml configuration file.

      Open the file using nano or your preferred editor:

      • sudo nano /etc/kibana/kibana.yml

Go to the end of the file (in nano, press CTRL+V repeatedly to page down until you reach it). Paste the three xpack lines that you copied at the end of the file:

      /etc/kibana/kibana.yml

      . . .
      
      # Specifies locale to be used for all localizable strings, dates and number formats.
      # Supported languages are the following: English - en , by default , Chinese - zh-CN .
      #i18n.locale: "en"
      
      xpack.encryptedSavedObjects.encryptionKey: 66fbd85ceb3cba51c0e939fb2526f585
      xpack.reporting.encryptionKey: 9358f4bc7189ae0ade1b8deeec7f38ef
      xpack.security.encryptionKey: 8f847a594e4a813c4187fa93c884e92b
      

      Keep the file open and proceed to the next section where you will configure Kibana’s network settings.

      Configuring Kibana Networking

      To configure Kibana’s networking so that it is available on your Elasticsearch server’s private IP address, find the commented out #server.host: "localhost" line in /etc/kibana/kibana.yml. The line is near the beginning of the file. Add a new line after it with your server’s private IP address, as highlighted below:

      /etc/kibana/kibana.yml

      # Kibana is served by a back end server. This setting specifies the port to use.
      #server.port: 5601
      
      # Specifies the address to which the Kibana server will bind. IP addresses and host names are both valid values.
      # The default is 'localhost', which usually means remote machines will not be able to connect.
      # To allow connections from remote users, set this parameter to a non-loopback address.
      #server.host: "localhost"
      server.host: "your_private_ip"
      

      Substitute your private IP in place of the your_private_ip address.

      Save and close the file when you are done editing it. If you are using nano, you can do so with CTRL+X, then Y and ENTER to confirm.

      Next, you’ll need to configure the username and password that Kibana uses to connect to Elasticsearch.

      Configuring Kibana Credentials

      There are two ways to set the username and password that Kibana uses to authenticate to Elasticsearch. The first is to edit the /etc/kibana/kibana.yml configuration file and add the values there. The second method is to store the values in Kibana’s keystore, which is an obfuscated file that Kibana can use to store secrets.

We’ll use the keystore method in this tutorial since it avoids editing Kibana’s configuration file directly.

      If you prefer to edit the file instead, the settings to configure in it are elasticsearch.username and elasticsearch.password.

      If you choose to edit the configuration file, skip the rest of the steps in this section.

      To add a secret to the keystore using the kibana-keystore utility, first cd to the /usr/share/kibana/bin directory. Next, run the following command to set the username for Kibana:

      • sudo ./kibana-keystore add elasticsearch.username

      You will receive a prompt like the following:

      Username Entry

      Enter value for elasticsearch.username: *************
      

      Enter kibana_system when prompted, either by copying and pasting, or typing the username carefully. Each character that you type will be masked with an * asterisk character. Press ENTER or RETURN when you are done entering the username.

      Now repeat the same command for the password. Be sure to copy the password for the kibana_system user that you generated in the previous section of this tutorial. For reference, in this tutorial the example password is 1HLVxfqZMd7aFQS6Uabl.

      Run the following command to set the password:

      • sudo ./kibana-keystore add elasticsearch.password

      When prompted, paste the password to avoid any transcription errors:

      Password Entry

      Enter value for elasticsearch.password: ********************
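
If you want to confirm which settings are stored in the keystore, the utility includes a list command that prints the key names without revealing their values. This is an optional check:

• sudo ./kibana-keystore list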
      

      Starting Kibana

      Now that you have configured networking and the xpack security settings for Kibana, as well as added credentials to the keystore, you need to start it for the changes to take effect.

Run the following systemctl command to start Kibana:

      • sudo systemctl start kibana.service

      Once Kibana starts, you can continue to the next section of this tutorial where you will configure Filebeat on your Suricata server to send its logs to Elasticsearch.
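Before moving to the Suricata server, you can optionally confirm that Kibana is listening on the private IP address; this extra check is not part of the original tutorial:

• sudo ss -tlnp | grep 5601

If nothing is listed, give Kibana a minute to finish starting and check sudo systemctl status kibana.service.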

      Step 4 — Installing Filebeat

      Now that your Elasticsearch and Kibana processes are configured with the correct network and authentication settings, the next step is to install and set up Filebeat on your Suricata server.

      To get started installing Filebeat, add the Elastic GPG key to your Suricata server with the following command:

      • curl -fsSL https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -

      Next, add the Elastic source list to the sources.list.d directory, where apt will search for new sources:

      • echo "deb https://artifacts.elastic.co/packages/7.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-7.x.list

      Now update the server’s package index and install the Filebeat package:

      • sudo apt update
      • sudo apt install filebeat

      Next you’ll need to configure Filebeat to connect to both Elasticsearch and Kibana. Open the /etc/filebeat/filebeat.yml configuration file using nano or your preferred editor:

      • sudo nano /etc/filebeat/filebeat.yml

      Find the Kibana section of the file around line 100. Add a line after the commented out #host: "localhost:5601" line that points to your Kibana instance’s private IP address and port:

      /etc/filebeat/filebeat.yml

      . . .
      # Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
      # This requires a Kibana endpoint configuration.
      setup.kibana:
      
        # Kibana Host
        # Scheme and port can be left out and will be set to the default (http and 5601)
        # In case you specify and additional path, the scheme is required: http://localhost:5601/path
        # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
        #host: "localhost:5601"
        host: "your_private_ip:5601"
      
      . . .
      

      This change will ensure that Filebeat can connect to Kibana in order to create the various SIEM indices, dashboards, and processing pipelines in Elasticsearch to handle your Suricata logs.

      Next, find the Elasticsearch Output section of the file around line 130 and edit the hosts, username, and password settings to match the values for your Elasticsearch server:

      output.elasticsearch:
        # Array of hosts to connect to.
        hosts: ["your_private_ip:9200"]
      
        # Protocol - either `http` (default) or `https`.
        #protocol: "https"
      
        # Authentication credentials - either API key or username/password.
        #api_key: "id:api_key"
        username: "elastic"
        password: "6kNbsxQGYZ2EQJiqJpgl"
      
      . . .
      

      Substitute in your Elasticsearch server’s private IP address on the hosts line in place of the your_private_ip value. Uncomment the username field and leave it set to the elastic user. Change the password field from changeme to the password for the elastic user that you generated in the Configuring Elasticsearch Passwords section of this tutorial.

      Save and close the file when you are done editing it. If you are using nano, you can do so with CTRL+X, then Y and ENTER to confirm.

Next, enable Filebeat’s built-in Suricata module with the following command:

      • sudo filebeat modules enable suricata
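
You can optionally confirm that the module is now active; the filebeat modules list subcommand prints the enabled and disabled modules:

• sudo filebeat modules list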

      Now that Filebeat is configured to connect to Elasticsearch and Kibana, with the Suricata module enabled, the next step is to load the SIEM dashboards and pipelines into Elasticsearch.

Run the filebeat setup command. It may take a few minutes to load everything:

• sudo filebeat setup

      Once the command finishes you should receive output like the following:

      Output

Overwriting ILM policy is disabled. Set `setup.ilm.overwrite: true` for enabling.
Index setup finished.
Loading dashboards (Kibana must be running and reachable)
Loaded dashboards
Setting up ML using setup --machine-learning is going to be removed in 8.0.0. Please use the ML app instead.
See more: https://www.elastic.co/guide/en/machine-learning/current/index.html
It is not possible to load ML jobs into an Elasticsearch 8.0.0 or newer using the Beat.
Loaded machine learning job configurations
Loaded Ingest pipelines

      If there are no errors, use the systemctl command to start Filebeat. It will begin sending events from Suricata’s eve.json log to Elasticsearch once it is running.

      • sudo systemctl start filebeat.service
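
If Filebeat does not start cleanly, or events never appear in Kibana, Filebeat’s built-in test subcommands are a helpful first diagnostic (optional, not part of the original steps):

• sudo filebeat test config
• sudo filebeat test output

The first command validates the configuration file, and the second attempts a connection to the configured Elasticsearch output using the credentials you entered above.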

      Now that you have Filebeat, Kibana, and Elasticsearch configured to process your Suricata logs, the last step in this tutorial is to connect to Kibana and explore the SIEM dashboards.

      Step 5 — Navigating Kibana’s SIEM Dashboards

      Kibana is the graphical component of the Elastic stack. You will use Kibana with your browser to explore Suricata’s event and alert data. Since you configured Kibana to only be available via your Elasticsearch server’s private IP address, you will need to use an SSH tunnel to connect to Kibana.

      Connecting to Kibana with SSH

      SSH has an option -L that lets you forward network traffic on a local port over its connection to a remote IP address and port on a server. You will use this option to forward traffic from your browser to your Kibana instance.

On Linux, macOS, and Windows 10 and higher, you can use the built-in SSH client to create the tunnel. You will use this command each time you want to connect to Kibana. You can close this connection at any time and then run the SSH command again to re-establish the tunnel.

      Run the following command in a terminal on your local desktop or laptop computer to create the SSH tunnel to Kibana:

      • ssh -L 5601:your_private_ip:5601 sammy@203.0.113.5 -N

      The various arguments to SSH are:

• The -L flag forwards connections made to port 5601 on your local system through the SSH connection to the remote address and port.
• The your_private_ip:5601 portion of the command specifies the service on your Elasticsearch server where your traffic will be forwarded to. In this case that service is Kibana. Be sure to substitute your Elasticsearch server’s private IP address in place of your_private_ip.
• The 203.0.113.5 address is the public IP address that you use to connect to and administer your server. Substitute your Elasticsearch server’s public IP address in its place.
• The -N flag instructs SSH to not run a command like an interactive /bin/bash shell, and instead to just hold the connection open. It is generally used when forwarding ports like in this example.

      If you would like to close the tunnel at any time, press CTRL+C.
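If you plan to open this tunnel frequently, you could optionally add an entry to the ~/.ssh/config file on your local machine so that a short command recreates it. The host alias below is arbitrary, and the addresses are the same placeholders used above:

Host kibana-tunnel
    HostName 203.0.113.5
    User sammy
    LocalForward 5601 your_private_ip:5601

With that entry in place, running ssh -N kibana-tunnel establishes the same port forward.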

      On Windows your terminal should resemble the following screenshot:

      Note: You may be prompted to enter a password if you are not using an SSH key. Type or paste it into the prompt and press ENTER or RETURN.

      Screenshot of Windows Command Prompt Showing SSH Command to Port Forward to Kibana

      On macOS and Linux your terminal will be similar to the following screenshot:

Screenshot of a macOS or Linux Terminal Showing SSH Command to Port Forward to Kibana

      Once you have connected to your Elasticsearch server over SSH with the port forward in place, open your browser and visit http://127.0.0.1:5601. You will be redirected to Kibana’s login page:

      Screenshot of a Browser on Kibana's Login Page

      If your browser cannot connect to Kibana you will receive a message like the following in your terminal:

      Output

      channel 3: open failed: connect failed: No route to host

      This error indicates that your SSH tunnel is unable to reach the Kibana service on your server. Ensure that you have specified the correct private IP address for your Elasticsearch server and reload the page in your browser.

      Log in to your Kibana server using elastic for the Username, and the password that you copied earlier in this tutorial for the user.

      Browsing Kibana SIEM Dashboards

      Once you are logged into Kibana you can explore the Suricata dashboards that Filebeat configured for you.

In the search field at the top of the Kibana Welcome page, input the search terms type:dashboard suricata. This search will return two results, the Suricata Events and Suricata Alerts dashboards, as shown in the following screenshot:

      Screenshot of a Browser Using Kibana's Global Search Box to Locate Suricata Dashboards

      Click the [Filebeat Suricata] Events Overview result to visit the Kibana dashboard that shows an overview of all logged Suricata events:

      Screenshot of a Browser on Kibana's Suricata Events Dashboard

      To visit the Suricata Alerts dashboard, repeat the search or click the Alerts link that is included in the Events dashboard. Your page should resemble the following screenshot:

      Screenshot of a Browser on Kibana's Suricata Alerts Dashboard

      If you would like to inspect the events and alerts that each dashboard displays, scroll to the bottom of the page where you will find a table that lists each event and alert. You can expand each entry to view the original log entry from Suricata, and examine in detail the various fields like source and destination IPs for an alert, the attack type, Suricata signature ID, and others.

      Kibana also has a built-in set of Security dashboards that you can access using the menu on the left side of the browser window. Navigate to the Network dashboard for an overview of events displayed on a map, as well as aggregate data about events on your network. Your dashboard should resemble the following screenshot:

      Screenshot of a Browser on Kibana's Security -> Network Dashboard

You can scroll to the bottom of the Network dashboard for a table that lists all of the events that match your specified search timeframe. You can also examine each event in detail, or select an event to generate a Kibana timeline that you can then use to investigate specific traffic flows, alerts, or community IDs.

      Conclusion

      In this tutorial you installed and configured Elasticsearch and Kibana on a standalone server. You configured both tools to be available on a private IP address. You also configured Elasticsearch and Kibana’s authentication settings using the xpack security module that is included with each tool.

      After completing the Elasticsearch and Kibana configuration steps, you also installed and configured Filebeat on your Suricata server. You used Filebeat to populate Kibana’s dashboards and start sending Suricata logs to Elasticsearch.

      Finally, you created an SSH tunnel to your Elasticsearch server and logged into Kibana. You located the new Suricata Events and Alerts dashboards, as well as the Network dashboard.

      The last tutorial in this series will guide you through using Kibana’s SIEM functionality to process your Suricata alerts. In it you will explore how to create cases to track specific alerts, timelines to correlate network flows, and rules to match specific Suricata events that you would like to track or analyze in more detail.




      How To Configure Suricata as an Intrusion Prevention System (IPS) on Debian 11



      Introduction

      In this tutorial you will learn how to configure Suricata’s built-in Intrusion Prevention System (IPS) mode on Debian 11. By default Suricata is configured to run as an Intrusion Detection System (IDS), which only generates alerts and logs suspicious traffic. When you enable IPS mode, Suricata can actively drop suspicious network traffic in addition to generating alerts for further analysis.

Before enabling IPS mode, it is important to check which signatures you have enabled, and their default actions. An incorrectly configured or overly broad signature may result in dropping legitimate traffic to your network, or even block you from accessing your servers over SSH and other management protocols.

      In the first part of this tutorial you will check the signatures that you have installed and enabled. You will also learn how to include your own signatures. Once you know which signatures you would like to use in IPS mode, you’ll convert their default action to drop or reject traffic. With your signatures in place, you’ll learn how to send network traffic through Suricata using the netfilter NFQUEUE iptables target, and then generate some invalid network traffic to ensure that Suricata drops it as expected.

      Prerequisites

      If you have been following this tutorial series then you should already have Suricata running on a server. If you still need to install Suricata then you can follow one of these tutorials depending on your server’s operating system:

      • How To Install Suricata on Debian 11

      • You should also have the ET Open Ruleset downloaded using the suricata-update command, and included in your Suricata signatures.

      • The jq command line JSON processing tool. If you do not have it installed from a previous tutorial, you can do so using the apt command:

        • sudo apt update
        • sudo apt install jq

      You may also have custom signatures that you would like to use from the previous Understanding Suricata Signatures tutorial.

      Step 1 — Including Custom Signatures

      The previous tutorials in this series explored how to install and configure Suricata, as well as how to understand signatures. If you would like to create and include your own signatures then you need to edit Suricata’s /etc/suricata/suricata.yaml file to add them.

First, let’s find your server’s public IPs so that you can use them in your custom signatures. To find your IPs you can use the ip command (the -brief flag condenses the output):

• ip -brief address show

      You should receive output like the following:

      Output

lo               UNKNOWN        127.0.0.1/8 ::1/128
eth0             UP             203.0.113.5/20 10.20.0.5/16 2604:a880:cad:d0::dc8:4001/64 fe80::94ad:d4ff:fef9:cee0/64
eth1             UP             10.137.0.2/16 fe80::44a2:ebff:fe91:5187/64

Your public IP address(es) will be similar to the 203.0.113.5 and 2604:a880:cad:d0::dc8:4001/64 IPs in the output.

      Now let’s create the following custom signature to scan for SSH traffic to non-SSH ports and include it in a file called /etc/suricata/rules/local.rules. Open the file with nano or your preferred editor:

      • sudo nano /etc/suricata/rules/local.rules

      Copy and paste the following signature:

      Invalid SSH Traffic Signature

alert ssh any any -> 203.0.113.5 !22 (msg:"SSH TRAFFIC on non-SSH port"; flow:to_client, not_established; classtype: misc-attack; target: dest_ip; sid:1000000;)
alert ssh any any -> 2604:a880:cad:d0::dc8:4001/64 !22 (msg:"SSH TRAFFIC on non-SSH port"; flow:to_client, not_established; classtype: misc-attack; target: dest_ip; sid:1000001;)
      

      Substitute in your server’s public IP address in place of the 203.0.113.5 and 2604:a880:cad:d0::dc8:4001/64 addresses in the rule. If you are not using IPv6 then you can skip adding that signature in this and the following rules.

      You can continue adding custom signatures to this local.rules file depending on your network and applications. For example, if you wanted to alert about HTTP traffic to non-standard ports, you could use the following signatures:

      HTTP traffic on non-standard port signature

alert http any any -> 203.0.113.5 !80 (msg:"HTTP REQUEST on non-HTTP port"; flow:to_client, not_established; classtype:misc-activity; sid:1000002;)
alert http any any -> 2604:a880:cad:d0::dc8:4001/64 !80 (msg:"HTTP REQUEST on non-HTTP port"; flow:to_client, not_established; classtype:misc-activity; sid:1000003;)
      

      To add a signature that checks for TLS traffic to ports other than the default 443 for web servers, add the following:

      TLS traffic on non-standard port signature

alert tls any any -> 203.0.113.5 !443 (msg:"TLS TRAFFIC on non-TLS HTTP port"; flow:to_client, not_established; classtype:misc-activity; sid:1000004;)
alert tls any any -> 2604:a880:cad:d0::dc8:4001/64 !443 (msg:"TLS TRAFFIC on non-TLS HTTP port"; flow:to_client, not_established; classtype:misc-activity; sid:1000005;)
      

      When you are done adding signatures, save and close the file. If you are using nano, you can do so with CTRL+X, then Y and ENTER to confirm. If you are using vi, press ESC and then :x then ENTER to save and exit.

      Now that you have some custom signatures defined, edit Suricata’s /etc/suricata/suricata.yaml configuration file using nano or your preferred editor to include them:

      • sudo nano /etc/suricata/suricata.yaml

      Find the rule-files: portion of the configuration. If you are using nano use CTRL+_ and then enter the line number 1879. If you are using vi enter 1879gg to go to the line.

      Edit the section and add the following highlighted - local.rules line:

      /etc/suricata/suricata.yaml

      . . .
      rule-files:
        - suricata.rules
        - local.rules
      . . .
      

      Save and exit the file. Be sure to validate Suricata’s configuration after adding your rules. To do so run the following command:

      • sudo suricata -T -c /etc/suricata/suricata.yaml -v

      The test can take some time depending on how many rules you have loaded in the default suricata.rules file. If you find the test takes too long, you can comment out the - suricata.rules line in the configuration by adding a # to the beginning of the line and then run your configuration test again.
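With the - suricata.rules line commented out, the rule-files section would look like the following; remember to remove the # afterward so that the ET Open signatures are loaded again:

/etc/suricata/suricata.yaml

. . .
rule-files:
  # - suricata.rules
  - local.rules
. . .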

      Once you are satisfied with the signatures that you have created or included using the suricata-update tool, you can proceed to the next step, where you’ll switch the default action for your signatures from alert or log to actively dropping traffic.

      Step 2 — Configuring Signature Actions

      Now that you have your custom signatures tested and working with Suricata, you can change the action to drop or reject. When Suricata is operating in IPS mode, these actions will actively block invalid traffic for any matching signature.

      These two actions are described in the previous tutorial in this series, Understanding Suricata Signatures. The choice of which action to use is up to you. A drop action will immediately discard a packet and any subsequent packets that belong to the network flow. A reject action will send both the client and server a reset packet if the traffic is TCP-based, and an ICMP error packet for any other protocol.
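For illustration, the reject action uses exactly the same rule syntax; a reject variant of the SSH signature from Step 1 would only differ in its action keyword. This sketch reuses the same placeholder address and sid, so it is not meant to be loaded alongside the alert or drop versions:

reject ssh any any -> 203.0.113.5 !22 (msg:"SSH TRAFFIC on non-SSH port"; flow:to_client, not_established; classtype: misc-attack; target: dest_ip; sid:1000000;)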

      Let’s use the custom rules from the previous section and convert them to use the drop action, since the traffic that they match is likely to be a network scan, or some other invalid connection.

      Open your /etc/suricata/rules/local.rules file using nano or your preferred editor and change the alert action at the beginning of each line in the file to drop:

      /etc/suricata/rules/local.rules

drop ssh any any -> 203.0.113.5 !22 (msg:"SSH TRAFFIC on non-SSH port"; classtype: misc-attack; target: dest_ip; sid:1000000;)
drop ssh any any -> 2604:a880:cad:d0::dc8:4001/64 !22 (msg:"SSH TRAFFIC on non-SSH port"; classtype: misc-attack; target: dest_ip; sid:1000001;)
      . . .
      

      Repeat the step above for any signatures in /etc/suricata/rules/suricata.rules that you would like to convert to drop or reject mode.

      Note: If you ran suricata-update in the prerequisite tutorial, you may have more than 30,000 signatures included in your suricata.rules file.

      If you convert every signature to drop or reject you risk blocking legitimate access to your network or servers. Instead, leave the rules in suricata.rules for the time being, and add your custom signatures to local.rules. Suricata will continue to generate alerts for suspicious traffic that is described by the signatures in suricata.rules while it is running in IPS mode.

      After you have a few days or weeks of alerts collected, you can analyze them and choose the relevant signatures to convert to drop or reject based on their sid.
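When you review the collected alerts, a quick way to see which signature IDs fire most often is to aggregate the eve.json log with jq. This helper command is an optional addition, assuming the default log location:

• jq -r 'select(.event_type=="alert") | "\(.alert.signature_id) \(.alert.signature)"' /var/log/suricata/eve.json | sort | uniq -c | sort -rn | head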

      Once you have all the signatures configured with the action that you would like them to take, the next step is to reconfigure and then restart Suricata in IPS mode.

      Step 3 — Enabling nfqueue Mode

      Suricata runs in IDS mode by default, which means it will not actively block network traffic. To switch to IPS mode, you’ll need to modify Suricata’s default settings.

      Use the systemctl edit command to create a new systemd override file:

      • sudo systemctl edit suricata.service

      Add the following highlighted lines at the start of the file, in between the comments:

      systemctl edit suricata.service

      ### Editing /etc/systemd/system/suricata.service.d/override.conf
      ### Anything between here and the comment below will become the new contents of the file
      
      [Service]
      ExecStart=
      ExecStart=/usr/bin/suricata -c /etc/suricata/suricata.yaml --pidfile /run/suricata.pid -q 0 -vvv
      Type=simple
      
      ### Lines below this comment will be discarded
      . . .
      
      • The ExecStart= line clears the default systemd command that starts a service. The next line defines the new ExecStart command to use.
      • The Type=simple line ensures that systemd can manage the Suricata process when it is running in IPS mode.

      Save and close the file. If you are using nano, you can do so with CTRL+X, then Y and ENTER to confirm. If you are using vi, press ESC and then :x then ENTER to save and exit.

      Now reload systemd so that it detects the new Suricata settings:

      • sudo systemctl daemon-reload

      Now you can restart Suricata using systemctl:

      • sudo systemctl restart suricata.service

      Check Suricata’s status using systemctl:

      • sudo systemctl status suricata.service

      You should receive output like the following:

      Output

● suricata.service - Suricata IDS/IDP daemon
     Loaded: loaded (/lib/systemd/system/suricata.service; enabled; vendor preset: enabled)
    Drop-In: /etc/systemd/system/suricata.service.d
             └─override.conf
     Active: active (running) since Wed 2021-12-15 14:35:21 UTC; 38s ago
       Docs: man:suricata(8)
             man:suricatasc(8)
             https://suricata-ids.org/docs/
   Main PID: 29890 (Suricata-Main)
      Tasks: 10 (limit: 2340)
     Memory: 54.9M
        CPU: 3.957s
     CGroup: /system.slice/suricata.service
             └─29890 /usr/bin/suricata -c /etc/suricata/suricata.yaml --pidfile /run/suricata.pid -q 0 -vvv

. . .

Dec 15 14:35:21 suricata suricata[29890]: 15/12/2021 -- 14:35:21 - <Notice> - all 4 packet processing threads, 4 management threads initialized, engine started

      Note the highlighted active (running) line that indicates Suricata restarted successfully.

      With this change you are now ready to send traffic to Suricata using the UFW firewall in the next step.

      Step 4 — Configuring UFW To Send Traffic to Suricata

Now that you have configured Suricata to process traffic in IPS mode, the next step is to direct incoming packets to Suricata. If you followed the prerequisite tutorials for this series on your Debian 11 system, you should have the Uncomplicated Firewall (UFW) installed and enabled.

To add the required rules for Suricata to UFW, you will need to edit the /etc/ufw/before.rules and /etc/ufw/before6.rules firewall files directly.

      Open the first file using nano or your preferred editor:

      • sudo nano /etc/ufw/before.rules

      Near the beginning of the file, insert the following highlighted lines:

      /etc/ufw/before.rules

      . . .
      # Don't delete these required lines, otherwise there will be errors
      *filter
      :ufw-before-input - [0:0]
      :ufw-before-output - [0:0]
      :ufw-before-forward - [0:0]
      :ufw-not-local - [0:0]
      # End required lines
      
      ## Start Suricata NFQUEUE rules
      -I INPUT 1 -p tcp --dport 22 -j NFQUEUE --queue-bypass
      -I OUTPUT 1 -p tcp --sport 22 -j NFQUEUE --queue-bypass
      -I FORWARD -j NFQUEUE
      -I INPUT 2 -j NFQUEUE
      -I OUTPUT 2 -j NFQUEUE
      ## End Suricata NFQUEUE rules
      
      # allow all on loopback
      -A ufw-before-input -i lo -j ACCEPT
      -A ufw-before-output -o lo -j ACCEPT
      . . .
      

      Save and exit the file when you are done editing it. Now add the same lines to the same section in the /etc/ufw/before6.rules file.

      The first two INPUT and OUTPUT rules are used to bypass Suricata so that you can connect to your server using SSH, even when Suricata is not running. Without these rules, an incorrect or overly broad signature could block your SSH access. Additionally, if Suricata is stopped, all traffic will be sent to the NFQUEUE target and then dropped since Suricata is not running.

      The next FORWARD rule ensures that if your server is acting as a gateway for other systems, all that traffic will also go to Suricata for processing.

      The final two INPUT and OUTPUT rules send all remaining traffic that is not SSH traffic to Suricata for processing.

      Restart UFW to load the new rules:

      • sudo systemctl restart ufw.service

      Note: If you are using another firewall you will need to modify these rules to match the format your firewall expects.

      If you are using iptables, then you can insert these rules directly using the iptables and ip6tables commands. However, you will need to ensure that the rules are persistent across reboots with a tool like iptables-persistent.

      If you are using firewalld, then the following rules will direct traffic to Suricata:

      • sudo firewall-cmd --permanent --direct --add-rule ipv4 filter INPUT 0 -p tcp --dport 22 -j NFQUEUE --queue-bypass
      • sudo firewall-cmd --permanent --direct --add-rule ipv4 filter INPUT 1 -j NFQUEUE
      • sudo firewall-cmd --permanent --direct --add-rule ipv6 filter INPUT 0 -p tcp --dport 22 -j NFQUEUE --queue-bypass
      • sudo firewall-cmd --permanent --direct --add-rule ipv6 filter INPUT 1 -j NFQUEUE
      • sudo firewall-cmd --permanent --direct --add-rule ipv4 filter FORWARD 0 -j NFQUEUE
      • sudo firewall-cmd --permanent --direct --add-rule ipv6 filter FORWARD 0 -j NFQUEUE
      • sudo firewall-cmd --permanent --direct --add-rule ipv4 filter OUTPUT 0 -p tcp --sport 22 -j NFQUEUE --queue-bypass
      • sudo firewall-cmd --permanent --direct --add-rule ipv4 filter OUTPUT 1 -j NFQUEUE
      • sudo firewall-cmd --permanent --direct --add-rule ipv6 filter OUTPUT 0 -p tcp --sport 22 -j NFQUEUE --queue-bypass
      • sudo firewall-cmd --permanent --direct --add-rule ipv6 filter OUTPUT 1 -j NFQUEUE
      • sudo firewall-cmd --reload

      At this point in the tutorial you have Suricata configured to run in IPS mode, and your network traffic is being sent to Suricata by default. You will be able to restart your server at any time and your Suricata and firewall rules will be persistent.
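To confirm that packets are actually being handed to Suricata, you can optionally list the NFQUEUE rules along with their packet counters; this extra check is not part of the original steps:

• sudo iptables -vnL INPUT | grep NFQUEUE

The packet and byte counters at the start of each matching line should increase as traffic arrives.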

      The last step in this tutorial is to verify Suricata is dropping traffic correctly.

      Step 5 — Testing Invalid Traffic

      Now that you have Suricata and your firewall configured to process network traffic, you can test whether Suricata will drop packets that match your custom and other included signatures.

      Recall signature sid:2100498 from the previous tutorial, which is modified in this example to drop matching packets:

      sid:2100498

      drop ip any any -> any any (msg:"GPL ATTACK_RESPONSE id check returned root"; content:"uid=0|28|root|29|"; classtype:bad-unknown; sid:2100498; rev:7; metadata:created_at 2010_09_23, updated_at 2010_09_23;)
      

      Find and edit the rule in your /etc/suricata/rules/suricata.rules file to use the drop action if you have the signature included there. Otherwise, add the rule to your /etc/suricata/rules/local.rules file.

      Send Suricata the SIGUSR2 signal to get it to reload its signatures:

      • sudo kill -usr2 $(pidof suricata)

      Now test the rule using curl:

      • curl --max-time 5 http://testmynids.org/uid/index.html

      You should receive an error stating that the request timed out, which indicates Suricata blocked the HTTP response:

      Output

      curl: (28) Operation timed out after 5000 milliseconds with 0 out of 39 bytes received

You can confirm that Suricata dropped the HTTP response using jq to examine the eve.json file:

      • jq 'select(.alert .signature_id==2100498)' /var/log/suricata/eve.json

      You should receive output like the following:

      Output

{
  . . .
  "community_id": "1:Z+RcUB32putNzIZ38V/kEzZbWmQ=",
  "alert": {
    "action": "blocked",
    "gid": 1,
    "signature_id": 2100498,
    "rev": 7,
    "signature": "GPL ATTACK_RESPONSE id check returned root",
    "category": "Potentially Bad Traffic",
    "severity": 2,
    "metadata": {
      "created_at": [
        "2010_09_23"
      ],
      "updated_at": [
        "2010_09_23"
      ]
    }
  },
  "http": {
    "hostname": "testmynids.org",
    "url": "/uid/index.html",
    "http_user_agent": "curl/7.68.0",
    "http_content_type": "text/html",
    "http_method": "GET",
    "protocol": "HTTP/1.1",
    "status": 200,
    "length": 39
  },
  . . .

      The highlighted "action": "blocked" line confirms that the signature matched, and Suricata dropped or rejected the test HTTP request.
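The same jq filter works for the custom signatures from Step 1. For example, to look for events that matched the SSH rule (sid 1000000 in this guide):

• jq 'select(.alert.signature_id==1000000)' /var/log/suricata/eve.json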

      Conclusion

      In this tutorial you configured Suricata to block suspicious network traffic using its built-in IPS mode. You also added custom signatures to examine and block SSH, HTTP, and TLS traffic on non-standard ports. To tie everything together, you also added firewall rules that direct traffic through Suricata for processing.

      Now that you have Suricata installed and configured in IPS mode, and can write your own signatures that either alert on or drop suspicious traffic, you can continue monitoring your servers and networks, and refining your signatures.

      Once you are satisfied with your Suricata signatures and configuration, you can continue with the last tutorial in this series, which will guide you through sending logs from Suricata to a Security and Information Event Management (SIEM) system built using the Elastic Stack.


