
      How To Install and Use the Visual Studio Code (VS Code) Command Line Interface


      Introduction

Visual Studio Code is a free, open-source, and cross-platform text editor developed primarily by Microsoft. It uses web technologies such as JavaScript and CSS, which has helped foster a large ecosystem of community-created plugins that extend its functionality to support many different programming languages and features.

      In this tutorial, you’ll install the Visual Studio Code command line interface and learn how to use it to open files and directories, compare changes between files, and install extensions.

      Prerequisites

      To complete this tutorial, you’ll need to have Visual Studio Code installed. Please refer to the official Setting up Visual Studio Code documentation to find out how to install Code for your platform.

      Installing the Visual Studio Code Command Line Interface

You may need to install the Visual Studio Code command line interface before using it. To do so, first launch the normal Visual Studio Code graphical interface. If this is your first time opening the app, the default screen will have an icon bar along the left and a default Welcome tab:

A screenshot of the default Visual Studio Code welcome screen, with an icon bar along the left and a Welcome tab

      Visual Studio Code provides a built-in command to install its command line interface. Bring up Code’s Command Palette by typing Command+Shift+P on Mac, or Control+Shift+P on Windows and Linux:

      A screenshot of the Visual Studio Code interface with the Command Palette activated, waiting for input to be entered after its '>' prompt

      This will open a prompt near the top of your Code window. Type shell command into the prompt. It should autocomplete to the correct command which will read Shell Command: Install 'code' command in PATH:

      A screenshot of the Visual Studio Code interface, with the Command Palette activated and the "Install 'code' command in PATH" command highlighted

      Press ENTER to run the highlighted command. You may be prompted to enter your administrator credentials to finish the installation process.

      You now have the code command line command installed.

      Verify that the install was successful by running code with the --version flag:
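
• code --version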

      Output

      1.62.1 f4af3cbf5a99787542e2a30fe1fd37cd644cc31f x64

      If your output includes a version string, you’ve successfully installed the Visual Studio Code command line interface. The next few sections will show you a few ways to use it.

      Opening Files with the code Command

      Running the code command with one or more filenames will open those files in the Visual Studio Code GUI:
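
• code file1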

      This will open the file1 file in Code.
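
• code *.md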

      This will open all markdown (.md) files in the current directory in Code.

      By default, the files will be opened in an existing Code window if one is available. Use the --new-window flag to force Visual Studio Code to open a new window for the specified files.

      Opening a Directory with the code Command

      Use the code command followed by one or more directory names to open the directories in a new Visual Studio Code window:

      • code directory1 directory2

      Code will open a new window for the directories. Use the --reuse-window flag to tell Code to reuse the existing frontmost window instead.

      Opening a .code-workspace Workspace File with the code Command

Opening a workspace file with the code command works similarly to opening directories:

      • code example.code-workspace

      This will open the example workspace in a new window, unless you reuse an existing window by adding the --reuse-window flag.

      Installing an Extension Using the code Command

      You can install Visual Studio Code extensions using the code command line tool as well. To do so, you’ll first need to know the extension’s unique identifier. To find this information, first navigate to the extension’s page on the Visual Studio Marketplace.

      For instance, here is the page for the Jupyter Notebook extension:

      https://marketplace.visualstudio.com/items?itemName=ms-toolsai.jupyter

      Notice the itemName parameter in the address. This parameter’s value, ms-toolsai.jupyter, is this extension’s unique identifier.

      You can also find this information on the Marketplace page itself, towards the bottom of the right-hand column in the More info section:

A screenshot of the Jupyter extension's page on the Visual Studio Marketplace, highlighting the 'Unique Identifier ms-toolsai.jupyter' unique id information in the page's right-hand column

      Once you have this unique id, you can use it with code --install-extension to install the extension:

      • code --install-extension ms-toolsai.jupyter

      Output

Installing extension 'ms-toolsai.jupyter'...
Extension 'ms-toolsai.jupyter' v2021.11.1001489384 was successfully installed.

Use the same id with the --uninstall-extension flag to uninstall the extension:
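
• code --uninstall-extension ms-toolsai.jupyter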

      Showing the Differences Between Two Files Using the code Command

      To show a standard split-screen diff that will highlight the additions, deletions, and changes between two files, use the --diff flag:
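
• code --diff file1 file2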

      A screenshot of the Visual Studio Code diff interface, with two files side by side, and the second line highlighted, showing a few words have changed between the two versions

      Similar to opening files, this will reuse the frontmost window by default, if one exists. To force a new window to open, use the --new-window flag.

      Piping stdin Into Visual Studio Code Using the code Command

      An important feature of most command line shells is the ability to pipe (or send) the output of one command to the input of the next. In the following command line, notice the | pipe character connecting the ls ~ command to code -:
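
• ls ~ | code -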

This will execute the ls command on the ~ directory, which is a shortcut for the current user’s home directory. The output from ls will be a list of files and directories in your home directory. That output will be sent to the code command, where the single - indicates that it should read the piped-in text instead of a file.

      code will output some information about the temporary file that it has created to hold the input:

      Output

      Reading from stdin via: /var/folders/dw/ncv0fr3x0xg7tg0c_cvfynvh0000gn/T/code-stdin-jfa

      Then this file will open up in the Code GUI interface:

      A screenshot of Visual Studio Code with a text file open, displaying the text piped in from the ls command. The text is standard directories such as Desktop and Documents, along with file1 and file2 used in the previous section

      This command will continue to wait indefinitely for more input. Press CTRL+C to have code stop listening and return you to your shell.

      Add the --new-window flag to force Code to open a new window for the input.

      Conclusion

      In this tutorial you installed Visual Studio Code’s code command line tool, and used it to open files and directories, compare files, and install extensions.

To learn more about the code command, you can run it with the --help flag:
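
• code --help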

      You can also refer to the official Visual Studio Code command line documentation or take a look at our VS Code tag page for more Visual Studio Code tutorials, tech talks, and Q&A.




      How To Install Suricata on Rocky Linux 8




      Introduction

Suricata is a Network Security Monitoring (NSM) tool that uses sets of community-created and user-defined signatures (also referred to as rules) to examine and process network traffic. Suricata can generate log events, trigger alerts, and drop traffic when it detects suspicious packets or requests to any number of different services running on a server.

      By default Suricata works as a passive Intrusion Detection System (IDS) to scan for suspicious traffic on a server or network. It will generate and log alerts for further investigation. It can also be configured as an active Intrusion Prevention System (IPS) to log, alert, and completely block network traffic that matches specific rules.

      You can deploy Suricata on a gateway host in a network to scan all incoming and outgoing network traffic from other systems, or you can run it locally on individual machines in either mode.

      In this tutorial you will learn how to install Suricata, and how to customize some of its default settings on Rocky Linux 8 to suit your needs. You will also learn how to download existing sets of signatures (usually referred to as rulesets) that Suricata uses to scan network traffic. Finally you’ll learn how to test whether Suricata is working correctly when it detects suspicious requests and data in a response.

      Prerequisites

      Depending on your network configuration and how you intend to use Suricata, you may need more or less CPU and RAM for your server. Generally, the more traffic you plan to inspect the more resources you should allocate to Suricata. In a production environment plan to use at least 2 CPUs and 4 or 8GB of RAM to start with. From there you can scale up resources according to Suricata’s performance and the amount of traffic that you need to process.

      If you plan to use Suricata to protect the server that it is running on, you will need:

      Otherwise, if you plan to use Suricata on a gateway host to monitor and protect multiple servers, you will need to ensure that the host’s networking is configured correctly.

      If you are using DigitalOcean you can follow this guide on How to Configure a Droplet as a VPC Gateway. Those instructions should work for most CentOS, Fedora, and other RedHat derived servers as well.

      Step 1 — Installing Suricata

      To get started installing Suricata, you will need to add the Open Information Security Foundation’s (OISF) software repository information to your Rocky Linux system. You can use the dnf copr enable command to do this. You will also need to add the Extra Packages for Enterprise Linux (EPEL) repository.

      To enable the Community Projects (copr) subcommand for the dnf package tool, run the following:

      • sudo dnf install 'dnf-command(copr)'

      You will be prompted to install some additional dependencies, as well as accept the GPG key for the Rocky Linux distribution. Press y and ENTER each time to finish installing the copr package.

      Next run the following command to add the OISF repository to your system and update the list of available packages:

      • sudo dnf copr enable @oisf/suricata-6.0

      Press y and ENTER when you are prompted to confirm that you want to add the repository.

      Now add the epel-release package, which will make some extra dependency packages available for Suricata:

      • sudo dnf install epel-release

      When you are prompted to import the GPG key, press y and ENTER to accept.

      Now that you have the required software repositories enabled, you can install the suricata package using the dnf command:

      • sudo dnf install suricata

      When you are prompted to add the GPG key for the OISF repository, press y and ENTER. The package and its dependencies will now be downloaded and installed.

      Next, enable the suricata.service so that it will run when your system restarts. Use the systemctl command to enable it:

      • sudo systemctl enable suricata.service

      You should receive output like the following indicating the service is enabled:

      Output

      Created symlink /etc/systemd/system/multi-user.target.wants/suricata.service → /usr/lib/systemd/system/suricata.service.

      Before moving on to the next section of this tutorial, which explains how to configure Suricata, stop the service using systemctl:

      • sudo systemctl stop suricata.service

      Stopping Suricata ensures that when you edit and test the configuration file, any changes that you make will be validated and loaded when Suricata starts up again.

      Step 2 — Configuring Suricata For The First Time

      The Suricata package from the OISF repositories ships with a configuration file that covers a wide variety of use cases. The default mode for Suricata is IDS mode, so no traffic will be dropped, only logged. Leaving this mode set to the default is a good idea as you learn Suricata. Once you have Suricata configured and integrated into your environment, and have a good idea of the kinds of traffic that it will alert you about, you can opt to turn on IPS mode.

      However, the default configuration still has a few settings that you may need to change depending on your environment and needs.

      Suricata can include a Community ID field in its JSON output to make it easier to match individual event records to records in datasets generated by other tools.

      If you plan to use Suricata with other tools like Zeek or Elasticsearch, adding the Community ID now is a good idea.

      To enable the option, open /etc/suricata/suricata.yaml using vi or your preferred editor:

      • sudo vi /etc/suricata/suricata.yaml

      Find line 120 which reads # Community Flow ID. If you are using vi type 120gg to go directly to the line. Below that line is the community-id key. Set it to true to enable the setting:

      /etc/suricata/suricata.yaml

      . . .
            # Community Flow ID
            # Adds a 'community_id' field to EVE records. These are meant to give
            # records a predictable flow ID that can be used to match records to
            # output of other tools such as Zeek (Bro).
            #
            # Takes a 'seed' that needs to be same across sensors and tools
            # to make the id less predictable.
      
            # enable/disable the community id feature.
            community-id: true
      . . .
      

Now when you examine events, they will have an ID like 1:S+3BA2UmrHK0Pk+u3XH78GAFTtQ= that you can use to correlate records across different NSM tools.

      Save and close the /etc/suricata/suricata.yaml file. If you are using vi, you can do so with ESC and then :x then ENTER to save and exit the file.

      Determining Which Network Interface(s) To Use

      You may need to override the default network interface or interfaces that you would like Suricata to inspect traffic on. The configuration file that comes with the OISF Suricata package defaults to inspecting traffic on a device called eth0. If your system uses a different default network interface, or if you would like to inspect traffic on more than one interface, then you will need to change this value.

      To determine the device name of your default network interface, you can use the ip command as follows:

      • ip -p -j route show default

      The -p flag formats the output to be more readable, and the -j flag prints the output as JSON.

      You should receive output like the following:

      Output

      [ { "dst": "default", "gateway": "203.0.113.254", "dev": "eth0", "protocol": "static", "metric": 100, "flags": [ ] } ]

      The dev line indicates the default device. In this example output, the device is the highlighted eth0 interface. Your output may show a device name like ens... or eno.... Whatever the name is, make a note of it.

      Now you can edit Suricata’s configuration and verify or change the interface name. Open the /etc/suricata/suricata.yaml configuration file using vi or your preferred editor:

      • sudo vi /etc/suricata/suricata.yaml

      Scroll through the file until you come to a line that reads af-packet: around line 580. If you are using vi you can also go to the line directly by entering 580gg. Below that line is the default interface that Suricata will use to inspect traffic. Edit the line to match your interface like the highlighted example that follows:

/etc/suricata/suricata.yaml

      # Linux high speed capture support
      af-packet:
        - interface: eth0
          # Number of receive threads. "auto" uses the number of cores
          #threads: auto
          # Default clusterid. AF_PACKET will load balance packets based on flow.
          cluster-id: 99
      . . .
      

      If you want to inspect traffic on additional interfaces, you can add more - interface: eth... YAML objects. For example, to add a device named enp0s1, scroll down to the bottom of the af-packet section to around line 650. To add a new interface, insert it before the - interface: default section like the following highlighted example:

/etc/suricata/suricata.yaml

          #  For eBPF and XDP setup including bypass, filter and load balancing, please
          #  see doc/userguide/capture-hardware/ebpf-xdp.rst for more info.
      
        - interface: enp0s1
          cluster-id: 98
      
        - interface: default
          #threads: auto
          #use-mmap: no
          #tpacket-v3: yes
      

      Be sure to choose a unique cluster-id value for each - interface object.

      Keep your editor open and proceed to the next section where you will configure live rule reloading. If you do not want to enable that setting then you can save and close the /etc/suricata/suricata.yaml file. If you are using vi, you can do so with ESC, then :x and ENTER to save and quit.

      Configuring Live Rule Reloading

      Suricata supports live rule reloading, which means you can add, remove, and edit rules without needing to restart the running Suricata process. To enable the live reload option, scroll to the bottom of the configuration file and add the following lines:

      /etc/suricata/suricata.yaml

      . . .
      
      detect-engine:
        - rule-reload: true
      

      With this setting in place, you will be able to send the SIGUSR2 system signal to the running process, and Suricata will reload any changed rules into memory.

      A command like the following will notify the Suricata process to reload its rulesets, without restarting the process:

      • sudo kill -usr2 $(pidof suricata)

      The $(pidof suricata) portion of the command invokes a subshell, and finds the process ID of the running Suricata daemon. The beginning sudo kill -usr2 part of the command uses the kill utility to send the SIGUSR2 signal to the process ID that is reported back by the subshell.

      You can use this command any time you run suricata-update or when you add or edit your own custom rules.

      Save and close the /etc/suricata/suricata.yaml file. If you are using vi, you can do so with ESC, then :x and ENTER to confirm.

      Step 3 — Updating Suricata Rulesets

      At this point in the tutorial, if you were to start Suricata, you would receive a warning message like the following in the logs that there are no loaded rules:

      Output

      <Warning> - [ERRCODE: SC_ERR_NO_RULES(42)] - No rule files match the pattern /var/lib/suricata/rules/suricata.rules

      By default the Suricata package includes a limited set of detection rules (in the /etc/suricata/rules directory), so turning Suricata on at this point would only detect a limited amount of bad traffic.

Suricata includes a tool called suricata-update that can fetch rulesets from external providers. Run it as follows to download an up-to-date ruleset for your Suricata server:
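
• sudo suricata-update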

      You should receive output like the following:

      Output

19/10/2021 -- 19:31:03 - <Info> -- Using data-directory /var/lib/suricata.
19/10/2021 -- 19:31:03 - <Info> -- Using Suricata configuration /etc/suricata/suricata.yaml
19/10/2021 -- 19:31:03 - <Info> -- Using /usr/share/suricata/rules for Suricata provided rules.
. . .
19/10/2021 -- 19:31:03 - <Info> -- No sources configured, will use Emerging Threats Open
19/10/2021 -- 19:31:03 - <Info> -- Fetching https://rules.emergingthreats.net/open/suricata-6.0.3/emerging.rules.tar.gz.
 100% - 3062850/3062850
. . .
19/10/2021 -- 19:31:06 - <Info> -- Writing rules to /var/lib/suricata/rules/suricata.rules: total: 31011; enabled: 23649; added: 31011; removed 0; modified: 0
19/10/2021 -- 19:31:07 - <Info> -- Writing /var/lib/suricata/rules/classification.config
19/10/2021 -- 19:31:07 - <Info> -- Testing with suricata -T.
19/10/2021 -- 19:31:32 - <Info> -- Done.

The highlighted lines indicate that suricata-update has fetched the free Emerging Threats ET Open Rules and saved them to Suricata’s /var/lib/suricata/rules/suricata.rules file. It also indicates the number of rules that were processed: in this example, 31011 were added and 23649 of those were enabled.

      Adding Ruleset Providers

      The suricata-update tool can fetch rules from a variety of free and commercial ruleset providers. Some rulesets like the ET Open set that you already added are available for free, while others require a paid subscription.

You can list the default set of rule providers using the list-sources subcommand of suricata-update like this:

      • sudo suricata-update list-sources

      You will receive a list of sources like the following:

      Output

. . .
19/10/2021 -- 19:27:34 - <Info> -- Adding all sources
19/10/2021 -- 19:27:34 - <Info> -- Saved /var/lib/suricata/update/cache/index.yaml
Name: et/open
  Vendor: Proofpoint
  Summary: Emerging Threats Open Ruleset
  License: MIT
. . .

      For example, if you wanted to include the tgreen/hunting ruleset, you could enable it using the following command:

      • sudo suricata-update enable-source tgreen/hunting

      Then run suricata-update again and the new set of rules will be added, in addition to the existing ET Open rules and any others that you have downloaded.

      Step 4 — Validating Suricata’s Configuration

Now that you have edited Suricata’s configuration file to include the optional Community ID, specified the default network interface, and enabled live rule reloading, it is a good idea to test the configuration.

      Suricata has a built-in test mode that will check the configuration file and any included rules for validity. Validate your changes from the previous section using the -T flag to run Suricata in test mode. The -v flag will print some additional information, and the -c flag tells Suricata where to find its configuration file:

      • sudo suricata -T -c /etc/suricata/suricata.yaml -v

      The test can take some time depending on the amount of CPU you have allocated to Suricata and the number of rules that you have added, so be prepared to wait for a minute or two for it to complete.

      With the default ET Open ruleset you should receive output like the following:

      Output

21/10/2021 -- 15:00:40 - <Info> - Running suricata under test mode
21/10/2021 -- 15:00:40 - <Notice> - This is Suricata version 6.0.3 RELEASE running in SYSTEM mode
21/10/2021 -- 15:00:40 - <Info> - CPUs/cores online: 2
21/10/2021 -- 15:00:40 - <Info> - fast output device (regular) initialized: fast.log
21/10/2021 -- 15:00:40 - <Info> - eve-log output device (regular) initialized: eve.json
21/10/2021 -- 15:00:40 - <Info> - stats output device (regular) initialized: stats.log
21/10/2021 -- 15:00:46 - <Info> - 1 rule files processed. 23879 rules successfully loaded, 0 rules failed
21/10/2021 -- 15:00:46 - <Info> - Threshold config parsed: 0 rule(s) found
21/10/2021 -- 15:00:47 - <Info> - 23882 signatures processed. 1183 are IP-only rules, 4043 are inspecting packet payload, 18453 inspect application layer, 107 are decoder event only
21/10/2021 -- 15:01:13 - <Notice> - Configuration provided was successfully loaded. Exiting.
21/10/2021 -- 15:01:13 - <Info> - cleaning up signature grouping structure... complete

      If there is an error in your configuration file, then the test mode will generate a specific error code and message that you can use to help troubleshoot. For example, including a rules file that does not exist called test.rules would generate an error like the following:

      Output

21/10/2021 -- 15:10:15 - <Info> - Running suricata under test mode
21/10/2021 -- 15:10:15 - <Notice> - This is Suricata version 6.0.3 RELEASE running in SYSTEM mode
21/10/2021 -- 15:10:15 - <Info> - CPUs/cores online: 2
21/10/2021 -- 15:10:15 - <Info> - eve-log output device (regular) initialized: eve.json
21/10/2021 -- 15:10:15 - <Info> - stats output device (regular) initialized: stats.log
21/10/2021 -- 15:10:21 - <Warning> - [ERRCODE: SC_ERR_NO_RULES(42)] - No rule files match the pattern /var/lib/suricata/rules/test.rules

      With that error you could then edit your configuration file to include the correct path, or fix invalid variables and configuration options.

      Once your Suricata test mode run completes successfully you can move to the next step, which is starting Suricata in daemon mode.

      Step 5 — Running Suricata

      Now that you have a valid Suricata configuration and ruleset, you can start the Suricata server. Run the following systemctl command:

      • sudo systemctl start suricata.service

      You can examine the status of the service using the systemctl status command:

      • sudo systemctl status suricata.service

      You should receive output like the following:

      Output

● suricata.service - Suricata Intrusion Detection Service
   Loaded: loaded (/usr/lib/systemd/system/suricata.service; enabled; vendor preset: disabled)
   Active: active (running) since Thu 2021-10-21 18:22:56 UTC; 1min 57s ago
     Docs: man:suricata(1)
  Process: 24588 ExecStartPre=/bin/rm -f /var/run/suricata.pid (code=exited, status=0/SUCCESS)
 Main PID: 24590 (Suricata-Main)
    Tasks: 1 (limit: 23473)
   Memory: 80.2M
   CGroup: /system.slice/suricata.service
           └─24590 /sbin/suricata -c /etc/suricata/suricata.yaml --pidfile /var/run/suricata.pid -i eth0 --user suricata

Oct 21 18:22:56 suricata systemd[1]: Starting Suricata Intrusion Detection Service..
Oct 21 18:22:56 suricata systemd[1]: Started Suricata Intrusion Detection Service.
. . .

      As with the test mode command, it will take Suricata a minute or two to load and parse all of the rules. You can use the tail command to watch for a specific message in Suricata’s logs that indicates it has finished starting:

      • sudo tail -f /var/log/suricata/suricata.log

      You will receive a number of lines of output, and the terminal may appear to be stuck while Suricata loads. Continue waiting for output until you receive a line like the following:

      Output

      19/10/2021 -- 19:22:39 - <Info> - All AFP capture threads are running.

      This line indicates Suricata is running and ready to inspect traffic. You can exit the tail command using CTRL+C.

      Now that you have verified that Suricata is running, the next step in this tutorial is to check whether Suricata detects a request to a test URL that is designed to generate an alert.

      Step 6 — Testing Suricata Rules

The ET Open ruleset that you downloaded contains over 30,000 rules. A full explanation of how Suricata rules work and how to construct them is beyond the scope of this introductory tutorial. A subsequent tutorial in this series will explain how rules work and how to build your own.

      For the purposes of this tutorial, testing whether Suricata is detecting suspicious traffic with the configuration that you generated is sufficient. The Suricata Quickstart recommends testing the ET Open rule with number 2100498 using the curl command.

      Run the following to generate an HTTP request, which will return a response that matches Suricata’s alert rule:

      • curl http://testmynids.org/uid/index.html

      The curl command will output a response like the following:

      Output

      uid=0(root) gid=0(root) groups=0(root)

      This example response data is designed to trigger an alert, by pretending to return the output of a command like id that might run on a compromised remote system via a web shell.

Now you can check Suricata’s logs for a corresponding alert. There are two logs that are enabled with the default Suricata configuration. The first is /var/log/suricata/fast.log and the second is a machine-readable log in /var/log/suricata/eve.json.

      Examining /var/log/suricata/fast.log

      To check for a log entry in /var/log/suricata/fast.log that corresponds to your curl request use the grep command. Using the 2100498 rule identifier from the Quickstart documentation, search for entries that match it using the following command:

      • grep 2100498 /var/log/suricata/fast.log

      If your request used IPv6, then you should receive output like the following, where 2001:DB8::1 is your system’s public IPv6 address:

Output
      10/21/2021-18:35:54.950106  [**] [1:2100498:7] GPL ATTACK_RESPONSE id check returned root [**] [Classification: Potentially Bad Traffic] [Priority: 2] {TCP} 2600:9000:2000:4400:0018:30b3:e400:93a1:80 -> 2001:DB8::1:34628
      

      If your request used IPv4, then your log should have a message like this, where 203.0.113.1 is your system’s public IPv4 address:

Output
      10/21/2021-18:35:57.247239  [**] [1:2100498:7] GPL ATTACK_RESPONSE id check returned root [**] [Classification: Potentially Bad Traffic] [Priority: 2] {TCP} 204.246.178.81:80 -> 203.0.113.1:36364
      

      Note the highlighted 2100498 value in the output, which is the Signature ID (sid) that Suricata uses to identify a rule.

Examining /var/log/suricata/eve.json

Suricata also logs events to /var/log/suricata/eve.json (nicknamed the EVE log) using JSON to format entries.

      The Suricata documentation recommends using the jq utility to read and filter the entries in this file. Install jq if you do not have it on your system using the following dnf command:
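
• sudo dnf install jq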

      Once you have jq installed, you can filter the events in the EVE log by searching for the 2100498 signature with the following command:

      • jq 'select(.alert .signature_id==2100498)' /var/log/suricata/eve.json

      The command examines each JSON entry and prints any that have an alert object, with a signature_id key that matches the 2100498 value that you are searching for. The output will resemble the following:

      Output

      { "timestamp": "2021-10-21T19:42:47.368856+0000", "flow_id": 775889108832281, "in_iface": "eth0", "event_type": "alert", "src_ip": "203.0.113.1", "src_port": 80, "dest_ip": "147.182.148.159", "dest_port": 38920, "proto": "TCP", "community_id": "1:vuSfAFyy7oUq0LQC5+KNTBSuPxg=", "alert": { "action": "allowed", "gid": 1, "signature_id": 2100498, "rev": 7, "signature": "GPL ATTACK_RESPONSE id check returned root", "category": "Potentially Bad Traffic", . . . }

      Note the highlighted "signature_id": 2100498, line, which is the key that jq is searching for. Also note the highlighted "community_id": "1:vuSfAFyy7oUq0LQC5+KNTBSuPxg=", line in the JSON output. This key is the generated Community Flow Identifier that you enabled in Suricata’s configuration file.

Each alert will generate a unique Community Flow Identifier. Other NSM tools can also generate the same identifier to enable cross-referencing a Suricata alert with output from other tools.

      A matching log entry in either log file means that Suricata successfully inspected the network traffic, matched it against a detection rule, and generated an alert for subsequent analysis or logging. A future tutorial in this series will explore how to send Suricata alerts to a Security Incident Event Management (SIEM) system for further processing.

      Step 7 — Handling Suricata Alerts

      Once you have alerts set up and tested, you can choose how you want to handle them. For some use cases, logging alerts for auditing purposes may be sufficient; or you may prefer to take a more active approach to blocking traffic from systems that generate repeated alerts.

      If you would like to block traffic based on the alerts that Suricata generates, one approach is to use entries from the EVE log and then add firewall rules to restrict access to your system or systems. You can use the jq tool to extract specific fields from an alert, and then add UFW or IPtables rules to block requests.
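
For example, a quick sketch like the following (hypothetical, and assuming your host uses UFW) combines jq with a shell loop to pull the source IP address out of each EVE alert that matches the 2100498 test signature and add a deny rule for it:

• jq -r 'select(.alert .signature_id==2100498) | .src_ip' /var/log/suricata/eve.json | sort -u | while read -r ip; do sudo ufw deny from "$ip"; done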

      Again, this example is a hypothetical scenario using deliberately crafted request and response data. Your knowledge of the systems and protocols that your environment should be able to access is essential in order to determine which traffic is legitimate and which can be blocked.

      Conclusion

      In this tutorial you installed Suricata from the OISF software repositories. Installing Suricata this way ensures that you can receive updates whenever a new version of Suricata is released. After installing Suricata you edited the default configuration to add a Community Flow ID for use with other security tools. You also enabled live rule reloading, and downloaded an initial set of rules.

      Once you validated Suricata’s configuration, you started the process and generated some test HTTP traffic. You verified that Suricata could detect suspicious traffic by examining both of the default logs to make sure they contained an alert corresponding to the rule you were testing.

      For more information about Suricata, visit the official Suricata Site. For more details on any of the configuration options that you configured in this tutorial, refer to the Suricata User Guide.

      Now that you have Suricata installed and configured, you can continue to the next tutorial in this series (forthcoming) where you’ll explore how to write your own custom Suricata rules. You’ll learn about different ways to create alerts, or even how to drop traffic entirely, based on criteria like invalid TCP/IP packets, the contents of DNS queries, HTTP requests and responses, and even TLS handshakes.




      How To Install Fathom Analytics on Ubuntu 20.04


      Introduction

      Fathom Analytics is an open-source, self-hosted web analytics application that focuses on simplicity and privacy. It is written in Go and ships as a single binary file, making installation relatively straightforward.

      In this tutorial you will install and configure Fathom, then install Nginx to act as a reverse proxy for the Fathom app. Finally, you will enable secure HTTPS connections by using Certbot to download and configure SSL certificates from the Let’s Encrypt Certificate Authority.

      Prerequisites

      In order to complete this tutorial, you’ll first need the following:

• An Ubuntu 20.04 server, with the UFW firewall enabled and a non-root user with sudo privileges configured. Please read our Initial Server Setup with Ubuntu 20.04 to learn more about setting up these requirements.
      • A domain name pointed at your server’s public IP address. This should be something like example.com or fathom.example.com, for instance. If you’re using DigitalOcean, please see our DNS Quickstart for information on creating domain resources in our control panel.

      When you’ve satisfied all the prerequisites, proceed to Step 1, where you’ll download and install Fathom.

      Step 1 — Downloading Fathom

      To install the Fathom software, you’ll first download the latest release, then extract the executable file to the /usr/local/bin directory.

      First, move to a directory you can write to. The /tmp directory is a good choice:
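
• cd /tmp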

      In your web browser, visit the GitHub page for Fathom’s latest software release, then find the file with a name like fathom_1.2.1_linux_amd64.tar.gz. The version number may be different.

      Right-click on the link to the file, then select Copy Link (or similar, depending on your browser).

      Use the curl command to download the file from the link you just copied:

      • curl -L -O https://github.com/usefathom/fathom/releases/download/v1.2.1/fathom_1.2.1_linux_amd64.tar.gz

      You should now have a fathom_1.2.1_linux_amd64.tar.gz file in your /tmp directory. Use the tar command to extract the fathom executable and move it to /usr/local/bin:

      • sudo tar -C /usr/local/bin/ -xzf fathom*.tar.gz fathom

      The sudo command is necessary because /usr/local/bin is a protected directory, so you need superuser privileges to write to it.

      Now use sudo and chmod to update the permissions of the fathom binary:

      • sudo chmod +x /usr/local/bin/fathom

      This makes fathom executable. To test it out, run fathom --version:
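
• fathom --version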

      Output

      Fathom version 1.2.1, commit 8f7c6d2e45ebb28651208e2a7320e29948ecdb2c, built at 2018-11-30T09:21:37Z

      The command will print out Fathom’s version number and some additional details. You’ve successfully downloaded and installed the Fathom binary. Next you’ll configure and run Fathom for the first time.

      Step 2 — Configuring and Running Fathom

      Before configuring Fathom you’re going to create a new fathom user on your system. This new user account will be used to run the Fathom server, which will help isolate and secure the service.

      Make a new user named fathom with the adduser command:

      • sudo adduser --system --group --home /opt/fathom fathom

      This creates a special --system user, meaning it has no password and cannot log in like a normal user could. We also make a fathom group using the --group flag, and a home directory in /opt/fathom.

      Move to the fathom user’s home directory now:
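
• cd /opt/fathom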

      Now we have to execute a few commands that need to be run as the fathom user. To do this, open a bash shell as the fathom user using sudo:
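
• sudo -u fathom bash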

      Your prompt will change to something like fathom@host:~$. Until we exit this shell, every command we run will be run as the fathom user.

Now you’re ready to set up a configuration file for Fathom. One item we’ll need in this configuration file is a random string that Fathom will use for signing and encryption purposes. Use the openssl command to generate a random string now; for example, you can ask it for 32 random bytes, base64-encoded:
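
• openssl rand -base64 32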

      Output

      iKo/rYHFa2hDINjgCcIeeCe9pNglQreQrzrs+qK5tYg=

      Copy the string to your clipboard, or note it down on a temporary scratch document of some sort, then open a new .env file for the configuration:
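
• nano .env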

      This will open a new blank file in the nano text editor. Feel free to use your favorite editor instead.

      Paste the following into the file, making sure to update the random string to the one you generated previously:

      /opt/fathom/.env

      FATHOM_SERVER_ADDR="127.0.0.1:8080"
      FATHOM_DATABASE_DRIVER="sqlite3"
      FATHOM_DATABASE_NAME="fathom.db"
      FATHOM_SECRET="your_random_string_here"
      

This configuration first specifies that the server should only listen on localhost (127.0.0.1) port 8080, and that it should use an SQLite database file called fathom.db. The FATHOM_SECRET value is the random string you generated a moment ago, which Fathom uses for signing and encryption.

      Save and close the file. In nano you can press CTRL+O then ENTER to save, then press CTRL+X to exit.

      Now that the database is configured, we can add the first user to our Fathom instance:

      • fathom user add --email="your_email" --password="your_password"

      Since this is the first time you’re running fathom with the database configured, you should notice some initial database migrations happening:

      Output

INFO[0000] Fathom version 1.2.1, commit 8f7c6d2e45ebb28651208e2a7320e29948ecdb2c, built at 2018-11-30T09:21:37Z
INFO[0000] Configuration file: /opt/fathom/.env
INFO[0000] Connected to sqlite3 database: /opt/fathom/fathom.db
INFO[0000] Applied 26 database migrations!
INFO[0000] Created user sammy@example.com

      Your fathom.db database file is now created and the user is added.

      Start the Fathom server now to test it out:
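
• fathom server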

      Output

INFO[0000] Fathom version 1.2.1, commit 8f7c6d2e45ebb28651208e2a7320e29948ecdb2c, built at 2018-11-30T09:21:37Z
INFO[0000] Configuration file: /opt/fathom/.env
INFO[0000] Connected to sqlite3 database: /opt/fathom/fathom.db

      In a second terminal connected to your server, fetch the homepage of your Fathom instance using curl:
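
• curl http://localhost:8080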

      Output

<!DOCTYPE html>
<html class="no-js" lang="en">
<head>
    <title>Fathom - simple website analytics</title>
    <link href="https://www.digitalocean.com/community/tutorials/assets/css/styles.css?t=1543569696966" rel="stylesheet">
. . .

      You should see a few lines of HTML code printed to your screen. This shows that the server is up and responding to requests on localhost.

      Back in your first terminal, exit the fathom server process by pressing CTRL+C.

      You’re all done running commands as the fathom user, so exit that session as well:
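
• exit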

      Your shell prompt should return to normal.

      Fathom is now fully configured and you’ve successfully run it manually from the command line. Next we’ll set Fathom up to run as a Systemd service.

      Step 3 — Setting Up Fathom as a Systemd Service

To run the fathom server at all times, even when you’re not logged into the server, you’ll set it up as a service with Systemd. Systemd is a service manager that handles starting, stopping, and restarting services on Ubuntu and many other Linux distributions.

      The fathom.service file you create will contain all the configuration details that Systemd needs to properly run the server. Open the new file now:

      • sudo nano /etc/systemd/system/fathom.service

      Add the following into the file:

      /etc/systemd/system/fathom.service

      [Unit]
      Description=Fathom Analytics server
      Requires=network.target
      After=network.target
      
      [Service]
      Type=simple
      User=fathom
      Group=fathom
      Restart=always
      RestartSec=3
      WorkingDirectory=/opt/fathom
      ExecStart=/usr/local/bin/fathom server
      
      [Install]
      WantedBy=multi-user.target
      

      This file specifies when the service should be launched (After=network.target, meaning after the network is up), that it should be run as the fathom user and group, that Systemd should always try to restart the process if it exits (Restart=always), that it should be run from the /opt/fathom directory, and what command to use to run the server (ExecStart=/usr/local/bin/fathom server).

      Save and close the file. Reload the Systemd config:

      • sudo systemctl daemon-reload

      Enable the service:

      • sudo systemctl enable fathom.service

      Enabling the service means that Systemd will start it automatically during system startup. We’ll also need to start the service manually now, just this once:

      • sudo systemctl start fathom

      Note in the previous command that you can leave off the .service portion of the service name. Finally, check the status of the service to make sure it’s running:

      • sudo systemctl status fathom

      Output

● fathom.service - Fathom Analytics server
     Loaded: loaded (/etc/systemd/system/fathom.service; enabled; vendor preset: enabled)
     Active: active (running) since Wed 2021-11-03 15:32:45 UTC; 13s ago
   Main PID: 3748 (fathom)
      Tasks: 5 (limit: 1136)
     Memory: 10.3M
     CGroup: /system.slice/fathom.service
             └─3748 /usr/local/bin/fathom server

Nov 03 15:32:45 ubuntu-fathom systemd[1]: Started Fathom Analytics server.
Nov 03 15:32:46 ubuntu-fathom fathom[3748]: time="2021-11-03T15:32:46Z" level=info msg="Fathom version 1.2.1, commit 8f>
Nov 03 15:32:46 ubuntu-fathom fathom[3748]: time="2021-11-03T15:32:46Z" level=info msg="Configuration file: /opt/fathom>
Nov 03 15:32:46 ubuntu-fathom fathom[3748]: time="2021-11-03T15:32:46Z" level=info msg="Connected to sqlite3 database: >

      The service is up and running again on localhost port 8080. Next we’ll set up Nginx as a reverse proxy to expose the Fathom service to the outside world.

      Step 4 — Installing and Configuring Nginx

      Putting a web server such as Nginx in front of your application server can improve performance by offloading caching, compression, and static file serving to a more efficient process. We’re going to install Nginx and configure it to reverse proxy requests to Fathom, meaning it will take care of handing requests from your users to Fathom and back again.

      First, refresh your package list, then install Nginx using apt:

      • sudo apt update
      • sudo apt install nginx

      Allow public traffic to ports 80 and 443 (HTTP and HTTPS) using the “Nginx Full” UFW application profile:

      • sudo ufw allow "Nginx Full"

      Output

Rule added
Rule added (v6)

      Next, open up a new Nginx configuration file in the /etc/nginx/sites-available directory. We’ll call ours fathom.conf but you could use a different name:

      • sudo nano /etc/nginx/sites-available/fathom.conf

      Paste the following into the new configuration file, being sure to replace your_domain_here with the domain that you’ve configured to point to your Fathom server. This should be something like fathom.example.com, for instance:

      /etc/nginx/sites-available/fathom.conf

      server {
          listen       80;
          listen       [::]:80;
          server_name  your_domain_here;
      
          access_log  /var/log/nginx/fathom.access.log;
          error_log   /var/log/nginx/fathom.error.log;
      
          location / {
            proxy_pass http://localhost:8080;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $remote_addr;
            proxy_set_header Host $host;
        }
      }
      

      This configuration is HTTP-only for now. We’ll let Certbot take care of configuring SSL in the next step. The rest of the config sets up logging locations and then passes all traffic along to our Fathom server at http://localhost:8080, adding a few important proxy forwarding headers along the way.

      Save and close the file, then enable the configuration by linking it into /etc/nginx/sites-enabled/:

      • sudo ln -s /etc/nginx/sites-available/fathom.conf /etc/nginx/sites-enabled/

      Use nginx -t to verify that the configuration file syntax is correct:
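
• sudo nginx -t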

      Output

nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful

      And finally, reload the nginx service to pick up the new configuration:

      • sudo systemctl reload nginx

      Your Fathom site should now be available on plain HTTP. Load http://your_domain_here and it will look like this:

      A screenshot of the Fathom login page, with 'Email' and 'Password' textboxes

      Now that you have your site up and running over HTTP, it’s time to secure the connection with Certbot and Let’s Encrypt certificates.

      Step 5 — Installing Certbot and Setting Up SSL Certificates

      Thanks to Certbot and the Let’s Encrypt free certificate authority, adding SSL encryption to our Fathom app will take only two commands.

      First, install Certbot and its Nginx plugin:

      • sudo apt install certbot python3-certbot-nginx

      Next, run certbot in --nginx mode, and specify the same domain you used in the Nginx server_name config:

      • sudo certbot --nginx -d your_domain_here

      You’ll be prompted to agree to the Let’s Encrypt terms of service, and to enter an email address.

      Afterwards, you’ll be asked if you want to redirect all HTTP traffic to HTTPS. It’s up to you, but this is generally recommended and safe to do.

      After that, Let’s Encrypt will confirm your request and Certbot will download your certificate:

      Output

Congratulations! You have successfully enabled https://Fathom.example.com

You should test your configuration at:
https://www.ssllabs.com/ssltest/analyze.html?d=Fathom.example.com

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

IMPORTANT NOTES:
 - Congratulations! Your certificate and chain have been saved at:
   /etc/letsencrypt/live/Fathom.example.com/fullchain.pem
   Your key file has been saved at:
   /etc/letsencrypt/live/Fathom.example.com/privkey.pem
   Your cert will expire on 2021-12-06. To obtain a new or tweaked version of this certificate in the future, simply run certbot again with the "certonly" option. To non-interactively renew *all* of your certificates, run "certbot renew"
 - Your account credentials have been saved in your Certbot configuration directory at /etc/letsencrypt. You should make a secure backup of this folder now. This configuration directory will also contain certificates and private keys obtained by Certbot so making regular backups of this folder is ideal.
 - If you like Certbot, please consider supporting our work by:
   Donating to ISRG / Let's Encrypt: https://letsencrypt.org/donate
   Donating to EFF: https://eff.org/donate-le

      Certbot will automatically reload Nginx to pick up the new configuration and certificates. Reload your site and it should switch you over to HTTPS automatically if you chose the redirect option.

      Your site is now secure and it’s safe to log in with the user details you set up in Step 2.

      When you successfully log in, you’ll see a prompt to get your first website set up with Fathom:

      A screenshot of the Fathom initial setup workflow, asking for the domain of your website

      Once that is done you’ll see the (currently empty) dashboard for the site you just set up:

      A screenshot of the Fathom dashboard, showing no data yet

      You have successfully installed and secured your Fathom analytics software.

      Conclusion

      In this tutorial, you downloaded, installed, and configured the Fathom Analytics app, then set up an Nginx reverse proxy and secured it using Let’s Encrypt SSL certificates.

      You’re now ready to finish setting up your website by adding the Fathom Analytics tracking script to it. Please see the official Fathom Analytics documentation for further information on using the software and setting up your site.


