      How to Set Up DigitalOcean Kubernetes Cluster Monitoring with Helm and Prometheus Operator


      Introduction

      Along with tracing and logging, monitoring and alerting are essential components of a Kubernetes observability stack. Setting up monitoring for your Kubernetes cluster allows you to track your resource usage and analyze and debug application errors.

      A monitoring system usually consists of a time-series database that houses metric data and a visualization layer. In addition, an alerting layer creates and manages alerts, handing them off to integrations and external services as necessary. Finally, one or more components generate or expose the metric data that will be stored, visualized, and processed for alerts by this monitoring stack.

      One popular monitoring solution is the open-source Prometheus, Grafana, and Alertmanager stack:

      • Prometheus is a time series database and monitoring tool that works by polling metrics endpoints and scraping and processing the data exposed by these endpoints. It allows you to query this data using PromQL, a time series data query language.
      • Grafana is a data visualization and analytics tool that allows you to build dashboards and graphs for your metrics data.
      • Alertmanager, usually deployed alongside Prometheus, forms the alerting layer of the stack, handling alerts generated by Prometheus and deduplicating, grouping, and routing them to integrations like email or PagerDuty.

      In addition, tools like kube-state-metrics and node_exporter expose cluster-level Kubernetes object metrics as well as machine-level metrics like CPU and memory usage.

      Implementing this monitoring stack on a Kubernetes cluster can be complicated, but luckily some of this complexity can be managed with the Helm package manager and CoreOS’s Prometheus Operator and kube-prometheus projects. These projects bake in standard configurations and dashboards for Prometheus and Grafana, and abstract away some of the lower-level Kubernetes object definitions. The Helm prometheus-operator chart allows you to get a full cluster monitoring solution up and running by installing Prometheus Operator and the rest of the components listed above, along with a default set of dashboards, rules, and alerts useful for monitoring Kubernetes clusters.

      In this tutorial, we will demonstrate how to install the prometheus-operator Helm chart on a DigitalOcean Kubernetes cluster. By the end of the tutorial, you will have installed a full monitoring stack into your cluster.

      Prerequisites

      To follow this tutorial, you will need a DigitalOcean Kubernetes cluster, a kubectl installation on your local machine configured to connect to that cluster, and the Helm package manager installed on your local machine with its Tiller component running in the cluster.

      Step 1 — Creating a Custom Values File

      Before we install the prometheus-operator Helm chart, we’ll create a custom values file that will override some of the chart’s defaults with DigitalOcean-specific configuration parameters. To learn more about overriding default chart values, consult the Helm Install section of the Helm docs.

      To begin, create and open a file called custom-values.yaml on your local machine using nano or your favorite editor:

      Copy and paste in the following custom values, which enable persistent storage for the Prometheus, Grafana, and Alertmanager components, and disable monitoring for Kubernetes control plane components not exposed on DigitalOcean Kubernetes:

      custom-values.yaml

      # Define persistent storage for Prometheus (PVC)
      prometheus:
        prometheusSpec:
          storageSpec:
            volumeClaimTemplate:
              spec:
                accessModes: ["ReadWriteOnce"]
                storageClassName: do-block-storage
                resources:
                  requests:
                    storage: 5Gi
      
      # Define persistent storage for Grafana (PVC)
      grafana:
        # Set password for Grafana admin user
        adminPassword: your_admin_password
        persistence:
          enabled: true
          storageClassName: do-block-storage
          accessModes: ["ReadWriteOnce"]
          size: 5Gi
      
      # Define persistent storage for Alertmanager (PVC)
      alertmanager:
        alertmanagerSpec:
          storage:
            volumeClaimTemplate:
              spec:
                accessModes: ["ReadWriteOnce"]
                storageClassName: do-block-storage
                resources:
                  requests:
                    storage: 5Gi
      
      # Change default node-exporter port
      prometheus-node-exporter:
        service:
          port: 30206
          targetPort: 30206
      
      # Disable Etcd metrics
      kubeEtcd:
        enabled: false
      
      # Disable Controller metrics
      kubeControllerManager:
        enabled: false
      
      # Disable Scheduler metrics
      kubeScheduler:
        enabled: false
      

      In this file, we override some of the default values packaged with the chart in its values.yaml file.

      We first enable persistent storage for Prometheus, Grafana, and Alertmanager so that their data persists across Pod restarts. Behind the scenes, this defines a 5 Gi Persistent Volume Claim (PVC) for each component, using the DigitalOcean Block Storage storage class. You should modify the size of these PVCs to suit your monitoring storage needs. To learn more about PVCs, consult Persistent Volumes from the official Kubernetes docs.
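
      Once the chart is installed in Step 2, you can confirm that these claims were created and bound to DigitalOcean Block Storage volumes by listing PVCs in the monitoring namespace. This is just a quick sanity check; the claim names will include your Helm release name:

      • kubectl get pvc -n monitoring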

      Next, replace your_admin_password with a secure password that you'll use to log in to the Grafana metrics dashboard with the admin user.

      We'll then configure a different port for node-exporter. Node-exporter runs on each Kubernetes node and provides OS and hardware metrics to Prometheus. We must change its default port to get around the DigitalOcean Kubernetes firewall defaults, which will block port 9100 but allow ports in the range 30000-32767. Alternatively, you can configure a custom firewall rule for node-exporter. To learn how, consult How to Configure Firewall Rules from the official DigitalOcean Cloud Firewalls docs.

      Finally, we'll disable metrics collection for three Kubernetes control plane components that do not expose metrics on DigitalOcean Kubernetes: the Kubernetes Scheduler, the Controller Manager, and the etcd cluster data store.

      To see the full list of configurable parameters for the prometheus-operator chart, consult the Configuration section from the chart repo README or the default values file.
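
      If you'd like to browse the chart's full set of default values locally before overriding them, you can dump them to a file with Helm's inspect command (Helm 2 syntax, matching the helm install command used in the next step):

      • helm inspect values stable/prometheus-operator > default-values.yaml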

      When you're done editing, save and close the file. We can now install the chart using Helm.

      Step 2 — Installing the prometheus-operator Chart

      The prometheus-operator Helm chart will install the following monitoring components into your DigitalOcean Kubernetes cluster:

      • Prometheus Operator, a Kubernetes Operator that allows you to configure and manage Prometheus clusters. Kubernetes Operators integrate domain-specific logic into the process of packaging, deploying, and managing applications with Kubernetes. To learn more about Kubernetes Operators, consult the CoreOS Operators Overview. To learn more about Prometheus Operator, consult this introductory post on the Prometheus Operator and the Prometheus Operator GitHub repo. Prometheus Operator will be installed as a Deployment.
      • Prometheus, installed as a StatefulSet.
      • Alertmanager, a service that handles alerts sent by the Prometheus server and routes them to integrations like PagerDuty or email. To learn more about Alertmanager, consult Alerting from the Prometheus docs. Alertmanager will be installed as a StatefulSet.
      • Grafana, a time series data visualization tool that allows you to visualize and create dashboards for your Prometheus metrics. Grafana will be installed as a Deployment.
      • node-exporter, a Prometheus exporter that runs on cluster nodes and provides OS and hardware metrics to Prometheus. Consult the node-exporter GitHub repo to learn more. node-exporter will be installed as a DaemonSet.
      • kube-state-metrics, an add-on agent that listens to the Kubernetes API server and generates metrics about the state of Kubernetes objects like Deployments and Pods. You can learn more by consulting the kube-state-metrics GitHub repo. kube-state-metrics will be installed as a Deployment.

      By default, along with scraping metrics generated by node-exporter, kube-state-metrics, and the other components listed above, Prometheus will be configured to scrape metrics from the following components:

      • kube-apiserver, the Kubernetes API server.
      • CoreDNS, the Kubernetes cluster DNS server.
      • kubelet, the primary node agent that interacts with kube-apiserver to manage Pods and containers on a node.
      • cAdvisor, a node agent that discovers running containers and collects their CPU, memory, filesystem, and network usage metrics.

      On your local machine, let's begin by installing the prometheus-operator Helm chart and passing in the custom values file we created above:

      • helm install --namespace monitoring --name doks-cluster-monitoring -f custom-values.yaml stable/prometheus-operator

      Here we run helm install and install all components into the monitoring namespace, which we create at the same time. This allows us to cleanly separate the monitoring stack from the rest of the Kubernetes cluster. We name the Helm release doks-cluster-monitoring and pass in the custom values file we created in Step 1. Finally, we specify that we'd like to install the prometheus-operator chart from the Helm stable chart repository.

      You should see the following output:

      Output

      NAME:   doks-cluster-monitoring
      LAST DEPLOYED: Mon Apr 22 10:30:42 2019
      NAMESPACE: monitoring
      STATUS: DEPLOYED

      RESOURCES:
      ==> v1/PersistentVolumeClaim
      NAME                             STATUS   VOLUME  CAPACITY  ACCESS MODES  STORAGECLASS      AGE
      doks-cluster-monitoring-grafana  Pending                                  do-block-storage  10s

      ==> v1/ServiceAccount
      NAME                                        SECRETS  AGE
      doks-cluster-monitoring-grafana             1        10s
      doks-cluster-monitoring-kube-state-metrics  1        10s
      . . .

      ==> v1beta1/ClusterRoleBinding
      NAME                                                    AGE
      doks-cluster-monitoring-kube-state-metrics              9s
      psp-doks-cluster-monitoring-prometheus-node-exporter    9s

      NOTES:
      The Prometheus Operator has been installed. Check its status by running:
        kubectl --namespace monitoring get pods -l "release=doks-cluster-monitoring"

      Visit https://github.com/coreos/prometheus-operator for instructions on how to create & configure Alertmanager and Prometheus instances using the Operator.

      This indicates that Prometheus Operator, Prometheus, Grafana, and the other components listed above have successfully been installed into your DigitalOcean Kubernetes cluster.
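
      If you'd like to re-print these release notes or check the release's overall status later, you can use helm status with the release name (Helm 2 syntax):

      • helm status doks-cluster-monitoring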

      Following the note in the helm install output, check the status of the release's Pods using kubectl get pods:

      • kubectl --namespace monitoring get pods -l "release=doks-cluster-monitoring"

      You should see the following:

      Output

      NAME                                                         READY   STATUS    RESTARTS   AGE
      doks-cluster-monitoring-grafana-9d7f984c5-hxnw6              2/2     Running   0          3m36s
      doks-cluster-monitoring-kube-state-metrics-dd8557f6b-9rl7j   1/1     Running   0          3m36s
      doks-cluster-monitoring-pr-operator-9c5b76d78-9kj85          1/1     Running   0          3m36s
      doks-cluster-monitoring-prometheus-node-exporter-2qvxw       1/1     Running   0          3m36s
      doks-cluster-monitoring-prometheus-node-exporter-7brwv       1/1     Running   0          3m36s
      doks-cluster-monitoring-prometheus-node-exporter-jhdgz       1/1     Running   0          3m36s

      This indicates that all the monitoring components are up and running, and you can begin exploring Prometheus metrics using Grafana and its preconfigured dashboards.

      Step 3 — Accessing Grafana and Exploring Metrics Data

      The prometheus-operator Helm chart exposes Grafana as a ClusterIP Service, which means that it's only accessible via a cluster-internal IP address. To access Grafana outside of your Kubernetes cluster, you can either use kubectl patch to update the Service in place to a public-facing type like NodePort or LoadBalancer, or kubectl port-forward to forward a local port to a Grafana Pod port.

      In this tutorial we'll forward ports, but to learn more about kubectl patch and Kubernetes Service types, you can consult Update API Objects in Place Using kubectl patch and Services from the official Kubernetes docs.
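
      For reference, a patch along the following lines would switch the Grafana Service to the LoadBalancer type, which provisions a DigitalOcean Load Balancer for it. This is only a sketch of the alternative approach; the rest of this tutorial uses port forwarding:

      • kubectl patch svc doks-cluster-monitoring-grafana -n monitoring -p '{"spec": {"type": "LoadBalancer"}}'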

      Begin by listing running Services in the monitoring namespace:

      • kubectl get svc -n monitoring

      You should see the following Services:

      Output

      NAME                                               TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)             AGE
      alertmanager-operated                              ClusterIP   None             <none>        9093/TCP,6783/TCP   34m
      doks-cluster-monitoring-grafana                    ClusterIP   10.245.105.130   <none>        80/TCP              34m
      doks-cluster-monitoring-kube-state-metrics         ClusterIP   10.245.140.151   <none>        8080/TCP            34m
      doks-cluster-monitoring-pr-alertmanager            ClusterIP   10.245.197.254   <none>        9093/TCP            34m
      doks-cluster-monitoring-pr-operator                ClusterIP   10.245.14.163    <none>        8080/TCP            34m
      doks-cluster-monitoring-pr-prometheus              ClusterIP   10.245.201.173   <none>        9090/TCP            34m
      doks-cluster-monitoring-prometheus-node-exporter   ClusterIP   10.245.72.218    <none>        30206/TCP           34m
      prometheus-operated                                ClusterIP   None             <none>        9090/TCP            34m

      We are going to forward local port 8000 to port 80 of the doks-cluster-monitoring-grafana Service, which will in turn forward to port 3000 of a running Grafana Pod. These Service and Pod ports are configured in the stable/grafana Helm chart values file:

      • kubectl port-forward -n monitoring svc/doks-cluster-monitoring-grafana 8000:80

      You should see the following output:

      Output

      Forwarding from 127.0.0.1:8000 -> 3000
      Forwarding from [::1]:8000 -> 3000

      This indicates that local port 8000 is being forwarded successfully to a Grafana Pod.

      Visit http://localhost:8000 in your web browser. You should see the following Grafana login page:

      Grafana Login Page

      Enter admin as the username and the password you configured in custom-values.yaml. Then, hit Log In.

      You'll be brought to the following Home Dashboard:

      Grafana Home Page

      In the left-hand navigation bar, select the Dashboards button, then click on Manage:

      Grafana Dashboard Tab

      You'll be brought to the following dashboard management interface, which lists the dashboards installed by the prometheus-operator Helm chart:

      Grafana Dashboard List

      These dashboards are generated by kubernetes-mixin, an open-source project that allows you to create a standardized set of cluster monitoring Grafana dashboards and Prometheus alerts. To learn more, consult the Kubernetes Mixin GitHub repo.

      Click in to the Kubernetes / Nodes dashboard, which visualizes CPU, memory, disk, and network usage for a given node:

      Grafana Nodes Dashboard

      Describing each dashboard and how to use it to visualize your cluster's metrics data goes beyond the scope of this tutorial. To learn more about the USE method for analyzing a system's performance, you can consult Brendan Gregg's The Utilization, Saturation, and Errors (USE) Method page. Google's SRE Book is another helpful resource, in particular Chapter 6: Monitoring Distributed Systems. To learn how to build your own Grafana dashboards, check out Grafana's Getting Started page.

      In the next step, we'll follow a similar process to connect to and explore the Prometheus monitoring system.

      Step 4 — Accessing Prometheus and Alertmanager

      To connect to the Prometheus Pods, we once again have to use kubectl port-forward to forward a local port. If you’re done exploring Grafana, you can close the port-forward tunnel by hitting CTRL-C. Alternatively you can open a new shell and port-forward connection.

      Begin by listing running Services in the monitoring namespace:

      • kubectl get svc -n monitoring

      You should see the following Services:

      Output

      NAME                                               TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)             AGE
      alertmanager-operated                              ClusterIP   None             <none>        9093/TCP,6783/TCP   34m
      doks-cluster-monitoring-grafana                    ClusterIP   10.245.105.130   <none>        80/TCP              34m
      doks-cluster-monitoring-kube-state-metrics         ClusterIP   10.245.140.151   <none>        8080/TCP            34m
      doks-cluster-monitoring-pr-alertmanager            ClusterIP   10.245.197.254   <none>        9093/TCP            34m
      doks-cluster-monitoring-pr-operator                ClusterIP   10.245.14.163    <none>        8080/TCP            34m
      doks-cluster-monitoring-pr-prometheus              ClusterIP   10.245.201.173   <none>        9090/TCP            34m
      doks-cluster-monitoring-prometheus-node-exporter   ClusterIP   10.245.72.218    <none>        30206/TCP           34m
      prometheus-operated                                ClusterIP   None             <none>        9090/TCP            34m

      We are going to forward local port 9090 to port 9090 of the doks-cluster-monitoring-pr-prometheus Service:

      • kubectl port-forward -n monitoring svc/doks-cluster-monitoring-pr-prometheus 9090:9090

      You should see the following output:

      Output

      Forwarding from 127.0.0.1:9090 -> 9090
      Forwarding from [::1]:9090 -> 9090

      This indicates that local port 9090 is being forwarded successfully to a Prometheus Pod.

      Visit http://localhost:9090 in your web browser. You should see the following Prometheus Graph page:

      Prometheus Graph Page

      From here you can use PromQL, the Prometheus query language, to select and aggregate time series metrics stored in its database. To learn more about PromQL, consult Querying Prometheus from the official Prometheus docs.

      In the Expression field, type machine_cpu_cores and hit Execute. You should see a list of time series with the metric machine_cpu_cores that reports the number of CPU cores on a given node. You can see which node generated the metric and which job scraped the metric in the metric labels.
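
      To go one step further, here are two more expressions you might try. These assume the metric names exposed by the versions of node-exporter and kube-state-metrics bundled with the chart, which can vary between releases, so treat them as a starting point:

      # Per-node, non-idle CPU usage averaged over the last 5 minutes
      sum(rate(node_cpu_seconds_total{mode!="idle"}[5m])) by (instance)

      # Number of Pods per namespace, as reported by kube-state-metrics
      count(kube_pod_info) by (namespace)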

      Finally, in the top navigation bar, click on Status and then Targets to see the list of targets Prometheus has been configured to scrape. You should see a list of targets corresponding to the list of monitoring endpoints described at the beginning of Step 2.

      To learn more about Prometheus and how to query your cluster metrics, consult the official Prometheus docs.

      We'll follow a similar process to connect to Alertmanager, which manages alerts generated by Prometheus. You can explore these alerts by clicking Alerts in the Prometheus top navigation bar.

      To connect to the Alertmanager Pods, we will once again use kubectl port-forward to forward a local port. If you’re done exploring Prometheus, you can close the port-forward tunnel by hitting CTRL-C. Alternatively you can open a new shell and port-forward connection.

      Begin by listing running Services in the monitoring namespace:

      • kubectl get svc -n monitoring

      You should see the following Services:

      Output

      NAME                                               TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)             AGE
      alertmanager-operated                              ClusterIP   None             <none>        9093/TCP,6783/TCP   34m
      doks-cluster-monitoring-grafana                    ClusterIP   10.245.105.130   <none>        80/TCP              34m
      doks-cluster-monitoring-kube-state-metrics         ClusterIP   10.245.140.151   <none>        8080/TCP            34m
      doks-cluster-monitoring-pr-alertmanager            ClusterIP   10.245.197.254   <none>        9093/TCP            34m
      doks-cluster-monitoring-pr-operator                ClusterIP   10.245.14.163    <none>        8080/TCP            34m
      doks-cluster-monitoring-pr-prometheus              ClusterIP   10.245.201.173   <none>        9090/TCP            34m
      doks-cluster-monitoring-prometheus-node-exporter   ClusterIP   10.245.72.218    <none>        30206/TCP           34m
      prometheus-operated                                ClusterIP   None             <none>        9090/TCP            34m

      We are going to forward local port 9093 to port 9093 of the doks-cluster-monitoring-pr-alertmanager Service.

      • kubectl port-forward -n monitoring svc/doks-cluster-monitoring-pr-alertmanager 9093:9093

      You should see the following output:

      Output

      Forwarding from 127.0.0.1:9093 -> 9093
      Forwarding from [::1]:9093 -> 9093

      This indicates that local port 9093 is being forwarded successfully to an Alertmanager Pod.

      Visit http://localhost:9093 in your web browser. You should see the following Alertmanager Alerts page:

      Alertmanager Alerts Page

      From here, you can explore firing alerts and optionally silencing them. To learn more about Alertmanager, consult the official Alertmanager documentation.

      Conclusion

      In this tutorial, you installed a Prometheus, Grafana, and Alertmanager monitoring stack into your DigitalOcean Kubernetes cluster with a standard set of dashboards, Prometheus rules, and alerts. Since this was done using Helm, you can use helm upgrade, helm rollback, and helm delete to upgrade, roll back, or delete the monitoring stack. To learn more about these functions, consult How To Install Software on Kubernetes Clusters with the Helm Package Manager.
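
      For example, after editing custom-values.yaml you could roll your changes out to the running release, or remove the stack entirely, with commands like the following (Helm 2 syntax; the --purge flag also deletes the release's history):

      • helm upgrade doks-cluster-monitoring stable/prometheus-operator -f custom-values.yaml
      • helm delete doks-cluster-monitoring --purge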

      The prometheus-operator chart helps you get cluster monitoring up and running quickly using Helm. You may wish to build, deploy, and configure Prometheus Operator manually. To do so, consult the Prometheus Operator and kube-prometheus GitHub repos.




      Monitoring Salt Minions with Beacons



      Every action performed by Salt, such as applying a highstate or restarting a minion, generates an event. Beacons emit events for non-salt processes, such as system state changes or file changes. This guide will use Salt beacons to notify the Salt master of changes to minions, and Salt reactors to react to those changes.

      Before You Begin

      If you don’t already have a Salt master and minion, follow the first steps in our Getting Started with Salt – Basic Installation and Setup guide.

      Note

      The steps in this guide require root privileges. Be sure to run the steps below as root or with the sudo prefix. For more information on privileges, see our Users and Groups guide.

      Example 1: Preventing Configuration Drift

      Configuration drift occurs when there are untracked changes to a system configuration file. Salt can help prevent configuration drift by ensuring that a file is immediately reverted to a safe state upon change. In order to do this, we first have to let Salt manage the file. This section will use an NGINX configuration file as an example, but you can choose any file.

      Manage Your File

      1. On your Salt master, create a directory for your managed files in /srv/salt/files:

        mkdir /srv/salt/files
        
      2. On your Salt master, place your nginx.conf, or whichever file you would like to manage, in the /srv/salt/files folder.

      3. On your Salt master, create a state file to manage the NGINX configuration file:

        /srv/salt/nginx_conf.sls
        
        /etc/nginx/nginx.conf:
          file.managed:
            - source:
              - salt://files/nginx.conf
            - makedirs: True

        There are two file paths in this .sls file. The first file path is the path to your managed file on your minion. The second, under source and prefixed with salt://, points to the file path on your master. salt:// is a convenience file path that maps to /srv/salt.

      4. On your Salt master, create a top file if it does not already exist and add your nginx_conf.sls:

        /srv/salt/top.sls
        
        base:
          '*':
            - nginx_conf
      5. Apply a highstate from your Salt master to run the nginx_conf.sls state on your minions.

        salt '*' state.apply
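
        Before running the highstate for real, you can optionally preview the changes it would make with Salt's test mode. This dry run makes no changes on the minions:

        salt '*' state.apply nginx_conf test=True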
        

      Create a Beacon

      1. In order to be notified when a file changes, you will need the Python pyinotify package. Create a Salt state that will handle installing the pyinotify package on your minions:

        /srv/salt/packages.sls
        
        python-pip:
          pkg.installed
        
        pyinotify:
          pip.installed:
            - require:
              - pkg: python-pip
                

        Note

        The inotify beacon only works on OSes that have inotify kernel support. Currently this excludes FreeBSD, macOS, and Windows.

      2. On the Salt master create a minion.d directory to store the beacon configuration file:

        mkdir /srv/salt/files/minion.d
        
      3. Now create a beacon that will emit an event every time the nginx.conf file changes on your minion. Create the /etc/salt/minion.d/beacons.conf file and add the following lines:

        /etc/salt/minion.d/beacons.conf
        
        beacons:
          inotify:
            - files:
                /etc/nginx/nginx.conf:
                  mask:
                    - modify
            - disable_during_state_run: True
      4. To apply this beacon to your minions, create a new file.managed Salt state:

        /srv/salt/beacons.sls
        
        /etc/salt/minion.d/beacons.conf:
          file.managed:
            - source:
              - salt://files/minion.d/beacons.conf
            - makedirs: True
            
      5. Add the new packages and beacons states to your Salt master’s top file:

        /srv/salt/top.sls
        
        base:
          '*':
            - nginx_conf
            - packages
            - beacons
      6. Apply a highstate from your Salt master to implement these changes on your minions:

        salt '*' state.apply
        
      7. Open another shell to your Salt master and start the Salt event runner. You will use this to monitor for file change events from your beacon.

        salt-run state.event pretty=True
        
      8. On your Salt minion, make a change to your nginx.conf file, and then check out your Salt event runner shell. You should see an event like the following:

          
        salt/beacon/salt-minion/inotify//etc/nginx/nginx.conf	{
            "_stamp": "2018-10-10T13:53:47.163499",
            "change": "IN_MODIFY",
            "id": "salt-minion",
            "path": "/etc/nginx/nginx.conf"
        }
        
        

        Note that the first line is the name of the event, and it includes your Salt minion name and the path to your managed file. We will use this event name in the next section.

      9. To revert the nginx.conf file to its initial state, you can apply a highstate from your Salt master.

        salt '*' state.apply nginx_conf
        

        Open your managed file on your Salt minion and notice that the change has been reverted. We will automate this last step in the next section.

      Create a Reactor

      1. On your Salt master, create the /srv/reactor directory:

        mkdir /srv/reactor
        
      2. Then create a reactor state file in the /srv/reactor directory and include the following:

        /srv/reactor/nginx_conf_reactor.sls
        
        /etc/nginx/nginx.conf:
          local.state.apply:
            - tgt: {{ data['id'] }}
            - arg:
              - nginx_conf

        The file path in the first line is simply the name of the reactor, and can be whatever you choose. The tgt, or target, is the Salt minion that will receive the highstate. In this case, the information passed to the reactor from the beacon event is used to programmatically choose the right Salt minion ID. This information is available as the data dictionary. The arg, or argument, is the name of the Salt state file that was created to manage the nginx.conf file.

      3. On your Salt master, create a reactor.conf file and include the new reactor state file:

        /etc/salt/master.d/reactor.conf
        
        reactor:
          - 'salt/beacon/*/inotify//etc/nginx/nginx.conf':
            - /srv/reactor/nginx_conf_reactor.sls

        This reactor.conf file is essentially a list of event names matched to reactor state files. In this example we’ve used a glob (*) in the event name instead of specifying a specific minion ID (which means that a change to nginx.conf on any minion will trigger the reactor), but you might find that a specific minion ID better suits your needs.

      4. Restart the salt-master service to apply the reactor.conf file:

        systemctl restart salt-master
        
      5. On your Salt minion, make a change to the nginx.conf file. Then check out your event runner shell and you should see a number of events. Then, check your nginx.conf file. The changes you made should have automatically been reverted.

      Congratulations, you now know how to manage configuration drift with Salt. All future updates to nginx.conf should be made on the Salt master and applied using state.apply.
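
      As a sketch of that workflow: edit the copy of nginx.conf under /srv/salt/files on the Salt master, then apply just the nginx_conf state. The target salt-minion below is the example minion ID from the event output above; substitute your own minion ID, or '*' to target every minion:

        salt 'salt-minion' state.apply nginx_conf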

      Example 2: Monitoring Minion Memory Usage with Slack

      Salt comes with a number of system monitoring beacons. In this example we will monitor a minion’s memory usage and send a Slack notification when the memory usage has passed a certain threshold. For this section you will need to create a Slack bot, obtain an OAuth token, and configure the bot to be able to send Slack messages on your behalf.

      Configure Your Slack App

      1. Create a Slack app.

      2. From the Slack app settings page, navigate to OAuth & Permissions.

      3. Copy down the OAuth Access Token.

      4. Under Scopes, select Send Messages As < your app name >.
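
      Before wiring the token into Salt, you can optionally confirm that it works by calling Slack's auth.test Web API method with curl; the token below is a placeholder for the OAuth Access Token you copied in step 3, and a working token returns a JSON response containing "ok": true:

        curl -s -X POST -H "Authorization: Bearer xoxp-your-token-here" https://slack.com/api/auth.test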

      Create a Beacon

      1. On your Salt master, open or create the /srv/salt/files/minion.d/beacons.conf file and add the following lines. If you already have a beacons.conf file from the previous example, leave out the beacons: line, but ensure that the rest of the configuration is indented two spaces:

        /srv/salt/files/minion.d/beacons.conf
        
        beacons:
          memusage:
            beacon.present:
              - percent: 15%
              - interval: 15

        In this example we’ve left the memory usage percentage low to ensure the beacon event will fire, and the event interval set to 15 seconds. In a production environment you should change these to more sane values.

      2. Apply a highstate from your Salt master to add the beacon to your minions:

        salt '*' state.apply
        
      3. If you haven’t already, open another shell into your Salt master and start the event runner:

        salt-run state.event pretty=True
        
      4. After a few seconds, assuming you’ve set the memory percentage low enough, you should see an event like the following:

          
        salt/beacon/salt-minion/memusage/	{
            "_stamp": "2018-10-10T15:48:53.165368",
            "id": "salt-minion",
            "memusage": 20.7
        }
        
        

        Note that the first line is the name of the event, and contains the minion name. We will use this event name in the next section.

      Create a Reactor

      1. On your Salt master, create the /srv/reactor directory if you have not already done so:

        mkdir /srv/reactor
        
      2. Then create a reactor state file and add the following lines, making sure to change the channel, api_key, and from_name keys to reflect your desired values. The api_key is the OAuth token you copied down in step 3 of the Configure Your Slack App section:

        /srv/reactor/memusage.sls
        
        Send memusage to Slack:
          local.slack.post_message:
            - tgt: {{ data['id'] }}
            - kwarg:
                channel: "#general"
                api_key: "xoxp-451607817121-453578458246..."
                message: "{{ data['id'] }} has hit a memory usage threshold: {{ data['memusage'] }}%."
                from_name: "Memusage Bot"

        We’re using the data dictionary provided to the reactor from the memusage event to populate the minion ID and the memory usage.

      3. Open or create the reactor.conf file. If you already have a reactor.conf file from the previous example, leave out the reactor: line, but ensure that the rest of the configuration is indented two spaces:

        /etc/salt/master.d/reactor.conf
        
        reactor:
          - 'salt/beacon/*/memusage/':
            - '/srv/reactor/memusage.sls'

        In this example we’ve used a glob (*) in the event name instead of specifying a specific minion ID (which means that any memusage event will trigger the reactor), but you might find that a specific minion ID better suits your needs.

      4. Restart salt-master to apply the reactor.conf:

        systemctl restart salt-master
        
      5. In your event-runner shell, after a few seconds, you should see an event like the following:

          
        salt/job/20181010161053393111/ret/salt-minion	{
            "_stamp": "2018-10-10T16:10:53.571956",
            "cmd": "_return",
            "fun": "slack.post_message",
            "fun_args": [
                {
                    "api_key": "xoxp-451607817121-453578458246-452348335312-2328ce145e5c0c724c3a8bc2afafee17",
                    "channel": "#general",
                    "from_name": "Memusage Bot",
                    "message": "salt-minion has hit a memory usage threshold: 20.7."
                }
            ],
            "id": "salt-minion",
            "jid": "20181010161053393111",
            "retcode": 0,
            "return": true,
            "success": true
        }
        
        
      6. Open Slack and you should see that your app has notified the room.

      Congratulations, you now know how to monitor your Salt minion’s memory usage with Slack integration. Salt can also monitor CPU load, disk usage, and a number of other things. Refer to the More Information section below for additional resources.
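
      For instance, a beacons.conf entry along these lines would also watch disk usage and load average. This is only a sketch: the thresholds are illustrative, and the exact option format for these beacons has changed between Salt releases, so check the beacon documentation for the version you are running:

        beacons:
          diskusage:
            # Emit an event when the root filesystem passes 90% usage (illustrative threshold)
            - /: 90%
          load:
            # Emit an event when the 1/5/15 minute load averages leave these [min, max] ranges
            - averages:
                1m: [0.0, 2.0]
                5m: [0.0, 1.5]
                15m: [0.1, 1.0]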

      More Information

      You may wish to consult the following resources for additional information on this topic. While these are provided in the hope that they will be useful, please note that we cannot vouch for the accuracy or timeliness of externally hosted materials.



