
      What is Linode Longview


      Updated by Linode. Contributed by Linode.

      Our guide to installing and using Linode Longview.

      Longview is Linode’s system data graphing service. It tracks metrics for CPU, memory, and network bandwidth, both aggregate and per-process, and it provides real-time graphs that can help expose performance problems.

      The Longview client is open source and provides an agent that can be installed on any Linux distribution, including systems not hosted by Linode. However, Linode only offers technical support for CentOS, Debian, and Ubuntu.

      Note

      Longview for Cloud Manager is still being actively developed to reach parity with Linode’s Classic Manager. This guide will be updated as development work continues. See the Cloud Manager’s changelog for the latest information on Cloud Manager releases.

      Note

      Longview does not currently support CentOS 8.

      In this Guide:

      This guide provides an overview of Linode Longview. You will learn how to:

      • Install the Longview agent on a Linode.
      • Access and interpret the data that Longview collects.
      • Troubleshoot the Longview client.
      • Uninstall the Longview client.

      Before you Begin

      • To monitor and visualize a Linode’s system statistics, you need to install the Longview agent on that Linode. Have your Linode’s IP address available so you can SSH into the machine and install the agent.

      Install Linode Longview

      In this section, you will create a Longview Client instance in the Linode Cloud Manager and then install the Longview agent on an existing Linode. These steps will enable you to gather and visualize important system statistics for the corresponding Linode.

      Add the Longview Client

      1. Log into the Linode Cloud Manager and click on the Longview link in the sidebar.

      2. On the Longview Clients page, click the Add a Client link in the top right-hand corner of the page. This will create a Longview Client instance.

        Linode Cloud Manager Longview Clients Page

      3. An entry will appear displaying your Longview Client instance along with its auto-generated label, its current status, installation instructions, and API key. Its status will display as Waiting for data, since you have not yet installed the Longview agent on a running Linode.

        Note

        The displayed curl command will be used in the next section to install the Longview agent on the desired Linode. The long string appended to the url https://lv.linode.com/ is your Longview Client instance’s GUID (globally unique identifier).

        Linode Cloud Manager Longview Clients list

      Install the Longview Agent

      1. Install the Longview agent on the Linode whose system you’d like to monitor and visualize. Open a terminal on your local computer and log into your Linode over SSH. Replace the IP address with your own Linode’s IP address.

        ssh [email protected]
        
      2. Change to the root user.

        su - root
        
      3. Switch back to the Linode Cloud Manager in your browser, copy the Longview Client instance’s curl command, and paste it into your Terminal window. Press Enter to execute the command. The installation will take a few minutes to complete.

        Note

        Ensure you replace the example curl command below with your own Longview Client instance’s GUID.

        curl -s https://lv.linode.com/05AC7F6F-3B10-4039-9DEE09B0CC382A3D | sudo bash
        
      4. Once the installation is complete, verify that the Longview agent is running:

        sudo systemctl status longview
        

        You should see a similar output:

        CentOS:

          
            ● longview.service - SYSV: Longview statistics gathering
              Loaded: loaded (/etc/rc.d/init.d/longview; bad; vendor preset: disabled)
              Active: active (running) since Tue 2019-12-10 22:35:11 UTC; 40s ago
                Docs: man:systemd-sysv-generator(8)
              CGroup: /system.slice/longview.service
                      └─12202 linode-longview
        
        Dec 10 22:35:11 li322-60.members.linode.com systemd[1]: Starting SYSV: Longview statistics gathering...
        Dec 10 22:35:11 li322-60.members.linode.com longview[12198]: Starting longview: [  OK  ]
        Dec 10 22:35:11 li322-60.members.linode.com systemd[1]: Started SYSV: Longview statistics gathering.
        
        

        Debian or Ubuntu:

          
        ● longview.service - LSB: Longview Monitoring Agent
           Loaded: loaded (/etc/init.d/longview; generated; vendor preset: enabled)
           Active: active (running) since Mon 2019-12-09 21:55:39 UTC; 2s ago
             Docs: man:systemd-sysv-generator(8)
          Process: 2997 ExecStart=/etc/init.d/longview start (code=exited, status=0/SUCCESS)
            Tasks: 1 (limit: 4915)
           CGroup: /system.slice/longview.service
                   └─3001 linode-longview
               
        

        If the Longview agent is not running, start it with the following command:

        sudo systemctl start longview
        

        Your output should resemble the example output above.

      5. Switch back to the Linode Cloud Manager’s Longview Clients page in your browser and observe your Longview client’s quick view metrics and graph.

        Note

        It can take several minutes for data to load and display in the Cloud Manager but once it does, you’ll see the graphs and charts populating with your Linode’s metrics.

        Linode Cloud Manager Longview Clients Overview metrics

      Manually Install the Longview Agent with yum or apt

      It’s also possible to manually install Longview for CentOS, Debian, and Ubuntu. You should only need to manually install it if the instructions in the previous section failed.

      1. Before completing the steps below, ensure you have added a Longview Client instance using the Cloud Manager.

      2. Add a configuration file to store the repository information for the Longview agent:

        CentOS:

        Using the text editor of your choice, like nano, create a .repo file and copy the contents of the example file below. Replace REV in the repository URL with your CentOS version (e.g., 7). If unsure, you can find your CentOS version number with cat /etc/redhat-release.

        /etc/yum.repos.d/longview.repo

        [longview]
        name=Longview Repo
        baseurl=https://yum-longview.linode.com/centos/REV/noarch/
        enabled=1
        gpgcheck=1
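        The REV substitution can also be scripted. A minimal sketch, assuming the release string format shown below (on a real system you would read /etc/redhat-release instead of using the sample value):

```shell
# Derive the CentOS major version (REV) from a redhat-release string and
# emit the matching longview.repo contents.
# Sample string shown; on a real system use: release=$(cat /etc/redhat-release)
release="CentOS Linux release 7.9.2009 (Core)"
rev=$(echo "$release" | grep -oE '[0-9]+' | head -n1)   # first number is the major version
cat <<EOF
[longview]
name=Longview Repo
baseurl=https://yum-longview.linode.com/centos/${rev}/noarch/
enabled=1
gpgcheck=1
EOF
```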

        Debian or Ubuntu:

        Find the codename of the distribution running on your Linode.

        [email protected]:~# lsb_release -sc
        stretch
        

        Using the text editor of your choice, like nano, create a custom sources file that includes Longview’s Debian repository and the Debian distribution codename. In the command below, replace stretch with the output of the previous step.

        /etc/apt/sources.list.d/longview.list

        deb http://apt-longview.linode.com/ stretch main
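        The sources entry can likewise be generated from the codename. A small sketch (the codename is hard-coded here for illustration; on a real system substitute the output of lsb_release -sc):

```shell
# Build the apt sources line for Longview from the distribution codename.
codename="stretch"   # normally: codename=$(lsb_release -sc)
line="deb http://apt-longview.linode.com/ ${codename} main"
echo "$line"
```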
      3. Download the repository’s GPG key and import or move it to the correct location:

        CentOS:

        sudo curl -O https://yum-longview.linode.com/linode.key
        sudo rpm --import linode.key
        

        Debian or Ubuntu:

        sudo curl -O https://apt-longview.linode.com/linode.gpg
        sudo mv linode.gpg /etc/apt/trusted.gpg.d/linode.gpg
        
      4. Create a directory for the API key:

        sudo mkdir /etc/linode/
        
      5. Copy the API key from the Installation tab of your Longview client’s detailed view in the Linode Cloud Manager. Put the key into a file, replacing the key in the command below with your own.

        echo '266096EE-CDBA-0EBB-23D067749E27B9ED' | sudo tee /etc/linode/longview.key
        
      6. Install Longview:

        CentOS:

        sudo yum install linode-longview
        

        Debian or Ubuntu:

        sudo apt-get update
        sudo apt-get install linode-longview
        
      7. Once the installation is complete, verify that the Longview agent is running:

        sudo systemctl status longview
        

        You should see a similar output:

        CentOS:

          
        ● longview.service - SYSV: Longview statistics gathering
           Loaded: loaded (/etc/rc.d/init.d/longview; bad; vendor preset: disabled)
           Active: active (running) since Tue 2019-12-10 22:35:11 UTC; 40s ago
             Docs: man:systemd-sysv-generator(8)
           CGroup: /system.slice/longview.service
                   └─12202 linode-longview
        
        Dec 10 22:35:11 li322-60.members.linode.com systemd[1]: Starting SYSV: Longview statistics gathering...
        Dec 10 22:35:11 li322-60.members.linode.com longview[12198]: Starting longview: [  OK  ]
        Dec 10 22:35:11 li322-60.members.linode.com systemd[1]: Started SYSV: Longview statistics gathering.
            
        

        Debian or Ubuntu:

          
        ● longview.service - LSB: Longview Monitoring Agent
           Loaded: loaded (/etc/init.d/longview; generated; vendor preset: enabled)
           Active: active (running) since Mon 2019-12-09 21:55:39 UTC; 2s ago
             Docs: man:systemd-sysv-generator(8)
          Process: 2997 ExecStart=/etc/init.d/longview start (code=exited, status=0/SUCCESS)
            Tasks: 1 (limit: 4915)
           CGroup: /system.slice/longview.service
                   └─3001 linode-longview
              
        

        If the Longview client is not running, start it with the following command:

        sudo systemctl start longview
        

        Your output should resemble the example output above.

      8. Switch back to the Linode Cloud Manager’s Longview Clients page in your browser and observe your Longview client’s quick view metrics and graph.

        Note

        It can take several minutes for data to load and display in the Cloud Manager but once it does, you’ll see the graphs and charts populating with your Linode’s metrics.

        Linode Cloud Manager Longview Clients Overview metrics

      Longview’s Data Explained

      This section will provide an overview of the data and graphs available to you in the Longview Client’s detailed view.

      Access your Longview Client’s Detailed View

      1. To view a Longview Client’s detailed graphs and metrics, log into the Linode Cloud Manager and click on the Longview link in the sidebar.

        Access Longview in the Cloud Manager

      2. Viewing the Longview Clients listing page, click on the View Details button corresponding to the client whose Linode’s system statistics you’d like to view.

        View details for your Longview Client instance

      3. You will be brought to your Longview client’s Overview tab where you can view all the data and graphs corresponding to your Linode.

        To learn more about the Data available in a Longview Client’s Overview page, see the Overview section.

      4. From here you can click on any of your Longview Client instance’s tabs to view more related information.

        Note

        If your Linode has NGINX, Apache, or MySQL installed you will see a corresponding tab appear containing related system data.

      Overview

      The Overview tab shows all of your system’s most important statistics in one place. You can hover your cursor over any of your graphs in the Resource Allocation History section to view details about specific data points.

      Cloud Manager Longview Client's overview page

      1. Basic information about the system, including the operating system name and version, processor speed, uptime, and available updates. This area also includes your system’s top active processes.
      2. The time resolution for the graphs displayed in the Resource Allocation History section. The available options are Past 30 Minutes and Past 12 hours.
      3. Percentage of CPU time spent in wait (on disk), in user space, and in kernel space.
      4. Total amount of RAM being used, as well as the amount of memory in cache, in buffers, and in swap.
      5. Amount of network data that has been transferred to and from your system.
      6. Disk I/O. This is the amount of data being read from, or written to, the system’s disk storage.
      7. Average CPU load.
      8. Listening network services along with their related process, owner, protocol, port, and IP.
      9. A list of current active connections to the Linode.

      Installation

      The Installation tab provides quick instructions on how to install the Longview agent on your Linode and also displays the Longview client instance’s API key.

      Longview's installation tab

      Longview Plan Details

      Longview Free updates every 5 minutes and provides twelve hours of data history. Longview Pro gives you data resolution at 60 second intervals, and you can view a complete history of your Linode’s data instead of only the previous 30 minutes.

      Note

      Longview Pro is not yet available in the Linode Cloud Manager. Longview for Cloud Manager is still being actively developed to reach parity with Linode’s Classic Manager. This guide will be updated as development work continues. See the Cloud Manager’s changelog for the latest information on Cloud Manager releases.

      Troubleshooting

      If you’re experiencing problems with the Longview client, follow these steps to help determine the cause.

      Basic Diagnostics

      Ensure that:

      1. Your system is fully updated.

        Note

        Longview requires Perl 5.8 or later.

      2. The Longview client is running. You can verify with one of the two commands below, depending on your distribution’s initialization system:

        CentOS, Debian, and Ubuntu

        sudo systemctl status longview   # For distributions with systemd.
        

        Other Distributions

        sudo service longview status     # For distributions without systemd.
        

        If the Longview client is not running, start it with one of the following commands, depending on your distribution’s init system:

        CentOS, Debian, and Ubuntu

        sudo systemctl start longview
        

        Other Distributions

        sudo service longview start
        

        If the service fails to start, check Longview’s log for errors. The log file is located in /var/log/linode/longview.log.

      Debug Mode

      Restart the Longview client in debug mode for increased logging verbosity.

      1. First stop the Longview client:

        CentOS, Debian, and Ubuntu

        sudo systemctl stop longview   # For distributions with systemd.
        

        Other Distributions

        sudo service longview stop     # For distributions without systemd.
        
      2. Then restart Longview with the debug flag:

        sudo /etc/init.d/longview debug
        
      3. When you’re finished collecting information, repeat the first two steps to stop Longview and restart it again without the debug flag.

        If Longview does not close properly, find the process ID and kill the process:

        ps aux | grep longview
        sudo kill $PID
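        If you prefer to script the lookup, the PID is the second column of ps aux output. A minimal sketch using a captured sample line (the sample line is illustrative):

```shell
# Extract the PID field (column 2 of `ps aux` output) from a sample line.
# On a live system: pid=$(ps aux | grep '[l]inode-longview' | awk '{print $2}')
# (the [l] bracket trick keeps grep from matching its own process entry)
sample="root     12202  0.0  1.1  145792  11032 ?  S  22:35  0:00 linode-longview"
pid=$(echo "$sample" | awk '{print $2}')
echo "$pid"
```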
        

      Firewall Rules

      If your Linode has a firewall, it must allow communication with Longview’s aggregation host at longview.linode.com (IPv4: 96.126.119.66). You can view your firewall rules with one of the commands below, depending on the firewall controller used by your Linux distribution:

      firewalld

      sudo firewall-cmd --list-all
      


      iptables

      sudo iptables -S
      


      ufw

      sudo ufw show added
      


      If the output of those commands shows no rules for the Longview domain (or for 96.126.119.66, the IP address of the Longview domain), you must add them. A sample iptables rule that allows outbound HTTPS traffic to Longview would be the following:

      iptables -A OUTPUT -p tcp --dport 443 -d longview.linode.com -j ACCEPT
      

      Note

      If you use iptables, you should also make sure to persist any of your firewall rule changes. Otherwise, your changes will not be enforced if your Linode is rebooted. Review the iptables-persistent section of our iptables guide for help with this.

      Verify API key

      The API key given in the Linode Cloud Manager should match that on your system in /etc/linode/longview.key.

      1. In the Linode Cloud Manager, the API key is located in the Installation tab of your Longview Client instance’s detailed view.

      2. SSH into your Linode. The Longview key is located at /etc/linode/longview.key. Use cat to view the contents of that file and compare it to what’s shown in the Linode Cloud Manager:

        cat /etc/linode/longview.key
        

        The two should be the same. If they are not, paste the key from the Linode Cloud Manager into longview.key, overwriting anything already there.
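        This comparison can be wrapped in a small helper. A sketch (the key value below is the example key from earlier in this guide, not a real credential, and the temporary file stands in for /etc/linode/longview.key):

```shell
# check_key FILE EXPECTED -- succeed only if FILE holds exactly EXPECTED.
check_key() {
  [ "$(cat "$1" 2>/dev/null)" = "$2" ]
}

# Example usage with a temporary file standing in for /etc/linode/longview.key:
tmpkey=$(mktemp)
echo '266096EE-CDBA-0EBB-23D067749E27B9ED' > "$tmpkey"
check_key "$tmpkey" '266096EE-CDBA-0EBB-23D067749E27B9ED' && echo "key matches"
```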

      Cloned Keys

      If you clone a Linode which has Longview installed, you may encounter the following error:

        
      Multiple clients appear to be posting data with this API key. Please check your clients' configuration.
      
      

      This is caused by both Linodes posting data using the same Longview key. To resolve it:

      1. Uninstall the Longview agent on the cloned system.

        CentOS:

        sudo yum remove linode-longview
        

        Debian or Ubuntu:

        sudo apt-get remove linode-longview
        

        Other Distributions:

        sudo rm -rf /opt/linode/longview
        
      2. Add a new Linode Longview Client instance. This will create a new Longview API key independent from the system which it was cloned from.

        Note

        The GUID provided in the Longview Client’s installation URL is not the same as the Longview API key.

      3. Install the Longview Agent on the cloned Linode.
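      To illustrate the note’s distinction: the GUID is simply the final path segment of the installation URL shown in the Cloud Manager (example GUID from earlier in this guide), while the API key lives separately in /etc/linode/longview.key:

```shell
# Split the Longview installation URL into its GUID component.
install_url="https://lv.linode.com/05AC7F6F-3B10-4039-9DEE09B0CC382A3D"
guid=${install_url##*/}   # shell parameter expansion: strip everything through the last "/"
echo "$guid"
```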

      Contact Support

      If you still need assistance after performing these checks, please open a support ticket.

      Uninstall the Longview Client

      1. Log into the Linode Cloud Manager and click on the Longview link in the sidebar.

        Access Longview in the Cloud Manager

      2. Click the ellipsis button corresponding to the Longview Client instance you’d like to remove and select delete.

        Delete your Longview Client

      3. Next, remove the Longview agent from the operating system you want to stop monitoring. SSH into your Linode.

        ssh [email protected]
        
      4. Remove the linode-longview package with the command appropriate for your Linux distribution.

        CentOS:

        sudo yum remove linode-longview
        

        Debian or Ubuntu:

        sudo apt-get remove linode-longview
        

        Other Distributions:

        sudo rm -rf /opt/linode/longview
        


      This guide is published under a CC BY-ND 4.0 license.




      Deploy NodeBalancers with the Linode Cloud Controller Manager


      Updated by Linode. Written by Linode Community.

      The Linode Cloud Controller Manager (CCM) allows Kubernetes to deploy Linode NodeBalancers whenever a Service of the “LoadBalancer” type is created. This provides the Kubernetes cluster with a reliable way of exposing resources to the public internet. The CCM handles the creation and deletion of the NodeBalancer and correctly identifies the resources, and their networking, that the NodeBalancer will service.

      This guide will explain how to:

      • Create a service with the type “LoadBalancer.”
      • Use annotations to control the functionality of the NodeBalancer.
      • Use the NodeBalancer to terminate TLS encryption.

      Caution

      Using the Linode Cloud Controller Manager to create NodeBalancers will create billable resources on your Linode account. A NodeBalancer costs $10 a month. Be sure to follow the instructions at the end of the guide if you would like to delete these resources from your account.

      Before You Begin

      You should have a working knowledge of Kubernetes and familiarity with the kubectl command line tool before attempting the instructions found in this guide. For more information about Kubernetes, consult our Kubernetes Beginner’s Guide and our Getting Started with Kubernetes guide.

      When using the CCM for the first time, it’s highly suggested that you create a new Kubernetes cluster, as there are a number of issues that prevent the CCM from running on Nodes that are in the “Ready” state. For a completely automated install, you can use the Linode CLI’s k8s-alpha command line tool, which utilizes Terraform to fully bootstrap a Kubernetes cluster on Linode. It includes the Linode Container Storage Interface (CSI) Driver plugin, the Linode CCM plugin, and the ExternalDNS plugin. For more information on creating a Kubernetes cluster with the Linode CLI, review our How to Deploy Kubernetes on Linode with the k8s-alpha CLI guide.

      Note

      To manually add the Linode CCM to your cluster, you must start kubelet with the --cloud-provider=external flag. kube-apiserver and kube-controller-manager must NOT supply the --cloud-provider flag. For more information, visit the upstream Cloud Controller documentation.

      If you’d like to add the CCM to a cluster by hand, and you are using macOS, you can use the generate-manifest.sh file in the deploy folder of the CCM repository to generate a CCM manifest file that you can later apply to your cluster. Use the following command:

      ./generate-manifest.sh $LINODE_API_TOKEN us-east
      

      Be sure to replace $LINODE_API_TOKEN with a valid Linode API token, and replace us-east with the region of your choosing.

      To view a list of regions, you can use the Linode CLI, or you can view the Regions API endpoint.

      If you are not using macOS, you can copy the ccm-linode-template.yaml file and change the values of the data.apiToken and data.region fields manually.

      Using the CCM

      To use the CCM, you must have a collection of Pods that need to be load balanced, usually from a Deployment. For this example, you will create a Deployment that deploys three NGINX Pods, and then create a Service to expose those Pods to the internet using the Linode CCM.

      1. Create a Deployment manifest describing the desired state of the three replica NGINX containers:

        nginx-deployment.yaml

        apiVersion: apps/v1
        kind: Deployment
        metadata:
          name: nginx-deployment
          labels:
            app: nginx
        spec:
          replicas: 3
          selector:
            matchLabels:
              app: nginx
          template:
            metadata:
              labels:
                app: nginx
            spec:
              containers:
              - name: nginx
                image: nginx
                ports:
                - containerPort: 80
      2. Use the create command to apply the manifest:

        kubectl create -f nginx-deployment.yaml
        
      3. Create a Service for the Deployment:

        nginx-service.yaml

        apiVersion: v1
        kind: Service
        metadata:
          name: nginx-service
          annotations:
            service.beta.kubernetes.io/linode-loadbalancer-throttle: "4"
          labels:
            app: nginx
        spec:
          type: LoadBalancer
          ports:
          - name: http
            port: 80
            protocol: TCP
            targetPort: 80
          selector:
            app: nginx
          sessionAffinity: None

        The above Service manifest includes a few key concepts.

        • The first is the spec.type of LoadBalancer. This LoadBalancer type tells the Linode CCM to create a Linode NodeBalancer, which gives the Deployment it services a public-facing IP address for accessing the NGINX Pods.
        • There is additional information being passed to the CCM in the form of metadata annotations (service.beta.kubernetes.io/linode-loadbalancer-throttle in the example above), which are discussed in the next section.
      4. Use the create command to create the Service, and in turn, the NodeBalancer:

        kubectl create -f nginx-service.yaml
        

      You can log in to the Linode Cloud Manager to view your newly created NodeBalancer.

      Annotations

      There are a number of settings, called annotations, that you can use to further customize the functionality of your NodeBalancer. Each annotation should be included in the annotations section of the Service manifest file’s metadata, and all of the annotations are prefixed with service.beta.kubernetes.io/linode-loadbalancer-.

      Annotation (suffix) | Values | Default Value | Description
      throttle | 0-20 (0 disables the throttle) | 20 | Client Connection Throttle. This limits the number of new connections-per-second from the same client IP.
      protocol | tcp, http, https | tcp | Specifies the protocol for the NodeBalancer.
      tls | Example value: [ { "tls-secret-name": "prod-app-tls", "port": 443} ] | None | A JSON array (formatted as a string) that specifies which ports use TLS and their corresponding secrets. The secret type should be kubernetes.io/tls. For more information, see the TLS Encryption section.
      check-type | none, connection, http, http_body | None | The type of health check to perform on Nodes to ensure that they are serving requests. connection checks for a valid TCP handshake, http checks for a 2xx or 3xx response code, http_body checks for a certain string within the response body of the health check URL.
      check-path | string | None | The URL path that the NodeBalancer will use to check on the health of the back-end Nodes.
      check-body | string | None | The text that must be present in the body of the page used for health checks. For use with a check-type of http_body.
      check-interval | integer | None | The duration, in seconds, between health checks.
      check-timeout | integer (a value between 1-30) | None | Duration, in seconds, to wait for a health check to succeed before it is considered a failure.
      check-attempts | integer (a value between 1-30) | None | Number of health checks to perform before removing a back-end Node from service.
      check-passive | boolean | false | When true, 5xx status codes will cause the health check to fail.

      To learn more about checks, please see our reference guide to NodeBalancer health checks.
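      As an illustration of how several of the health-check annotations from the table combine in practice, here is a hedged fragment (the path and every value shown are hypothetical and should be tuned for your application):

```yaml
# Hypothetical combination of health-check annotations; all values are
# illustrative, not recommendations.
metadata:
  annotations:
    service.beta.kubernetes.io/linode-loadbalancer-check-type: "http"
    service.beta.kubernetes.io/linode-loadbalancer-check-path: "/healthz"
    service.beta.kubernetes.io/linode-loadbalancer-check-interval: "5"
    service.beta.kubernetes.io/linode-loadbalancer-check-timeout: "3"
    service.beta.kubernetes.io/linode-loadbalancer-check-attempts: "2"
```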

      TLS Encryption

      This section will describe how to set up TLS termination for a Service so that the Service can be accessed over https.

      Generating a TLS type Secret

      Kubernetes allows you to store secret information in a Secret object for use within your cluster. This is useful for storing things like passwords and API tokens. In the context of the Linode CCM, Secrets are useful for storing Transport Layer Security (TLS) certificates and keys. The linode-loadbalancer-tls annotation requires TLS certificates and keys to be stored as Kubernetes Secrets with the type of tls. Follow the next steps to create a valid tls type Secret:

      1. Generate a TLS key and certificate using a TLS toolkit like OpenSSL. Be sure to change the CN and O values to those of your own website domain.

        openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout key.pem -out cert.crt -subj "/CN=mywebsite.com/O=mywebsite.com"
        
      2. To create the secret, you can issue the create secret tls command, being sure to substitute $SECRET_NAME for the name you’d like to give to your secret. This will be how you reference the secret in your Service manifest.

        kubectl create secret tls $SECRET_NAME --key key.pem --cert cert.crt
        
      3. You can check to make sure your Secret has been successfully stored by using describe:

        kubectl describe secret $SECRET_NAME
        

        You should see output like the following:

          
        kubectl describe secret my-secret
        Name:         my-secret
        Namespace:    default
        Labels:       <none>
        Annotations:  <none>
        
        Type:  kubernetes.io/tls
        
        Data
        ====
        tls.crt:  1164 bytes
        tls.key:  1704 bytes
        
        

        If your key is not formatted correctly you’ll receive an error stating that there is no PEM formatted data within the key file.

      Defining TLS within a Service

      In order to use https you’ll need to instruct the Service to use the correct port through the proper annotations. Take the following code snippet as an example:

      nginx-service.yaml

      ...
      metadata:
        annotations:
          service.beta.kubernetes.io/linode-loadbalancer-protocol: https
          service.beta.kubernetes.io/linode-loadbalancer-tls: '[ { "tls-secret-name": "my-secret",
            "port": 443 } ]'
      ...

      The linode-loadbalancer-protocol annotation identifies the https protocol. Then, the linode-loadbalancer-tls annotation defines which Secret and port to use for serving https traffic. If you have multiple Secrets and ports for different environments (testing, staging, etc.), you can define more than one secret and port pair:

      nginx-service-two-environments.yaml

      ...
          service.beta.kubernetes.io/linode-loadbalancer-tls: |
            [ { "tls-secret-name": "my-secret", "port": 443 }, { "tls-secret-name": "my-secret-staging", "port": 8443 } ]
      ...

      Next, you’ll need to set up your Service to expose the https port. The whole example might look like the following:

      nginx-service.yaml

      apiVersion: v1
      kind: Service
      metadata:
        annotations:
          service.beta.kubernetes.io/linode-loadbalancer-protocol: https
          service.beta.kubernetes.io/linode-loadbalancer-throttle: "4"
          service.beta.kubernetes.io/linode-loadbalancer-tls: '[ { "tls-secret-name": "my-secret",
            "port": 443 } ]'
        labels:
          app: nginx
        name: nginx-service
      spec:
        ports:
        - name: https
          port: 443
          protocol: TCP
          targetPort: 80
        selector:
          app: nginx
        type: LoadBalancer

      Note that here the NodeBalancer created by the Service is terminating the TLS encryption and proxying that to port 80 on the NGINX Pod. If you had a Pod that listened on port 443, you would set the targetPort to that value.

      Session Affinity

      kube-proxy will always attempt to proxy traffic to a random backend Pod. To ensure that traffic is directed to the same Pod, you can use the sessionAffinity mechanism. When set to ClientIP, sessionAffinity ensures that all traffic from the same IP will be directed to the same Pod:

      session-affinity.yaml

      apiVersion: v1
      kind: Service
      metadata:
        name: nginx-service
        labels:
          app: nginx
      spec:
        type: LoadBalancer
        selector:
          app: nginx
        sessionAffinity: ClientIP
        sessionAffinityConfig:
          clientIP:
            timeoutSeconds: 100

      You can set the timeout for the session by using the spec.sessionAffinityConfig.clientIP.timeoutSeconds field.

      Troubleshooting

      If you are having problems with the CCM, such as the NodeBalancer not being created, you can check the CCM’s error logs. First, you’ll need to find the name of the CCM Pod in the kube-system namespace:

      kubectl get pods -n kube-system
      

      The Pod will be named ccm-linode- with five random characters at the end, like ccm-linode-jrvj2. Once you have the Pod name, you can view its logs. The --tail=n flag is used to return the last n lines, where n is the number of your choosing. The below example returns the last 100 lines:

      kubectl logs ccm-linode-jrvj2 -n kube-system --tail=100
      

      Note

      Currently the CCM only supports https ports within a manifest’s spec when the linode-loadbalancer-protocol is set to https. For regular http traffic, you’ll need to create an additional Service and NodeBalancer. For example, if you had the following in the Service manifest:

      unsupported-nginx-service.yaml
      ...
      spec:
        ports:
        - name: https
          port: 443
          protocol: TCP
          targetPort: 80
        - name: http
          port: 80
          protocol: TCP
          targetPort: 80
      ...

      The NodeBalancer would not be created and you would find an error similar to the following in your logs:

      ERROR: logging before flag.Parse: E0708 16:57:19.999318       1 service_controller.go:219] error processing service default/nginx-service (will retry): failed to ensure load balancer for service default/nginx-service: [400] [configs[0].protocol] The SSL private key and SSL certificate must be provided when using 'https'
      ERROR: logging before flag.Parse: I0708 16:57:19.999466       1 event.go:221] Event(v1.ObjectReference{Kind:"Service", Namespace:"default", Name:"nginx-service", UID:"5d1afc22-a1a1-11e9-ad5d-f23c919aa99b", APIVersion:"v1", ResourceVersion:"1248179", FieldPath:""}): type: 'Warning' reason: 'CreatingLoadBalancerFailed' Error creating load balancer (will retry): failed to ensure load balancer for service default/nginx-service: [400] [configs[0].protocol] The SSL private key and SSL certificate must be provided when using 'https'
      

      Removing the http port would allow you to create the NodeBalancer.
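      Until that limitation is lifted, plain http traffic needs its own Service, and therefore its own NodeBalancer. A minimal sketch of a companion manifest might look like the following (the name nginx-service-http is hypothetical):

```yaml
# Hypothetical companion Service for plain http traffic,
# separate from the https Service shown earlier
apiVersion: v1
kind: Service
metadata:
  name: nginx-service-http
  labels:
    app: nginx
spec:
  type: LoadBalancer
  selector:
    app: nginx
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
```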

      Delete a NodeBalancer

      To delete a NodeBalancer and the Service that it represents, you can use the Service manifest file you used to create the NodeBalancer. Simply use the delete command and supply your file name with the -f flag:

      kubectl delete -f nginx-service.yaml
      

      Similarly, you can delete the Service by name:

      kubectl delete service nginx-service
      

      Updating the CCM

      The easiest way to update the Linode CCM is to edit the DaemonSet that creates the Linode CCM Pod. To do so, you can run the edit command.

      kubectl edit ds -n kube-system ccm-linode
      

      The CCM DaemonSet manifest will appear in vim. Press i to enter insert mode. Navigate to the image field under spec.template.spec.containers and change its value to the desired version tag. For instance, if you had the following image:

      image: linode/linode-cloud-controller-manager:v0.2.2
      

      You could update the image to v0.2.3 by changing the image tag:

      image: linode/linode-cloud-controller-manager:v0.2.3
      

      For a complete list of CCM version tags, visit the CCM DockerHub page.

      Caution

      The CCM DaemonSet manifest may list latest as the image version tag, which does not necessarily point to the most recent release. To ensure you are running the latest version, first check the CCM DockerHub page, then pin the most recent release tag explicitly.

      Press escape to exit insert mode, then type :wq and press enter to save your changes. A new Pod will be created with the new image, and the old Pod will be deleted.

      Next Steps

      To further take advantage of Linode products through Kubernetes, check out our guide on how to use the Linode Container Storage Interface (CSI), which allows you to create persistent volumes backed by Linode Block Storage.


      This guide is published under a CC BY-ND 4.0 license.




      How to Use the Linode Packer Builder


      Updated by Linode Contributed by Linode

      What is Packer?

      Packer is a HashiCorp-maintained open source tool used to create machine images. A machine image provides the operating system, applications, application configurations, and data files that a virtual machine instance will run once it’s deployed. Using a single source configuration, you can generate identical machine images. Packer can be used in conjunction with common configuration management tools like Chef, Puppet, or Ansible to install software on your Linode and include those configurations in your image.

      In this guide you will complete the following steps:

      Before You Begin

      1. Ensure you have access to cURL on your computer.

      2. Generate a Linode API v4 access token with permission to read and write Linodes. You can follow the Get an Access Token section of the Getting Started with the Linode API guide if you do not already have one.

        Note

        The example cURL commands in this guide will refer to a $TOKEN environment variable. For example:

        curl -H "Authorization: Bearer $TOKEN" \
            https://api.linode.com/v4/images
        

        To set this variable up in your terminal, run:

        export TOKEN='<your-Linode-APIv4-token>'
        

        If you do not do this, you will need to alter these commands so that your API token is inserted wherever $TOKEN appears.

      3. Create an SSH authentication key-pair if your computer does not already have one. Your SSH public key will be added to your image via an Ansible module.

      4. Install Ansible on your computer and familiarize yourself with basic Ansible concepts (optional). Using the Getting Started With Ansible – Basic Installation and Setup guide, follow the steps in the Install Ansible section.

      The Linode Packer Builder

      In Packer’s ecosystem, builders are responsible for deploying machine instances and generating redeployable images from them. The Linode Packer builder can be used to create a Linode image that can be redeployed to other Linodes. You can share your image template across your team to ensure everyone is using a uniform development and testing environment. This process will help your team maintain an immutable infrastructure within your continuous delivery pipeline.

      The Linode Packer builder works in the following way:

      • You create a template to define the type of image you want Packer to build.
      • Packer uses the template to build the image on a temporary Linode.
      • A snapshot of the built image is taken and stored as a private Linode image.
      • The temporary Linode is deleted.
      • You can then reuse the private Linode image as desired, for example, by using your image to create Linode instances with Terraform.

      Install Packer

      The following instructions will install Packer on Ubuntu 18.04 from a downloaded binary. For more installation methods, including installing on other operating systems or compiling from source, see Packer’s official documentation.

      1. Make a Packer project directory in your home directory and then navigate to it:

        mkdir ~/packer
        cd ~/packer
        
      2. Download the precompiled binary for your system from the Packer website. Example wget commands are listed using the latest version available at time of publishing (1.4.4). You should inspect the links on the download page to see if a newer version is available and update the wget commands to use those URLs instead:

        • The 64-bit Linux .zip archive

          wget https://releases.hashicorp.com/packer/1.4.4/packer_1.4.4_linux_amd64.zip
          
        • The SHA256 checksums file

          wget https://releases.hashicorp.com/packer/1.4.4/packer_1.4.4_SHA256SUMS
          
        • The checksum signature file

          wget https://releases.hashicorp.com/packer/1.4.4/packer_1.4.4_SHA256SUMS.sig
          

      Verify the Download

      1. Import the HashiCorp Security GPG key (listed on the HashiCorp Security page under Secure Communications):

        gpg --recv-keys 51852D87348FFC4C
        

        The output should show that the key was imported:

          
        gpg: keybox '/home/user/.gnupg/pubring.kbx' created
        gpg: key 51852D87348FFC4C: 17 signatures not checked due to missing keys
        gpg: /home/user/.gnupg/trustdb.gpg: trustdb created
        gpg: key 51852D87348FFC4C: public key "HashiCorp Security " imported
        gpg: no ultimately trusted keys found
        gpg: Total number processed: 1
        gpg:               imported: 1
        
        
      2. Verify the checksum file’s GPG signature:

        gpg --verify packer*.sig packer*SHA256SUMS
        

        The output should contain the Good signature from "HashiCorp Security <[email protected]>" confirmation message:

          
        gpg: Signature made Tue 01 Oct 2019 06:30:17 PM UTC
        gpg:                using RSA key 91A6E7F85D05C65630BEF18951852D87348FFC4C
        gpg: Good signature from "HashiCorp Security " [unknown]
        gpg: WARNING: This key is not certified with a trusted signature!
        gpg:          There is no indication that the signature belongs to the owner.
        Primary key fingerprint: 91A6 E7F8 5D05 C656 30BE  F189 5185 2D87 348F FC4C
              
        
      3. Verify that the fingerprint output matches the fingerprint listed in the Secure Communications section of the HashiCorp Security page.

      4. Verify the .zip archive’s checksum:

        sha256sum -c packer*SHA256SUMS 2>&1 | grep OK
        

        The output should show the file’s name as given in the packer*SHA256SUMS file:

          
        packer_1.4.4_linux_amd64.zip: OK
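      If you have not used sha256sum -c before, the mechanics can be demonstrated with a throwaway file: the checksums file simply pairs a hash with a filename, and -c re-hashes the file and compares. A self-contained sketch (demo.txt is a hypothetical filename, not part of the Packer download):

```shell
# Demonstrate how `sha256sum -c` works using a throwaway file
echo 'hello' > demo.txt
sha256sum demo.txt > demo.SHA256SUMS   # record the file's hash and name
sha256sum -c demo.SHA256SUMS           # re-hash the file and compare
```

The last command prints `demo.txt: OK`, the same style of confirmation shown above for the Packer archive.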
              
        

      Configure the Packer Environment

      1. Unzip packer_*_linux_amd64.zip to your ~/packer directory:

        unzip packer_*_linux_amd64.zip
        

        Note

        If you receive an error that indicates unzip is missing from your system, install the unzip package and try again.

      2. Edit your ~/.profile shell configuration file to include the ~/packer directory in your PATH. Then, reload the Bash profile:

        echo 'export PATH="$PATH:$HOME/packer"' >> ~/.profile
        source ~/.profile
        

        Note

        If you use a different shell, your shell configuration may have a different file name.

      3. Verify Packer can run by calling it with no options or arguments:

        packer
        
          
        Usage: packer [--version] [--help] <command> [<args>]
        
        Available commands are:
            build       build image(s) from template
            console     creates a console for testing variable interpolation
            fix         fixes templates from old versions of packer
            inspect     see components of a template
            validate    check that a template is valid
            version     Prints the Packer version
            
        

      Use the Linode Packer Builder

      Now that Packer is installed on your local system, you can create a Packer template. A template is a JSON formatted file that contains the configurations needed to build a machine image.

      In this section you will create a template that uses the Linode Packer builder to create an image using Debian 9 as its base distribution. The template will also configure your system image with a new limited user account and a public SSH key from your local computer. The additional system configuration will be completed using Packer’s Ansible provisioner and an example Ansible Playbook. A Packer provisioner is a built-in or third-party integration that further configures a machine instance during the boot process, prior to taking the machine’s snapshot.

      Note

      The steps in this section will incur charges related to deploying a 1GB Nanode. The Linode will only be deployed for the duration of the time needed to create and snapshot your image and will then be deleted. See our Billing and Payments guide for details about hourly billing.

      Access Linode and Private Images

      The Linode Packer Builder requires a Linode Image ID to deploy a disk from. This guide’s example will use the image linode/debian9, but you can replace it with any other image you prefer. To list the official Linode images and your account’s private images, you can curl the Linode API:

      curl -H "Authorization: Bearer $TOKEN" \
          https://api.linode.com/v4/images
      

      Create Your Template

      Note

      The Packer builder does not manage images. Once it creates an image, it will be stored on your Linode account and can be accessed and used as needed from the Linode Cloud Manager, via Linode’s API v4, or using third-party tools like Terraform. Linode Images are limited to 2GB per Image and 3 Images per account.

      Create a file named example.json with the following content:

      ~/packer/example.json
      {
        "variables": {
          "my_linode_token": ""
        },
        "builders": [{
          "type": "linode",
          "image": "linode/debian9",
          "linode_token": "{{user `my_linode_token` }}",
          "region": "us-east",
          "instance_type": "g6-nanode-1",
          "instance_label": "temp-linode-packer",
          "image_label": "my-private-packer-image",
          "image_description": "My private packer image",
          "ssh_username": "root"
        }],
        "provisioners": [
          {
            "type": "ansible",
            "playbook_file": "./limited_user_account.yml"
          }
        ]
      }

      If you would rather not use a provisioner in your Packer template, you can use the example file below:

      ~/packer/example.json
      {
        "variables": {
          "my_linode_token": ""
        },
        "builders": [{
          "type": "linode",
          "image": "linode/debian9",
          "linode_token": "{{user `my_linode_token` }}",
          "region": "us-east",
          "instance_type": "g6-nanode-1",
          "instance_label": "temp-linode-packer",
          "image_label": "my-private-packer-image",
          "image_description": "My private packer image",
          "ssh_username": "root"
        }]
      }
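      Because the template is plain JSON, a stray comma or quote is easy to introduce. Before handing the file to Packer, you can sanity-check the syntax with Python’s bundled json.tool (this assumes python3 is installed; it checks JSON syntax only, not Packer’s schema):

```shell
# Quick JSON syntax check of the template; exits non-zero on a parse error
python3 -m json.tool example.json > /dev/null && echo 'example.json: valid JSON'
```

The packer validate command used later in this guide performs a fuller check, including the builder’s fields.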

      There are three sections to the Packer template file:

      • variables: This section allows you to further configure your template with command-line variables, environment variables, Vault, or variable files. In the section that follows, you will use a command line variable to pass your Linode account’s API token to the template.
      • builders: The builder section contains the definition for the machine image that will be created. In the example template, you use a single builder: the Linode builder. The builder uses the linode/debian9 image as its base and will assign the image a label of my-private-packer-image. It will deploy a 1GB Nanode, take a snapshot, and create a reusable Linode Image. Refer to Packer’s official documentation for a complete Linode Builder configuration reference.

        Note

        You can use multiple builders in a single template file. This process is known as a parallel build which allows you to create multiple images for multiple platforms from a single template.
      • provisioners: (optional) With a provisioner you can further configure your system by completing common system administration tasks, like adding users, installing and configuring software, and more. The example uses Packer’s built-in Ansible provisioner and executes the tasks defined in the local limited_user_account.yml playbook. This means your Linode image will also contain anything executed by the playbook on your Nanode. Packer supports several other provisioners, like Chef, Salt, and shell scripts.
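      If you prefer not to depend on Ansible, the same provisioners list can instead run Packer’s shell provisioner. The snippet below is a hypothetical sketch; the inline commands are illustrative and are not part of this guide’s playbook:

```json
"provisioners": [
  {
    "type": "shell",
    "inline": [
      "apt-get update",
      "apt-get install -y nginx"
    ]
  }
]
```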

      Create your Ansible Playbook (Optional)

      In the previous section you created a Packer template that uses an Ansible Playbook to add system configurations to your image. Prior to building your image, you will need to create the referenced limited_user_account.yml Playbook, which you will do in this section. If you chose not to use the Ansible provisioner, you can skip this section.

      1. The example Ansible Playbook makes use of Ansible’s user module. This module requires that a hashed value be used for its password parameter. Use the mkpasswd utility to generate a hashed password that you will use in the next step.

        mkpasswd --method=sha-512
        

        You will be prompted to enter a plain-text password and the utility will return a hash of the password.

          
        Password:
        $6$aISRzCJH4$nNJ/9ywhnH/raHuVCRu/unE7lX.L9ragpWgvD0rknlkbAw0pkLAwkZqlY.ahjj/AAIKo071LUB0BONl.YMsbb0
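        If mkpasswd is not available on your system (on Debian and Ubuntu it ships with the whois package), a SHA-512 crypt hash can also be generated with OpenSSL 1.1.1 or later. This is an alternative sketch, not the guide’s own method; the salt and password below are illustrative placeholders:

```shell
# Generate a SHA-512 crypt hash non-interactively (assumes OpenSSL >= 1.1.1)
# "examplesalt" and 'MyPlainTextPassword' are placeholders; substitute your own
openssl passwd -6 -salt examplesalt 'MyPlainTextPassword'
```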
                  
        
      2. In your packer directory, create a file with the following content. Ensure you replace the value of the password parameter with your own hashed password:

        ~/packer/limited_user_account.yml
        ---
        - hosts: all
          remote_user: root
          vars:
            NORMAL_USER_NAME: 'my-user-name'
          tasks:
            - name: "Create a secondary, non-root user"
              user: name={{ NORMAL_USER_NAME }}
                    password='$6$eebkauNy4h$peyyL1MTN7F4JKG44R27TTmbXlloDUsjPir/ATJue2bL0u8FBk0VuUvrpsMq6rSSOCm8VSip0QHN8bDaD/M/k/'
                    shell=/bin/bash
            - name: Add remote authorized key to allow future passwordless logins
              authorized_key: user={{ NORMAL_USER_NAME }} key="{{ lookup('file', '~/.ssh/id_rsa.pub') }}"
            - name: Add normal user to sudoers
              lineinfile: dest=/etc/sudoers
                          regexp="{{ NORMAL_USER_NAME }} ALL"
                          line="{{ NORMAL_USER_NAME }}"
        • This Playbook creates a limited user account named my-user-name. You can replace my-user-name, the value of the NORMAL_USER_NAME variable, with any system username you’d like to create. The Playbook then adds a public SSH key stored on your local computer; if the public key you’d like to use is stored in a location other than ~/.ssh/id_rsa.pub, you can update that value. Finally, the Playbook adds the new system user to the sudoers file.

      Create your Linode Image

      With your template file complete, along with the optional Ansible Playbook, you can validate the template and then build your image.

      1. Validate the template before building your image. Replace the value of my_linode_token with your own Linode API v4 token.

        packer validate -var 'my_linode_token=myL0ngT0kenStr1ng' example.json
        

        If successful, you will see the following:

          
        Template validated successfully.
              
        

        Note

        To learn how to securely store and use your API v4 token, see the Vault Variables section of Packer’s documentation.
      2. You can now build your final image. Replace the value of my_linode_token with your own Linode API v4 token. This process may take a few minutes to complete.

        packer build -var 'my_linode_token=myL0ngT0kenStr1ng' example.json
        
          
        linode output will be in this color.
        
        ==> linode: Running builder ...
        ==> linode: Creating temporary SSH key for instance...
        ==> linode: Creating Linode...
        ==> linode: Using ssh communicator to connect: 192.0.2.0
        ==> linode: Waiting for SSH to become available...
        ==> linode: Connected to SSH!
        ==> linode: Provisioning with Ansible...
        ==> linode: Executing Ansible: ansible-playbook --extra-vars packer_build_name=linode packer_builder_type=linode -o IdentitiesOnly=yes -i /tmp/packer-provisioner-ansible136766862 /home/user/packer/limited_user_account.yml -e ansible_ssh_private_key_file=/tmp/ansible-key642969643
            linode:
            linode: PLAY [all] *********************************************************************
            linode:
            linode: TASK [Gathering Facts] *********************************************************
            linode: ok: [default]
            linode:
            linode: TASK [Create a secondary, non-root user] ***************************************
            linode: changed: [default]
            linode:
            linode: TASK [Add remote authorized key to allow future passwordless logins] ***********
            linode: changed: [default]
            linode:
            linode: TASK [Add normal user to sudoers] **********************************************
            linode: changed: [default]
            linode:
            linode: PLAY RECAP *********************************************************************
            linode: default                    : ok=4    changed=3    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
            linode:
        ==> linode: Shutting down Linode...
        ==> linode: Creating image...
        Build 'linode' finished.
        
        ==> Builds finished. The artifacts of successful builds are:
        --> linode: Linode image: my-private-packer-image (private/7550080)
              
        

        The output will provide you with your new private image’s ID. In the example output the image ID is private/7550080. This image is now available on your Linode account to use as you desire. As an example, in the next section you will use this newly created image to deploy a new 1 GB Nanode using Linode’s API v4.

      Deploy a Linode with your New Image

      1. Issue the following curl command to deploy a 1GB Nanode in the us-east data center from your new Image. Ensure you replace private/7550080 with your own Linode Image’s ID, and assign your own root_pass and label.

        curl -H "Content-Type: application/json" \
          -H "Authorization: Bearer $TOKEN" \
          -X POST -d '{
            "image": "private/7550080",
            "root_pass": "aC0mpl3xP@ssword",
            "booted": true,
            "label": "my-example-label",
            "type": "g6-nanode-1",
            "region": "us-east"
          }' \
          https://api.linode.com/v4/linode/instances
        

        You should receive a similar response from the API:

          
        {"id": 17882092, "created": "2019-10-23T22:47:47", "group": "", "specs": {"gpus": 0, "transfer": 1000, "memory": 1024, "disk": 25600, "vcpus": 1}, "label": "my-example-linode", "updated": "2019-10-23T22:47:47", "watchdog_enabled": true, "image": null, "ipv4": ["192.0.2.0"], "ipv6": "2600:3c03::f03c:92ff:fe98:6d9a/64", "status": "provisioning", "tags": [], "region": "us-east", "backups": {"enabled": false, "schedule": {"window": null, "day": null}}, "hypervisor": "kvm", "type": "g6-nanode-1", "alerts": {"cpu": 90, "network_in": 10, "transfer_quota": 80, "io": 10000, "network_out": 10}}%
            
        
      2. If you used the Ansible provisioner, once your Linode is deployed, you should be able to SSH into your newly deployed Linode using the limited user account you created with the Ansible playbook and your public SSH key. Your Linode’s IPv4 address will be available in the API response returned after creating the Linode.

        ssh my-user-name@192.0.2.0
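        If you saved the API response to a file, the address can be pulled out with a short Python one-liner rather than reading the JSON by eye (response.json is a hypothetical filename for the saved response):

```shell
# Extract the first IPv4 address from a saved create-Linode API response
python3 -c "import json; print(json.load(open('response.json'))['ipv4'][0])"
```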
        

      Next Steps

      If you’d like to learn how to use Terraform to deploy Linodes using your Packer created image, you can follow our Terraform guides to get started:


      This guide is published under a CC BY-ND 4.0 license.


