      Deploy NodeBalancers with the Linode Cloud Controller Manager


      Updated by Linode. Written by Linode Community.

      The Linode Cloud Controller Manager (CCM) allows Kubernetes to deploy Linode NodeBalancers whenever a Service of the “LoadBalancer” type is created. This provides the Kubernetes cluster with a reliable way of exposing resources to the public internet. The CCM handles the creation and deletion of the NodeBalancer and correctly identifies the resources, and their networking configuration, that the NodeBalancer will service.

      This guide will explain how to:

      • Create a service with the type “LoadBalancer.”
      • Use annotations to control the functionality of the NodeBalancer.
      • Use the NodeBalancer to terminate TLS encryption.

      Caution

      Using the Linode Cloud Controller Manager to create NodeBalancers will create billable resources on your Linode account. A NodeBalancer costs $10 a month. Be sure to follow the instructions at the end of the guide if you would like to delete these resources from your account.

      Before You Begin

      You should have a working knowledge of Kubernetes and familiarity with the kubectl command line tool before attempting the instructions found in this guide. For more information about Kubernetes, consult our Kubernetes Beginner’s Guide and our Getting Started with Kubernetes guide.

      When using the CCM for the first time, it’s highly suggested that you create a new Kubernetes cluster, as there are a number of issues that prevent the CCM from running on Nodes that are in the “Ready” state. For a completely automated install, you can use the Linode CLI’s k8s-alpha command line tool. The Linode CLI’s k8s-alpha command line tool utilizes Terraform to fully bootstrap a Kubernetes cluster on Linode. It includes the Linode Container Storage Interface (CSI) Driver plugin, the Linode CCM plugin, and the ExternalDNS plugin. For more information on creating a Kubernetes cluster with the Linode CLI, review our How to Deploy Kubernetes on Linode with the k8s-alpha CLI guide.
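
      As a rough sketch, creating a three-node cluster with the k8s-alpha plugin might look like the command below. The flag names and values shown here are assumptions for illustration; run linode-cli k8s-alpha create --help to see the options available in your version of the CLI.

      linode-cli k8s-alpha create example-cluster --node-type g6-standard-2 --nodes 3 --region us-east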

      Note

      To manually add the Linode CCM to your cluster, you must start kubelet with the --cloud-provider=external flag. kube-apiserver and kube-controller-manager must NOT supply the --cloud-provider flag. For more information, visit the upstream Cloud Controller documentation.
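
      How you supply this flag depends on how kubelet is managed on your nodes. On a kubeadm-provisioned node, for example, a common approach is to set it in the KUBELET_EXTRA_ARGS environment variable; the file path below follows the kubeadm packaging convention and is an assumption about your setup:

      # /etc/default/kubelet (Debian/Ubuntu) or /etc/sysconfig/kubelet (RHEL/CentOS)
      KUBELET_EXTRA_ARGS=--cloud-provider=external

      After editing the file, restart kubelet (for example, with systemctl restart kubelet) so the flag takes effect.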

      If you’d like to add the CCM to a cluster by hand, and you are using macOS, you can use the generate-manifest.sh file in the deploy folder of the CCM repository to generate a CCM manifest file that you can later apply to your cluster. Use the following command:

      ./generate-manifest.sh $LINODE_API_TOKEN us-east
      

      Be sure to replace $LINODE_API_TOKEN with a valid Linode API token, and replace us-east with the region of your choosing.

      To view a list of regions, you can use the Linode CLI, or you can view the Regions API endpoint.
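
      For example, with the Linode CLI installed and configured, the following command prints the available regions and their IDs:

      linode-cli regions list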

      If you are not using macOS, you can copy the ccm-linode-template.yaml file and change the values of the data.apiToken and data.region fields manually.
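
      As a sketch, the edited fields might look like the snippet below. The exact layout depends on your copy of ccm-linode-template.yaml, and if the token and region are stored in a Kubernetes Secret, the values must be base64-encoded (for example, echo -n "us-east" | base64):

      data:
        apiToken: "<base64-encoded Linode API token>"
        region: "<base64-encoded region ID>"

      Once the fields are filled in, apply the manifest to your cluster:

      kubectl apply -f ccm-linode-template.yaml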

      Using the CCM

      To use the CCM, you must have a collection of Pods that need to be load balanced, usually from a Deployment. For this example, you will create a Deployment that deploys three NGINX Pods, and then create a Service to expose those Pods to the internet using the Linode CCM.

      1. Create a Deployment manifest describing the desired state of the three replica NGINX containers:

        nginx-deployment.yaml
        apiVersion: apps/v1
        kind: Deployment
        metadata:
          name: nginx-deployment
          labels:
            app: nginx
        spec:
          replicas: 3
          selector:
            matchLabels:
              app: nginx
          template:
            metadata:
              labels:
                app: nginx
            spec:
              containers:
              - name: nginx
                image: nginx
                ports:
                - containerPort: 80
      2. Use the create command to apply the manifest:

        kubectl create -f nginx-deployment.yaml
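
        You can optionally confirm that the Deployment and its three Pods are running before continuing. The label selector below matches the app: nginx label from the manifest:

        kubectl get deployment nginx-deployment
        kubectl get pods -l app=nginx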
        
      3. Create a Service for the Deployment:

        nginx-service.yaml
        apiVersion: v1
        kind: Service
        metadata:
          name: nginx-service
          annotations:
            service.beta.kubernetes.io/linode-loadbalancer-throttle: "4"
          labels:
            app: nginx
        spec:
          type: LoadBalancer
          ports:
          - name: http
            port: 80
            protocol: TCP
            targetPort: 80
          selector:
            app: nginx
          sessionAffinity: None

        The above Service manifest includes a few key concepts.

        • The first is the spec.type of LoadBalancer. This type tells the Linode CCM to create a Linode NodeBalancer, which provides the Deployment it services with a public-facing IP address that can be used to access the NGINX Pods.
        • There is additional information being passed to the CCM in the form of metadata annotations (service.beta.kubernetes.io/linode-loadbalancer-throttle in the example above), which are discussed in the next section.
      4. Use the create command to create the Service, and in turn, the NodeBalancer:

        kubectl create -f nginx-service.yaml
        

      You can log in to the Linode Cloud Manager to view your newly created NodeBalancer.
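
      You can also retrieve the NodeBalancer’s public IP address from the command line by checking the Service’s EXTERNAL-IP column:

      kubectl get service nginx-service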

      Annotations

      There are a number of settings, called annotations, that you can use to further customize the functionality of your NodeBalancer. Each annotation should be included in the annotations section of the Service manifest file’s metadata, and all of the annotations are prefixed with service.beta.kubernetes.io/linode-loadbalancer-.

      Each annotation suffix below is listed with its accepted values, its default value, and a description.

      • throttle: 0-20 (0 disables the throttle); default 20. Client Connection Throttle. This limits the number of new connections per second from the same client IP.
      • protocol: tcp, http, or https; default tcp. Specifies the protocol for the NodeBalancer.
      • tls: a JSON array (formatted as a string) that specifies which ports use TLS and their corresponding Secrets, for example [ { "tls-secret-name": "prod-app-tls", "port": 443 } ]; no default. The Secret type should be kubernetes.io/tls. For more information, see the TLS Encryption section.
      • check-type: none, connection, http, or http_body; no default. The type of health check to perform on Nodes to ensure that they are serving requests. connection checks for a valid TCP handshake, http checks for a 2xx or 3xx response code, and http_body checks for a specific string within the response body of the health check URL.
      • check-path: a string; no default. The URL path that the NodeBalancer will use to check the health of the back-end Nodes.
      • check-body: a string; no default. The text that must be present in the body of the page used for health checks. For use with a check-type of http_body.
      • check-interval: an integer; no default. The duration, in seconds, between health checks.
      • check-timeout: an integer between 1 and 30; no default. Duration, in seconds, to wait for a health check to succeed before it is considered a failure.
      • check-attempts: an integer between 1 and 30; no default. Number of health checks to perform before removing a back-end Node from service.
      • check-passive: a boolean; default false. When true, 5xx status codes will cause the health check to fail.

      To learn more about checks, please see our reference guide to NodeBalancer health checks.
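
      As an illustration of how these fit together, the following annotations block configures an HTTP health check against a /healthz path every five seconds. The annotation names come from the table above, while the specific path and values are hypothetical and should be adjusted for your application:

      metadata:
        annotations:
          service.beta.kubernetes.io/linode-loadbalancer-check-type: "http"
          service.beta.kubernetes.io/linode-loadbalancer-check-path: "/healthz"
          service.beta.kubernetes.io/linode-loadbalancer-check-interval: "5"
          service.beta.kubernetes.io/linode-loadbalancer-check-attempts: "3"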

      TLS Encryption

      This section will describe how to set up TLS termination for a Service so that the Service can be accessed over https.

      Generating a TLS type Secret

      Kubernetes allows you to store secret information in a Secret object for use within your cluster. This is useful for storing things like passwords and API tokens. In the context of the Linode CCM, Secrets are useful for storing Transport Layer Security (TLS) certificates and keys. The linode-loadbalancer-tls annotation requires TLS certificates and keys to be stored as Kubernetes Secrets with the type of tls. Follow the next steps to create a valid tls type Secret:

      1. Generate a TLS key and certificate using a TLS toolkit like OpenSSL. Be sure to change the CN and O values to those of your own website domain.

        openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout key.pem -out cert.crt -subj "/CN=mywebsite.com/O=mywebsite.com"
        
      2. To create the secret, you can issue the create secret tls command, being sure to replace $SECRET_NAME with the name you’d like to give to your secret. This will be how you reference the secret in your Service manifest.

        kubectl create secret tls $SECRET_NAME --key key.pem --cert cert.crt
        
      3. You can check to make sure your Secret has been successfully stored by using describe:

        kubectl describe secret $SECRET_NAME
        

        You should see output like the following:

          
        kubectl describe secret my-secret
        Name:         my-secret
        Namespace:    default
        Labels:       <none>
        Annotations:  <none>

        Type:  kubernetes.io/tls

        Data
        ====
        tls.crt:  1164 bytes
        tls.key:  1704 bytes
        
        

        If your key is not formatted correctly you’ll receive an error stating that there is no PEM formatted data within the key file.

      Defining TLS within a Service

      In order to use https you’ll need to instruct the Service to use the correct port through the proper annotations. Take the following code snippet as an example:

      nginx-service.yaml
      ...
      metadata:
        annotations:
          service.beta.kubernetes.io/linode-loadbalancer-protocol: https
          service.beta.kubernetes.io/linode-loadbalancer-tls: '[ { "tls-secret-name": "my-secret",
            "port": 443 } ]'
      ...

      The linode-loadbalancer-protocol annotation identifies the https protocol. Then, the linode-loadbalancer-tls annotation defines which Secret and port to use for serving https traffic. If you have multiple Secrets and ports for different environments (testing, staging, etc.), you can define more than one secret and port pair:

      nginx-service-two-environments.yaml
      ...
          service.beta.kubernetes.io/linode-loadbalancer-tls: |
            [ { "tls-secret-name": "my-secret", "port": 443 }, { "tls-secret-name": "my-secret-staging", "port": 8443 } ]
      ...

      Next, you’ll need to set up your Service to expose the https port. The whole example might look like the following:

      nginx-service.yaml
      apiVersion: v1
      kind: Service
      metadata:
        annotations:
          service.beta.kubernetes.io/linode-loadbalancer-protocol: https
          service.beta.kubernetes.io/linode-loadbalancer-throttle: "4"
          service.beta.kubernetes.io/linode-loadbalancer-tls: '[ { "tls-secret-name": "my-secret",
            "port": 443 } ]'
        labels:
          app: nginx
        name: nginx-service
      spec:
        ports:
        - name: https
          port: 443
          protocol: TCP
          targetPort: 80
        selector:
          app: nginx
        type: LoadBalancer

      Note that here the NodeBalancer created by the Service is terminating the TLS encryption and proxying that to port 80 on the NGINX Pod. If you had a Pod that listened on port 443, you would set the targetPort to that value.
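
      For example, if the backend listened on port 443, the ports section of the Service might instead look like this sketch:

      ports:
      - name: https
        port: 443
        protocol: TCP
        targetPort: 443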

      Session Affinity

      kube-proxy will always attempt to proxy traffic to a random backend Pod. To ensure that traffic is directed to the same Pod, you can use the sessionAffinity mechanism. When set to ClientIP, sessionAffinity will ensure that all traffic from the same IP address is directed to the same Pod:

      session-affinity.yaml
      apiVersion: v1
      kind: Service
      metadata:
        name: nginx-service
        labels:
          app: nginx
      spec:
        type: LoadBalancer
        selector:
          app: nginx
        sessionAffinity: ClientIP
        sessionAffinityConfig:
          clientIP:
            timeoutSeconds: 100

      You can set the timeout for the session by using the spec.sessionAffinityConfig.clientIP.timeoutSeconds field.

      Troubleshooting

      If you are having problems with the CCM, such as the NodeBalancer not being created, you can check the CCM’s error logs. First, you’ll need to find the name of the CCM Pod in the kube-system namespace:

      kubectl get pods -n kube-system
      

      The Pod will be named ccm-linode- with five random characters at the end, like ccm-linode-jrvj2. Once you have the Pod name, you can view its logs. The --tail=n flag is used to return the last n lines, where n is the number of your choosing. The below example returns the last 100 lines:

      kubectl logs ccm-linode-jrvj2 -n kube-system --tail=100
      

      Note

      Currently the CCM only supports https ports within a manifest’s spec when the linode-loadbalancer-protocol is set to https. For regular http traffic, you’ll need to create an additional Service and NodeBalancer. For example, if you had the following in the Service manifest:

      unsupported-nginx-service.yaml
      ...
      spec:
        ports:
        - name: https
          port: 443
          protocol: TCP
          targetPort: 80
        - name: http
          port: 80
          protocol: TCP
          targetPort: 80
      ...

      The NodeBalancer would not be created and you would find an error similar to the following in your logs:

      ERROR: logging before flag.Parse: E0708 16:57:19.999318       1 service_controller.go:219] error processing service default/nginx-service (will retry): failed to ensure load balancer for service default/nginx-service: [400] [configs[0].protocol] The SSL private key and SSL certificate must be provided when using 'https'
      ERROR: logging before flag.Parse: I0708 16:57:19.999466       1 event.go:221] Event(v1.ObjectReference{Kind:"Service", Namespace:"default", Name:"nginx-service", UID:"5d1afc22-a1a1-11e9-ad5d-f23c919aa99b", APIVersion:"v1", ResourceVersion:"1248179", FieldPath:""}): type: 'Warning' reason: 'CreatingLoadBalancerFailed' Error creating load balancer (will retry): failed to ensure load balancer for service default/nginx-service: [400] [configs[0].protocol] The SSL private key and SSL certificate must be provided when using 'https'
      

      Removing the http port would allow you to create the NodeBalancer.

      Delete a NodeBalancer

      To delete a NodeBalancer and the Service that it represents, you can use the Service manifest file you used to create the NodeBalancer. Simply use the delete command and supply your file name with the -f flag:

      kubectl delete -f nginx-service.yaml
      

      Similarly, you can delete the Service by name:

      kubectl delete service nginx-service
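
      If you would also like to remove the example NGINX Deployment created earlier in this guide, delete it the same way:

      kubectl delete -f nginx-deployment.yaml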
      

      Updating the CCM

      The easiest way to update the Linode CCM is to edit the DaemonSet that creates the Linode CCM Pod. To do so, you can run the edit command.

      kubectl edit ds -n kube-system ccm-linode
      

      The CCM DaemonSet manifest will appear in vim. Press i to enter insert mode. Navigate to the image field under spec.template.spec.containers and change its value to the desired version tag. For instance, if you had the following image:

      image: linode/linode-cloud-controller-manager:v0.2.2
      

      You could update the image to v0.2.3 by changing the image tag:

      image: linode/linode-cloud-controller-manager:v0.2.3
      

      For a complete list of CCM version tags, visit the CCM DockerHub page.

      Caution

      The CCM DaemonSet manifest may list latest as the image version tag. The latest tag is not guaranteed to point to the most recent release. To make sure you are running the newest version, check the CCM DockerHub page and use the most recent release tag.

      Press escape to exit insert mode, then type :wq and press enter to save your changes. A new Pod will be created with the new image, and the old Pod will be deleted.
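
      Alternatively, you can update the image non-interactively with kubectl set image. The container name is assumed here to be ccm-linode; if your DaemonSet uses a different container name, adjust the command accordingly (kubectl get ds ccm-linode -n kube-system -o yaml will show it):

      kubectl set image daemonset/ccm-linode ccm-linode=linode/linode-cloud-controller-manager:v0.2.3 -n kube-system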

      Next Steps

      To further take advantage of Linode products through Kubernetes, check out our guide on how to use the Linode Container Storage Interface (CSI), which allows you to create persistent volumes backed by Linode Block Storage.


      This guide is published under a CC BY-ND 4.0 license.




      Add CAA Records in the Linode Cloud Manager


      Updated by Linode

      Written by Linode

      Certification Authority Authorization (CAA) is a type of DNS record that allows the owner of a domain to specify which certificate authority (or authorities) are allowed to issue SSL/TLS certificates for their domain(s). This quick answer shows you how to set up CAA records on your Linode.

      Add a Single CAA Record

      1. Log in to the Linode Cloud Manager.

      2. Select the Domains link in the sidebar.

      3. Select the domain you want to add the record to, or add a domain if you don’t already have one listed.

      4. Under the CAA Record section, select Add a CAA record. A form with the following fields will appear:

        Name: The subdomain you want the CAA record to cover. To apply it to your entire website (for example: example.com), leave this field blank. To limit the record’s application to a subdomain on your site (for example: subdomain.example.com), enter the subdomain’s name into the form field (for example: subdomain).

        Tag:

        • issue – Authorize the certificate authority entered in the Value field further below to issue TLS certificates for your site.

        • issuewild – Same as issue, except that it authorizes the certificate authority to issue wildcard certificates for your site.

        • iodef – URL where your CA can report security policy violations to you concerning certificate issue requests.

        Value: If the issue or issuewild tag was selected above, then the Value field takes the domain of your certificate issuer (for example: letsencrypt.org). If the iodef tag was selected, the Value field takes a contact or submission URL (http or mailto).

        TTL (Time to Live): Time in seconds that your new CAA record will be cached by Linode’s name servers before being refreshed. The Default selection’s TTL is 300 seconds, which is fine for most cases. You can use the dig command to view the remaining time your DNS records will be cached until refreshed. Replace example.com with your site’s domain or subdomain in the command below:

        root@debian:~# dig +nocmd +noall +answer example.com
        example.com.     167 IN  A   203.0.113.1
        
      5. Click the Save button when finished. The CAA record should be fully propagated within the TTL duration.

      Add Multiple CAA Records

      Multiple CAA records must be added individually. If your site example.com was issued a TLS certificate by Let’s Encrypt, but your subdomain store.example.com uses a Symantec certificate, you would need two different CAA records. A reporting URL for the iodef tag would also need its own record. Those three would look something like this:

      [Image: Multiple CAA records in the Linode Cloud Manager]
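
      In zone-file notation, those three records might look something like the sketch below. The issuer domains and the reporting address are placeholders; use the values appropriate for your own certificate authorities and contact point:

      example.com.        300 IN CAA 0 issue "letsencrypt.org"
      store.example.com.  300 IN CAA 0 issue "symantec.com"
      example.com.        300 IN CAA 0 iodef "mailto:security@example.com"

      After the records propagate, you can verify them with dig (older versions of dig may require querying TYPE257 instead of CAA):

      dig +short CAA example.com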

      More Information

      You may wish to consult the following resources for additional information on this topic. While these are provided in the hope that they will be useful, please note that we cannot vouch for the accuracy or timeliness of externally hosted materials.


      This guide is published under a CC BY-ND 4.0 license.




      An Introduction to Helm, the Package Manager for Kubernetes


      Introduction

      Deploying applications to Kubernetes – the powerful and popular container-orchestration system – can be complex. Setting up a single application can involve creating multiple interdependent Kubernetes resources – such as pods, services, deployments, and replicasets – each requiring you to write a detailed YAML manifest file.

      Helm is a package manager for Kubernetes that allows developers and operators to more easily package, configure, and deploy applications and services onto Kubernetes clusters.

      Helm is now an official Kubernetes project and is part of the Cloud Native Computing Foundation, a non-profit that supports open source projects in and around the Kubernetes ecosystem.

      In this article we will give an overview of Helm and the various abstractions it uses to simplify deploying applications to Kubernetes. If you are new to Kubernetes, it may be helpful to read An Introduction to Kubernetes first to familiarize yourself with the basic concepts.

      An Overview of Helm

      Nearly every programming language and operating system has its own package manager to help with the installation and maintenance of software. Helm provides the same basic feature set as many of the package managers you may already be familiar with, such as Debian’s apt or Python’s pip.

      Helm can:

      • Install software.
      • Automatically install software dependencies.
      • Upgrade software.
      • Configure software deployments.
      • Fetch software packages from repositories.

      Helm provides this functionality through the following components:

      • A command line tool, helm, which provides the user interface to all Helm functionality.
      • A companion server component, tiller, that runs on your Kubernetes cluster, listens for commands from helm, and handles the configuration and deployment of software releases on the cluster.
      • The Helm packaging format, called charts.
      • An official curated charts repository with prepackaged charts for popular open-source software projects.
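
      As a quick sketch of how these pieces fit together in Helm 2 (the helm and tiller architecture described in this article), a minimal workflow looks roughly like the following; the chart and release names are only illustrative:

      helm init                   # install tiller into the cluster that kubectl is configured for
      helm repo update            # refresh the local cache of chart repositories
      helm install stable/mysql   # deploy the stable MySQL chart as a new release
      helm list                   # show the releases deployed on the cluster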

      We’ll investigate the charts format in more detail next.

      Charts

      Helm packages are called charts, and they consist of a few YAML configuration files and some templates that are rendered into Kubernetes manifest files. Here is the basic directory structure of a chart:

      Example chart directory

      package-name/
        charts/
        templates/
        Chart.yaml
        LICENSE
        README.md
        requirements.yaml
        values.yaml
      

      These directories and files have the following functions:

      • charts/: Manually managed chart dependencies can be placed in this directory, though it is typically better to use requirements.yaml to dynamically link dependencies.
      • templates/: This directory contains template files that are combined with configuration values (from values.yaml and the command line) and rendered into Kubernetes manifests. The templates use the Go programming language’s template format.
      • Chart.yaml: A YAML file with metadata about the chart, such as chart name and version, maintainer information, a relevant website, and search keywords.
      • LICENSE: A plaintext license for the chart.
      • README.md: A readme file with information for users of the chart.
      • requirements.yaml: A YAML file that lists the chart’s dependencies.
      • values.yaml: A YAML file of default configuration values for the chart.

      The helm command can install a chart from a local directory, or from a .tar.gz packaged version of this directory structure. These packaged charts can also be automatically downloaded and installed from chart repositories or repos.
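
      For example, installing a chart from a local directory is as simple as pointing helm install at it:

      helm install ./package-name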

      We’ll look at chart repositories next.

      Chart Repositories

      A Helm chart repo is a simple HTTP site that serves an index.yaml file and .tar.gz packaged charts. The helm command has subcommands available to help package charts and create the required index.yaml file. These files can be served by any web server, object storage service, or a static site host such as GitHub Pages.
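
      As a sketch, packaging a chart and generating the index for a repository served at a placeholder URL might look like this:

      helm package ./package-name                            # writes package-name-<version>.tgz
      helm repo index . --url https://example.com/charts     # generates index.yaml for the packages in this directory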

      Helm comes preconfigured with a default chart repository, referred to as stable. This repo points to a Google Storage bucket at https://kubernetes-charts.storage.googleapis.com. The source for the stable repo can be found in the helm/charts Git repository on GitHub.

      Alternate repos can be added with the helm repo add command; many organizations and projects publish their own public chart repositories.
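
      For example, adding a third-party repository and installing a chart from it might look like the following; the repository name, URL, and chart name here are placeholders for whichever repo you use:

      helm repo add example-repo https://charts.example.com
      helm repo update
      helm install example-repo/example-chart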

      Whether you’re installing a chart you’ve developed locally, or one from a repo, you’ll need to configure it for your particular setup. We’ll look into configs next.

      Chart Configuration

      A chart usually comes with default configuration values in its values.yaml file. Some applications may be fully deployable with default values, but you’ll typically need to override some of the configuration to meet your needs.

      The values that are exposed for configuration are determined by the author of the chart. Some are used to configure Kubernetes primitives, and some may be passed through to the underlying container to configure the application itself.

      Here is a snippet of some example values:

      values.yaml

      service:
        type: ClusterIP
        port: 3306
      

      These are options to configure a Kubernetes Service resource. You can use helm inspect values chart-name to dump all of the available configuration values for a chart.

      These values can be overridden by writing your own YAML file and using it when running helm install, or by setting options individually on the command line with the --set flag. You only need to specify those values that you want to change from the defaults.
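
      For example, using the service values shown above, either of the following overrides only the port. The release name is illustrative, and stable/mysql is assumed here as the chart being installed:

      helm install --name my-database -f custom-values.yaml stable/mysql
      helm install --name my-database --set service.port=3307 stable/mysql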

      A Helm chart deployed with a particular configuration is called a release. We will talk about releases next.

      Releases

      During the installation of a chart, Helm combines the chart’s templates with the configuration specified by the user and the defaults in values.yaml. These are rendered into Kubernetes manifests that are then deployed via the Kubernetes API. This creates a release, a specific configuration and deployment of a particular chart.

      This concept of releases is important, because you may want to deploy the same application more than once on a cluster. For instance, you may need multiple MySQL servers with different configurations.

      You also will probably want to upgrade different instances of a chart individually. Perhaps one application is ready for an updated MySQL server but another is not. With Helm, you upgrade each release individually.

      You might upgrade a release because its chart has been updated, or because you want to update the release’s configuration. Either way, each upgrade will create a new revision of a release, and Helm will allow you to easily roll back to previous revisions in case there’s an issue.
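
      With the Helm 2 CLI, that workflow might look like the following sketch; the release name is illustrative:

      helm upgrade my-database stable/mysql --set service.port=3307   # creates a new revision of the release
      helm history my-database                                        # list the revisions of the release
      helm rollback my-database 1                                     # roll back to revision 1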

      Creating Charts

      If you can’t find an existing chart for the software you are deploying, you may want to create your own. Helm can output the scaffold of a chart directory with helm create chart-name. This will create a folder with the files and directories we discussed in the Charts section above.

      From there, you’ll want to fill out your chart’s metadata in Chart.yaml and put your Kubernetes manifest files into the templates directory. You’ll then need to extract relevant configuration variables out of your manifests and into values.yaml, then include them back into your manifest templates using the templating system.
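
      For instance, a minimal, hypothetical Service template in templates/ might reference the values shown earlier like this:

      # templates/service.yaml
      apiVersion: v1
      kind: Service
      metadata:
        name: {{ .Release.Name }}-service
      spec:
        type: {{ .Values.service.type }}
        ports:
        - port: {{ .Values.service.port }}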

      The helm command has many subcommands available to help you test, package, and serve your charts. For more information, please read the official Helm documentation on developing charts.
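
      For example, the following Helm 2 commands are commonly used while developing a chart:

      helm lint ./package-name                        # check the chart for problems
      helm install --dry-run --debug ./package-name   # render the templates locally without deploying
      helm serve                                      # serve locally packaged charts as a temporary repository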

      Conclusion

      In this article we reviewed Helm, the package manager for Kubernetes. We overviewed the Helm architecture and the individual helm and tiller components, detailed the Helm charts format, and looked at chart repositories. We also looked into how to configure a Helm chart and how configurations and charts are combined and deployed as releases on Kubernetes clusters. Finally, we touched on the basics of creating a chart when a suitable chart isn’t already available.

      For more information about Helm, take a look at the official Helm documentation. To find official charts for Helm, check out the official helm/charts Git repository on GitHub.


