
      How to Deploy a Linode Kubernetes Engine Cluster Using Terraform


      Updated by Linode. Contributed by Linode.

      What is the Linode Kubernetes Engine (LKE)?

      The Linode Kubernetes Engine (LKE) is a fully-managed container orchestration engine for deploying and managing containerized applications and workloads. LKE combines Linode’s ease of use and simple pricing with the infrastructure efficiency of Kubernetes. When you deploy an LKE cluster, you receive a Kubernetes Master at no additional cost; you only pay for the Linodes (worker nodes), NodeBalancers (load balancers), and Block Storage Volumes. Your LKE cluster’s Master node runs the Kubernetes control plane processes – including the API, scheduler, and resource controllers.

      In this Guide

      This guide will walk you through the steps needed to deploy a Kubernetes cluster using LKE and the popular infrastructure as code (IaC) tool, Terraform. Throughout the guide you will:

      • Prepare your local environment by installing Terraform and kubectl.
      • Create Terraform configuration files that describe your cluster.
      • Deploy your Kubernetes cluster using Terraform.
      • Connect to your cluster with kubectl.
      • Optionally, destroy the cluster when you no longer need it.

      Before you Begin

      1. Create a personal access token for Linode’s API v4. Follow the Getting Started with the Linode API guide to get a token. You will need a token to be able to create Linode resources using Terraform.

        Note

        Ensure that your token has, at minimum, Read/Write permissions for Linodes, Kubernetes, NodeBalancers, and Volumes.

      2. Review A Beginner’s Guide to Terraform to familiarize yourself with Terraform concepts if you have not used the tool before. This guide assumes familiarity with Terraform and its native HCL syntax.

      Prepare your Local Environment

      Install Terraform

      Install Terraform on your computer by following the Install Terraform section of our Use Terraform to Provision Linode Environments guide.

      Install kubectl

      macOS:

      Install via Homebrew:

      brew install kubernetes-cli
      

      If you don’t have Homebrew installed, visit the Homebrew home page for instructions. Alternatively, you can manually install the binary; visit the Kubernetes documentation for instructions.

      Linux:

      1. Download the latest kubectl release:

        curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
        
      2. Make the downloaded file executable:

        chmod +x ./kubectl
        
      3. Move the command into your PATH:

        sudo mv ./kubectl /usr/local/bin/kubectl
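
      4. Optionally, verify that the client installed correctly by checking its version:

        kubectl version --client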
        

      Windows:

      Visit the Kubernetes documentation for a link to the most recent Windows release.

      Create your Terraform Configuration Files

      In this section, you will create Terraform configuration files that define the resources needed to create a Kubernetes cluster. You will create a main.tf file to store your resource declarations, a variables.tf file to store your input variable definitions, and a terraform.tfvars file to assign values to your input variables. Setting up your Terraform project in this way will allow you to reuse your configuration files to deploy more Kubernetes clusters, if desired.
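
      If you follow the steps in this section, your project directory will end up with a layout like the following sketch:

        ~/terraform/lke-cluster/
        ├── main.tf
        ├── variables.tf
        └── terraform.tfvars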

      Create your Resource Configuration File

      Terraform defines the elements of your Linode infrastructure inside of configuration files. Terraform refers to these infrastructure elements as resources. Once you declare your Terraform configuration, you then apply it, which results in the creation of those resources on the Linode platform. The Linode Provider for Terraform exposes the Linode resources you will need to deploy a Kubernetes cluster using LKE.

      1. Navigate to the directory where you installed Terraform. Replace ~/terraform with the location of your installation.

        cd ~/terraform
        
      2. Create a new directory to store your LKE cluster’s Terraform configurations. Replace lke-cluster with your preferred directory name.

        mkdir lke-cluster
        
      3. Using the text editor of your choice, create your cluster’s main configuration file named main.tf which will store your resource definitions. Add the following contents to the file.

        ~/terraform/lke-cluster/main.tf
        
        //Use the Linode Provider
        provider "linode" {
          token = var.token
        }
        
        //Use the linode_lke_cluster resource to create
        //a Kubernetes cluster
        resource "linode_lke_cluster" "foobar" {
          k8s_version = var.k8s_version
          label       = var.label
          region      = var.region
          tags        = var.tags
        
          dynamic "pool" {
            for_each = var.pools
            content {
              type  = pool.value["type"]
              count = pool.value["count"]
            }
          }
        }
        
        //Export this cluster's attributes
        output "kubeconfig" {
          value = linode_lke_cluster.foobar.kubeconfig
        }
        
        output "api_endpoints" {
          value = linode_lke_cluster.foobar.api_endpoints
        }
        
        output "status" {
          value = linode_lke_cluster.foobar.status
        }
        
        output "id" {
          value = linode_lke_cluster.foobar.id
        }
        
        output "pool" {
          value = linode_lke_cluster.foobar.pool
        }

        This file contains your cluster’s main configuration arguments and output variables. In this example, you make use of Terraform’s input variables so that your main.tf configuration can be easily reused across different clusters.

        Variables and their values will be created in separate files later on in this guide. Using separate files for variable declaration allows you to avoid hard-coding values into your resources. This strategy can help you reuse, share, and version control your Terraform configurations.

        This configuration file uses the Linode provider to create a Kubernetes cluster. All arguments within the linode_lke_cluster.foobar resource are required, except for tags. The pool argument accepts a list of pool objects. In order to read their input variable values, the configuration file makes use of Terraform’s dynamic blocks. Finally, output values are declared in order to capture your cluster’s attribute values that will be returned to Terraform after creating your cluster.

        Note

        For a complete linode_lke_cluster resource argument reference, see the Linode Provider Terraform documentation. You can update the main.tf file to include any additional arguments you would like to use.

      Define your Input Variables

      You are now ready to define the input variables that were referenced in your main.tf file.

      1. Create a new file named variables.tf in the same directory as your main.tf file. Add the following contents to the file:

        ~/terraform/lke-cluster/variables.tf
        
            variable "token" {
              description = "Your Linode API Personal Access Token.(required)."
            }
        
            variable "k8s_version" {
              description = "The Kubernetes version to use for this cluster.(required)"
              default = "1.17"
            }
        
            variable "label" {
              description = "The unique label to assign to this cluster.(required)"
              default = "default-lke-cluster"
            }
        
            variable "region" {
              description = "The region where your cluster will be located.(required)"
              default = "us-east"
            }
        
            variable "tags" {
              description = "Tags to apply to your cluster for organizational purposes.(optional)"
              type = list(string)
              default = ["testing"]
            }
        
            variable "pools" {
              description = "The Node Pool specifications for the Kubernetes cluster.(required)"
              type = list(object({
                type = string
                count = number
              }))
              default = [
                {
                  type = "g6-standard-4"
                  count = 3
                },
                {
                  type = "g6-standard-8"
                  count = 3
                }
              ]
            }
            

        This file describes each variable and provides them with default values. You can update the file with your own preferred default values.

      Assign Values to your Input Variables

      You will now need to define the values you would like to use in order to create your Kubernetes cluster. These values are stored in a separate file named terraform.tfvars. This file should be the only file that requires updating when reusing the files created in this guide to deploy a new Kubernetes cluster or to add a new node pool to the cluster.

      1. Create a new file named terraform.tfvars to provide values for all the input variables declared in the previous section.

        Note

        If you leave out a variable value in this file, Terraform will use the variable’s default value that you provided in your variables.tf file.

        ~/terraform/lke-cluster/terraform.tfvars
        
        label = "example-lke-cluster"
        k8s_version = "1.17"
        region = "us-west"
        pools = [
          {
            type : "g6-standard-2"
            count : 3
          }
        ]
              

        Terraform will use the values in this file to create a new Kubernetes cluster with one node pool that contains three 4 GB nodes. The cluster will be located in the us-west data center (Fremont, California, USA). Each node in the cluster’s node pool will use Kubernetes version 1.17 and the cluster will be named example-lke-cluster. You can replace any of the values in this file with your own preferred cluster configurations.
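
        Any of these variables can also be overridden for a single run without editing terraform.tfvars. For example, a hypothetical alternate label could be supplied directly on the command line:

        terraform plan -var 'label=other-lke-cluster'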

      Deploy your Kubernetes Cluster

      Now that all your Terraform configuration files are ready, you can deploy your Kubernetes cluster.

      1. Ensure that you are in your lke-cluster project directory which should contain all of your Terraform configuration files. If you followed the naming conventions used in this guide, your project directory will be ~/terraform/lke-cluster.

        cd ~/terraform/lke-cluster
        
      2. Install the Linode Provider to your Terraform project directory. Whenever a new provider is used in a Terraform configuration, it must be initialized before you can create resources with it.

        terraform init
        

        You will see a message that confirms that the Linode provider plugins have been successfully initialized.

      3. Export your API token to an environment variable. Terraform environment variables have the prefix TF_VAR_ and are supplied at the command line. This method is preferable to storing your token in a plain text file. Replace the example’s token value with your own.

        export TF_VAR_token=70a1416a9.....d182041e1c6bd2c40eebd
        

        Caution

        This method commits the environment variable to your shell’s history, so take care when using this method.
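
        If you would rather keep the token out of your shell’s history entirely, one alternative sketch (assuming a bash shell) is to read the token interactively before exporting it:

        read -sp "Linode API token: " TF_VAR_token && export TF_VAR_token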

      4. View your Terraform’s execution plan before deploying your infrastructure. This command won’t take any actions or make any changes on your Linode account. It will provide a report displaying all the resources that will be created or modified when the plan is executed.

        terraform plan -var-file="terraform.tfvars"
        
      5. Apply your Terraform configurations to deploy your Kubernetes cluster.

        terraform apply -var-file="terraform.tfvars"
        

        Terraform will begin to create the resources you’ve defined throughout this guide. This process will take several minutes to complete. Once the cluster has been successfully created, the output will include a success message and the values that you exposed as output when creating your main.tf file (the example output has been truncated for brevity).

          
        Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
        
        Outputs:
        
        api_endpoints = [
          "https://91132f3d-fd20-4a70-a171-06ddec5d9c4d.us-west-2.linodelke.net:443",
          "https://91132f3d-fd20-4a70-a171-06ddec5d9c4d.us-west-2.linodelke.net:6443",
          "https://192.0.2.0:443",
          "https://192.0.2.0:6443",
        ]
        ...
                  
        

      Connect to your LKE Cluster

      Now that your Kubernetes cluster is deployed, you can use kubectl to connect to it and begin defining your workload. In this section, you will access your cluster’s kubeconfig and use it to connect to your cluster with kubectl.

      1. Use Terraform to access your cluster’s kubeconfig, decode its contents, and save them to a file. Terraform returns a base64 encoded string (a useful format for automated pipelines) representing your kubeconfig. Replace lke-cluster-config.yaml with your preferred file name.

        export KUBE_VAR=`terraform output kubeconfig` && echo $KUBE_VAR | base64 -D > lke-cluster-config.yaml
        

        Note

        Depending on your local operating system, to decode the kubeconfig’s base64 format, you may need to replace base64 -D with base64 -d. For example, this update is needed on an Ubuntu 18.04 system.
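
        If neither flag works in your environment, one portable alternative (assuming python3 is on your PATH) is to decode with Python’s base64 module:

        terraform output kubeconfig | python3 -m base64 -d > lke-cluster-config.yaml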

      2. Add the kubeconfig file to your $KUBECONFIG environment variable. This will give kubectl access to your cluster’s kubeconfig file.

        export KUBECONFIG=lke-cluster-config.yaml
        
      3. Verify that your cluster is selected as kubectl’s current context:

        kubectl config get-contexts
        
      4. View all nodes in your Kubernetes cluster using kubectl:

        kubectl get nodes
        

        Your output will resemble the following example, but will vary depending on your own cluster’s configurations.

          
        NAME                        STATUS   ROLES    AGE   VERSION
        lke4377-5673-5eb331ac7f89   Ready    <none>   17h   v1.17.0
        lke4377-5673-5eb331acab1d   Ready    <none>   17h   v1.17.0
        lke4377-5673-5eb331acd6c2   Ready    <none>   17h   v1.17.0
            
        

        Now that you are connected to your LKE cluster, you can begin using kubectl to deploy applications, inspect and manage cluster resources, and view logs.
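
        For example, a few read-only commands are a safe way to start exploring the cluster:

        kubectl cluster-info
        kubectl get pods --all-namespaces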

      Destroy your Kubernetes Cluster (optional)

      Terraform includes a destroy command to remove resources managed by Terraform.

      1. Run the plan command with the -destroy option to verify which resources will be destroyed.

        terraform plan -destroy
        

        Follow the prompt to enter your Linode API v4 access token and review the report to ensure the resources you expect to be destroyed are listed.

      2. Destroy the resources outlined in the above command.

        terraform destroy
        

        Follow the prompt to enter your Linode API v4 access token and type in yes when prompted to destroy your Kubernetes cluster.

      More Information

      You may wish to consult the following resources for additional information on this topic. While these are provided in the hope that they will be useful, please note that we cannot vouch for the accuracy or timeliness of externally hosted materials.

      This guide is published under a CC BY-ND 4.0 license.




      How to Set Up a Private Docker Registry with Linode Kubernetes Engine and Object Storage


      Updated by Leslie Salazar. Contributed by Leslie Salazar.


      Hosting a private Docker registry alongside your Kubernetes cluster allows you to securely manage your Docker images while also providing quick deployment of your apps. This guide will walk you through the steps needed to deploy a private Docker registry on a Linode Kubernetes Engine (LKE) cluster. At the end of this tutorial, you will be able to locally push and pull Docker images to your registry. Similarly, your LKE cluster’s pods will also be able to pull Docker images from the registry to complete their deployments.

      Before you Begin

      1. Deploy an LKE cluster. This example was written using a node pool with two 2 GB nodes. Depending on the workloads you will be deploying on your cluster, you may consider using nodes with higher resources.

      2. Install Helm 3, kubectl, and Docker to your local environment.

      3. Ensure Object Storage is enabled on your Linode account, generate an Object Storage key pair, and save it in a secure location. You will need the key pair for a later section in this guide. Finally, create an Object Storage bucket to store your registry’s images. Throughout this guide, the example bucket name will be registry.

      4. Purchase a domain name from a reliable domain registrar. Using Linode’s DNS Manager, create a new Domain and add a DNS “A” record for a subdomain named registry. Your subdomain will host your Docker registry. This guide will use registry.example.com as the example domain.

        Note

        Optionally, you can create a Wildcard DNS record, *.example.com. In a later section, you will point your DNS A record to a Linode NodeBalancer’s external IP address. Using a Wildcard DNS record will allow you to expose your Kubernetes services without requiring further configuration using the Linode DNS Manager.

      In this Guide

      In this guide you will:

      • Install the NGINX Ingress Controller, which creates a Linode NodeBalancer for your cluster.
      • Enable HTTPS on your registry’s subdomain using cert-manager and Let’s Encrypt.
      • Deploy a private Docker registry backed by Linode Object Storage.
      • Push an image to your registry and use it in a test deployment.

      Install the NGINX Ingress Controller

      An Ingress is used to provide external routes, via HTTP or HTTPS, to your cluster’s services. An Ingress Controller, like the NGINX Ingress Controller, fulfills the requirements presented by the Ingress using a load balancer.

      In this section, you will install the NGINX Ingress Controller using Helm, which will create a Linode NodeBalancer to handle your cluster’s traffic.

      1. Add the Google stable Helm charts repository to your Helm repos:

        helm repo add stable https://kubernetes-charts.storage.googleapis.com/
        
      2. Update your Helm repositories:

        helm repo update
        
      3. Install the NGINX Ingress Controller. This installation will result in a Linode NodeBalancer being created.

        helm install nginx-ingress stable/nginx-ingress
        

        You will see a similar output after issuing the above command (the output has been truncated for brevity):

          
        NAME: nginx-ingress
        LAST DEPLOYED: Wed Apr  8 09:55:47 2020
        NAMESPACE: default
        STATUS: deployed
        REVISION: 1
        TEST SUITE: None
        NOTES:
        The nginx-ingress controller has been installed.
        It may take a few minutes for the LoadBalancer IP to be available.
        You can watch the status by running 'kubectl --namespace default get services -o wide -w nginx-ingress-controller'
        ...
            
        

        In the next section, you will use your Linode NodeBalancer’s external IP address to update your registry’s domain record.

      Update your Subdomain’s IP Address

      1. Access your NodeBalancer’s assigned external IP address.

        kubectl --namespace default get services -o wide -w nginx-ingress-controller
        

        The command will return a similar output:

          
        NAME                       TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE     SELECTOR
        nginx-ingress-controller   LoadBalancer   10.128.169.60   192.0.2.0     80:32401/TCP,443:30830/TCP   7h51m   app.kubernetes.io/component=controller,app=nginx-ingress,release=nginx-ingress
            
        
      2. Copy the IP address in the EXTERNAL-IP field, navigate to Linode’s DNS manager, and update your domain’s registry A record with the external IP address. Ensure that the entry’s TTL field is set to 5 minutes.

      Now that your NGINX Ingress Controller has been deployed and your subdomain’s A record has been updated, you are ready to enable HTTPS on your Docker registry.

      Enable HTTPS

      Note

      Before performing the commands in this section, ensure that your DNS has had time to propagate across the internet. This process can take several hours. You can query the status of your DNS by using the following command, replacing registry.example.com with your own subdomain and domain.

      dig +short registry.example.com
      

      If successful, the output should return the IP address of your NodeBalancer.

      To enable HTTPS on your Docker registry, you will create a Transport Layer Security (TLS) certificate from the Let’s Encrypt certificate authority (CA) using the ACME protocol. This will be facilitated by cert-manager, the native Kubernetes certificate management controller.

      In this section you will install cert-manager using Helm and the required cert-manager CustomResourceDefinitions (CRDs). Then, you will create a ClusterIssuer and Certificate resource to create your cluster’s TLS certificate.

      Install cert-manager

      1. Install cert-manager’s CRDs.

        kubectl apply --validate=false -f https://github.com/jetstack/cert-manager/releases/download/v0.14.1/cert-manager.crds.yaml
        
      2. Create a cert-manager namespace.

        kubectl create namespace cert-manager
        
      3. Add the Helm repository which contains the cert-manager Helm chart.

        helm repo add jetstack https://charts.jetstack.io
        
      4. Update your Helm repositories.

        helm repo update
        
      5. Install the cert-manager Helm chart. These basic configurations should be sufficient for many use cases; however, additional cert-manager configurable parameters can be found in cert-manager’s official documentation.

        helm install cert-manager jetstack/cert-manager \
          --namespace cert-manager \
          --version v0.14.1
        
      6. Verify that the corresponding cert-manager pods are now running.

        kubectl get pods --namespace cert-manager
        

        You should see a similar output:

          
        NAME                                       READY   STATUS    RESTARTS   AGE
        cert-manager-579d48dff8-84nw9              1/1     Running   3          1m
        cert-manager-cainjector-789955d9b7-jfskr   1/1     Running   3          1m
        cert-manager-webhook-64869c4997-hnx6n      1/1     Running   0          1m
            
        

      Create a ClusterIssuer Resource

      Now that cert-manager is installed and running on your cluster, you will need to create a ClusterIssuer resource which defines which CA can create signed certificates when a certificate request is received. A ClusterIssuer is not a namespaced resource, so it can be used by more than one namespace.

      1. Create a directory named registry to store all of your Docker registry’s related manifest files and move into the new directory.

        mkdir ~/registry && cd ~/registry
        
      2. Using the text editor of your choice, create a file named acme-issuer-prod.yaml with the example configurations. Replace the value of email with your own email address.

        ~/registry/acme-issuer-prod.yaml
        
        apiVersion: cert-manager.io/v1alpha2
        kind: ClusterIssuer
        metadata:
          name: letsencrypt-prod
        spec:
          acme:
            email: user@example.com
            server: https://acme-v02.api.letsencrypt.org/directory
            privateKeySecretRef:
              name: letsencrypt-secret-prod
            solvers:
            - http01:
                ingress:
                  class: nginx
            
        • This manifest file creates a ClusterIssuer resource that will register an account on an ACME server. The value of spec.acme.server designates Let’s Encrypt’s production ACME server, which should be trusted by most browsers.

          Note

          Let’s Encrypt provides a staging ACME server that can be used to test issuing trusted certificates, while not worrying about hitting Let’s Encrypt’s production rate limits. The staging URL is https://acme-staging-v02.api.letsencrypt.org/directory.
        • The value of privateKeySecretRef.name provides the name of a secret containing the private key for this user’s ACME server account (this is tied to the email address you provide in the manifest file). The ACME server will use this key to identify you.

        • To ensure that you own the domain for which you will create a certificate, the ACME server will issue a challenge to a client. cert-manager provides two options for solving challenges, http01 and DNS01. In this example, the http01 challenge solver will be used and it is configured in the solvers array. cert-manager will spin up challenge solver Pods to solve the issued challenges and use Ingress resources to route the challenge to the appropriate Pod.

      3. Create the ClusterIssuer resource:

        kubectl create -f acme-issuer-prod.yaml
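
        Optionally, you can confirm that the resource was registered; since cert-manager installs a clusterissuer resource type, a standard describe works:

        kubectl describe clusterissuer letsencrypt-prod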
        

      Create a Certificate Resource

      After you have a ClusterIssuer resource, you can create a Certificate resource. This will describe your x509 public key certificate and will be used to automatically generate a CertificateRequest which will be sent to your ClusterIssuer.

      1. Using the text editor of your choice, create a file named certificate-prod.yaml with the example configurations. Replace the value of spec.dnsNames with the domain you will use to host your Docker registry.

        ~/registry/certificate-prod.yaml
        
        apiVersion: cert-manager.io/v1alpha2
        kind: Certificate
        metadata:
          name: docker-registry-prod
        spec:
          secretName: letsencrypt-secret-prod
          duration: 2160h # 90d
          renewBefore: 360h # 15d
          issuerRef:
            name: letsencrypt-prod
            kind: ClusterIssuer
          dnsNames:
          - registry.example.com
            

        Note

        The configurations in this example create a Certificate that is valid for 90 days and renews 15 days before expiry.

      2. Create the Certificate resource:

        kubectl create -f certificate-prod.yaml
        
      3. Verify that the Certificate has been successfully issued:

        kubectl get certs
        

        When your certificate is ready, you should see a similar output:

          
        NAME                   READY   SECRET                    AGE
        docker-registry-prod   True    letsencrypt-secret-prod   42s
            
        

        All the necessary components are now in place to enable HTTPS on your Docker registry. In the next section, you will complete the steps needed to deploy your registry.

      Deploy your Docker Registry

      You will now complete the steps to deploy your Docker Registry to your Kubernetes cluster using a Helm chart. Prior to deploying your Docker registry, you will first need to create a username and password in order to enable basic authentication for your registry. This will allow you to restrict access to your Docker registry, keeping your images private. Since your registry will require authentication, you will also add a Kubernetes secret to your cluster to provide it with your registry’s authentication credentials, so that it can pull images from the registry.

      Enable Basic Authentication

      To enable basic access restriction for your Docker registry, you will use the htpasswd utility. This utility allows you to use a file to store usernames and passwords for basic HTTP authentication. You will then need to log into your Docker registry prior to being able to push or pull images to and from it.

      1. Install the htpasswd utility. This example is for an Ubuntu 18.04 instance, but you can use your system’s package manager to install it.

        sudo apt install apache2-utils -y
        
      2. Create a file to store your Docker registry’s username and password.

        touch my_docker_pass
        
      3. Create a username and password using htpasswd. Replace example_user with your own username. Follow the prompt to create a password.

        htpasswd -B my_docker_pass example_user
        
      4. View the contents of your password file.

        cat my_docker_pass
        

        Your output will resemble the following. You will need these values when deploying your registry in the Configure your Docker Registry section of the guide.

          
        example_user:$2y$05$8VhvzCVCB4txq8mNGh8eu.8GMyBEEeUInqQJHKJUD.KUwxastPG4m
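
        Optionally, recent versions of htpasswd can verify a stored password with the -v flag; follow the prompt to re-enter the password:

        htpasswd -v my_docker_pass example_user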
          
        

      Grant your Cluster Access to your Docker Registry

      Your LKE cluster will also need to authenticate to your Docker registry in order to pull images from it. In this section, you will create a Kubernetes Secret that grants your cluster’s kubelet access to your registry’s images.

      1. Create a secret to store your registry’s authentication information. Replace the option values with your own registry’s details. The --docker-username and --docker-password should be the username and password that you used when generating credentials using the htpasswd utility.

        kubectl create secret docker-registry regcred \
          --docker-server=registry.example.com \
          --docker-username=example_user \
          --docker-password=3xampl3Passw0rd \
          --docker-email=user@example.com
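
        You can optionally inspect the new secret to confirm it was stored; the credentials will appear base64 encoded:

        kubectl get secret regcred --output=yaml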
        

      Configure your Docker Registry

      Before deploying the Docker Registry Helm chart to your cluster, you will define some configurations so that the Docker registry uses the NGINX Ingress controller, your registry Object Storage bucket, and your cert-manager created TLS certificate. See the Docker Registry Helm Chart’s official documentation for a full list of all available configurations.

      1. Create a new file named docker-configs.yaml using the example configurations. Ensure you replace the following values in your file:

        • ingress.hosts with your own Docker registry’s domain
        • ingress.tls.secretName with the secret name you used when creating your ClusterIssuer
        • ingress.tls.hosts with the domain you wish to secure with your TLS certificate.
        • secrets.s3.accessKey with the value of your Object Storage account’s access key and secrets.s3.secretKey with the corresponding secret key.
        • secrets.htpasswd with the value returned when you view the contents of your my_docker_pass file. However, ensure you do not remove the |- characters. This ensures that your YAML is properly formatted. See step 4 in the Enable Basic Authentication section for details on viewing the contents of your password file.
        • s3.region with your Object Storage bucket’s cluster region, s3.regionEndpoint with your Object Storage bucket’s region endpoint, and s3.bucket with your registry’s Object Storage bucket name.
        ~/registry/docker-configs.yaml
        
        ingress:
          enabled: true
          hosts:
            - registry.example.com
          annotations:
            kubernetes.io/ingress.class: nginx
            cert-manager.io/cluster-issuer: letsencrypt-prod
            nginx.ingress.kubernetes.io/proxy-body-size: "0"
            nginx.ingress.kubernetes.io/proxy-read-timeout: "6000"
            nginx.ingress.kubernetes.io/proxy-send-timeout: "6000"
          tls:
            - secretName: letsencrypt-secret-prod
              hosts:
              - registry.example.com
        storage: s3
        secrets:
          htpasswd: |-
            example_user:$2y$05$8VhvzCVCB4txq8mNGh8eu.8GMyBEEeUInqQJHKJUD.KUwxastPG4m
          s3:
            accessKey: "myaccesskey"
            secretKey: "mysecretkey"
        s3:
          region: us-east-1
          regionEndpoint: us-east-1.linodeobjects.com/
          secure: true
          bucket: registry
              
        • The NGINX Ingress annotation nginx.ingress.kubernetes.io/proxy-body-size: "0" disables a maximum allowed size client request body check and ensures that you won’t receive a 413 error when pushing larger Docker images to your registry. The values for nginx.ingress.kubernetes.io/proxy-read-timeout: "6000" and nginx.ingress.kubernetes.io/proxy-send-timeout: "6000" are sane values to begin with, but may be adjusted as needed.
      2. Deploy your Docker registry using the configurations you created in the previous step:

        helm install docker-registry stable/docker-registry -f docker-configs.yaml
        
      3. Navigate to your registry’s domain and verify that your browser loads the TLS certificate.

        Verify that your Docker registry's site loads your TLS certificate

        You will interact with your registry via the Docker CLI, so you should not expect to see any content load on the page.
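
        For a quick command-line check, the registry’s V2 API exposes a catalog endpoint. This example assumes the username you created earlier and will prompt for its password:

        curl -u example_user https://registry.example.com/v2/_catalog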

      Push an Image to your Docker Registry

      You are now ready to push and pull images to your Docker registry. In this section you will pull an existing image from Docker Hub and then push it to your registry. Then, in the next section, you will use your registry’s image to deploy an example static site.

      1. Use Docker to pull an image from Docker Hub. This example is using an image that was created following our Create and Deploy a Docker Container Image to a Kubernetes Cluster guide. The image will build a Hugo static site with some boilerplate content. However, you can use any image from Docker Hub that you prefer.

        sudo docker pull leslitagordita/hugo-site:v10
        
      2. Tag your local Docker image with your private registry’s hostname. This is required when pushing an image to a private registry and not the central Docker registry. Ensure that you replace registry.example.com with your own registry’s domain.

        sudo docker tag leslitagordita/hugo-site:v10 registry.example.com/leslitagordita/hugo-site:v10
        
      3. At this point, you have not yet authenticated to your private registry. You will need to log into it prior to pushing up any images. Issue the example command, replacing registry.example.com with your own registry’s URL. Follow the prompts to enter the username and password you created in the Enable Basic Authentication section.

        sudo docker login registry.example.com
        
      4. Push the image to your registry. Ensure that you replace registry.example.com with your own registry’s domain.

        sudo docker push registry.example.com/leslitagordita/hugo-site:v10
        

        You should see a similar output when your image push is complete

          
        The push refers to repository [registry.example.com/leslitagordita/hugo-site]
        925cbd794bd8: Pushed
        b9fee92b7ac7: Pushed
        1658c062e6a8: Pushed
        21acf2dde3fe: Pushed
        588c407f9029: Pushed
        bcf2f368fe23: Pushed
        v10: digest: sha256:3db7ab6bc5a893375af6f7cf505bac2f4957d8a03701d7fd56853712b0900312 size: 1570
            
        

      Create a Test Deployment Using an Image from Your Docker Registry

      In this section, you will create a test deployment using the image that you pushed to your registry in the previous section. This will ensure that your cluster can authenticate to your Docker registry and pull images from it.

      1. Use Linode’s DNS manager to create a new subdomain A record to host your static site. The example will use static.example.com. When creating your record, assign your cluster’s NodeBalancer external IP address as the IP address. You can find the external IP address with the following command:

        kubectl --namespace default get services -o wide -w nginx-ingress-controller
        

        The command will return a similar output. Use the value of the EXTERNAL-IP field to create your static site’s new subdomain A record.

          
        NAME                          TYPE           CLUSTER-IP      EXTERNAL-IP    PORT(S)                      AGE     SELECTOR
        nginx-ingress-controller   LoadBalancer   10.128.169.60   192.0.2.0   80:32401/TCP,443:30830/TCP   7h51m   app.kubernetes.io/component=controller,app=nginx-ingress,release=nginx-ingress
            
        
      2. Using a text editor, create the static-site-test.yaml file with the example configurations. This file will create a deployment, service, and an ingress.

        ~/registry/static-site-test.yaml
        
        apiVersion: extensions/v1beta1
        kind: Ingress
        metadata:
          name: static-site-ingress
          annotations:
            kubernetes.io/ingress.class: nginx
            nginx.ingress.kubernetes.io/backend-protocol: "HTTP"
        spec:
          rules:
          - host: static.example.com
            http:
              paths:
              - path: /
                backend:
                  serviceName: static-site
                  servicePort: 80
        ---
        apiVersion: v1
        kind: Service
        metadata:
          name: static-site
        spec:
          type: NodePort
          ports:
          - port: 80
            targetPort: 80
          selector:
            app: static-site
        ---
        apiVersion: apps/v1
        kind: Deployment
        metadata:
          name: static-site
        spec:
          replicas: 3
          selector:
            matchLabels:
              app: static-site
          template:
            metadata:
              labels:
                app: static-site
            spec:
              containers:
              - name: static-site
                image: registry.example.com/leslitagordita/hugo-site:v10
                ports:
                - containerPort: 80
              imagePullSecrets:
              - name: regcred
              
        • In the Deployment section of the manifest, the imagePullSecrets field references the secret you created in the Grant your Cluster Access to your Docker Registry section. This secret contains the authentication credentials that your cluster’s kubelet can use to pull your private registry’s image.
        • The image field provides the image to pull from your Docker registry.
      3. Create the deployment.

        kubectl create -f static-site-test.yaml
        
      4. Open a browser and navigate to your site’s domain and view the example static site. Using our example, you would navigate to static.example.com. The example Hugo site should load.
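
        If the site does not load, you can check the deployment’s progress and confirm that its pods were able to pull your registry’s image:

        kubectl rollout status deployment/static-site
        kubectl get pods -l app=static-site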

      (Optional) Tear Down your Kubernetes Cluster

      To avoid being further billed for your Kubernetes cluster and NodeBalancer, delete your cluster using the Linode Cloud Manager. Similarly, to avoid being further billed for your registry’s Object Storage bucket, follow the steps in the cancel the Object Storage service on your account section of our How to Use Object Storage guide.

      This guide is published under a CC BY-ND 4.0 license.




      Deploy and Manage a Cluster with Linode Kubernetes Engine – A Tutorial


      Updated by Linode. Contributed by Linode.

      Note

      Linode Kubernetes Engine (LKE) is currently in Private Beta, and you may not have access to LKE through the Cloud Manager or other tools. To request access to the Private Beta, sign up here. Beta access awards you $100/month in free credits for the duration of the beta, which is automatically applied to your account when an LKE cluster is in use. Additionally, you will have access to the Linode Green Light community, a new program connecting beta users with our product and engineering teams.

      Additionally, because LKE is in Beta, there may be breaking changes to how you access and manage LKE. This guide will be updated to reflect these changes if and when they occur.

      What is the Linode Kubernetes Engine (LKE)?

      The Linode Kubernetes Engine (LKE) is a fully-managed container orchestration engine for deploying and managing containerized applications and workloads. LKE combines Linode’s ease of use and simple pricing with the infrastructure efficiency of Kubernetes. When you deploy an LKE cluster, you receive a Kubernetes Master at no additional cost; you only pay for the Linodes (worker nodes), NodeBalancers (load balancers), and Block Storage Volumes. Your LKE cluster’s Master node runs the Kubernetes control plane processes – including the API, scheduler, and resource controllers.

      Additional LKE features

      • etcd Backups: A snapshot of your cluster’s metadata is backed up continuously, so your cluster is automatically restored in the event of a failure.
      • High Availability: All of your control plane components are monitored and will automatically recover if they fail.

      In this Guide

      In this guide you will learn:

      • How to create an LKE cluster using the Linode Cloud Manager.
      • How to connect to your cluster with kubectl.
      • How to modify a cluster’s node pools.
      • How to delete an LKE cluster.

      Caution

      This guide’s example instructions will create several billable resources on your Linode account. If you do not want to keep using the example cluster that you create, be sure to remove it when you have finished the guide.

      If you remove the resources afterward, you will only be billed for the hour(s) that the resources were present on your account.

      Before You Begin

      Enable Network Helper

      In order to use the Linode Kubernetes Engine, you will need to have Network Helper enabled globally on your account. Network Helper is a Linode-provided service that automatically sets a static network configuration for your Linode when it boots. To enable this global account setting, follow these instructions.

      If you don’t want to use Network Helper on some Linodes that are not part of your LKE clusters, the service can also be disabled on a per-Linode basis; see instructions here.

      Note

      If you have already deployed an LKE cluster and did not enable Network Helper, you can add a new node pool with the same type, size, and count as your initial node pool. Once your new node pool is ready, you can then delete the original node pool.

      Install kubectl

      You will need to install the kubectl client to your computer before proceeding. Follow the steps corresponding to your computer’s operating system.

      macOS:

      Install via Homebrew:

      brew install kubernetes-cli
      

      If you don’t have Homebrew installed, visit the Homebrew home page for instructions. Alternatively, you can manually install the binary; visit the Kubernetes documentation for instructions.

      Linux:

      1. Download the latest kubectl release:

        curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
        
      2. Make the downloaded file executable:

        chmod +x ./kubectl
        
      3. Move the command into your PATH:

        sudo mv ./kubectl /usr/local/bin/kubectl
        

      Windows:

      Visit the Kubernetes documentation for a link to the most recent Windows release.

      Create an LKE Cluster

      1. Log into your Linode Cloud Manager account.

        Note

        LKE is not available in the Linode Classic Manager

      2. From the Linode dashboard, click the Create button in the top left-hand side of the screen and select Kubernetes from the dropdown menu.

        Create a Kubernetes Cluster Screen

      3. The Create a Kubernetes Cluster page will appear. Select the region where you would like your cluster to reside.

        Select your cluster's region

      4. In the Add Node Pools section, select the hardware resources for the Linode worker node(s) that make up your LKE cluster. If you decide that you need more or fewer hardware resources after you deploy your cluster, you can always edit your Node Pool.

        Select your cluster's resources

      5. Under Number of Linodes, input the number of Linode worker nodes you would like to add to your Node Pool. These worker nodes will have the hardware resources selected from the Add Node Pools section.

        Select the number of Linode worker nodes

      6. Click on the Add Node Pool button to add the pool to your cluster’s configuration. You will see a Cluster Summary appear on the right-hand side of the Cloud Manager detailing your cluster’s hardware resources and monthly cost.

        A list of pools also appears below the Add Node Pool button with quick edit Node Count fields. You can easily change the number of nodes by typing a new number in the field, or use the up and down arrows to increment or decrement the number in the field. Each row in this table also has a Remove link if you want to remove the node pool.

        Add a node pool to your Kubernetes cluster

      7. In the Cluster Label field, provide a name for your cluster. The name must be unique between all of the clusters on your account. This name will be how you identify your cluster in the Cloud Manager’s Dashboard.

        Provide a name for your cluster

      8. From the Version dropdown menu, select a Kubernetes version to deploy to your cluster.

        Select a Kubernetes version

      9. When you are satisfied with the configuration of your cluster, click the Create button on the right hand side of the screen. Your cluster’s detail page will appear where you will see your Node Pools listed. From this page, you can edit your existing Node Pools, add new Node Pools to your cluster, access your Kubeconfig file, and view an overview of your cluster’s resource details.

      Connect to your LKE Cluster with kubectl

      After you’ve created your LKE cluster using the Cloud Manager, you can begin interacting with and managing your cluster. You connect to it using the kubectl client on your computer. To configure kubectl, you’ll download your cluster’s kubeconfig file.

      Access and Download your kubeconfig

      Anytime after your cluster is created you can download its kubeconfig. The kubeconfig is a YAML file that will allow you to use kubectl to communicate with your cluster. Here is an example kubeconfig file:

      example-cluster-kubeconfig.yaml
      
      apiVersion: v1
      clusters:
      - cluster:
          certificate-authority-data: LS0tLS1CRUd...
          server: https://192.0.2.0:6443
        name: kubernetes
      contexts:
      - context:
          cluster: kubernetes
          user: kubernetes-admin
        name: kubernetes-admin@kubernetes
      current-context: kubernetes-admin@kubernetes
      kind: Config
      preferences: {}
      users:
      - name: kubernetes-admin
        user:
          client-certificate-data: LS0tLS1CRUd...
          client-key-data: LS0tLS1CRUd...

      This configuration file defines your cluster, users, and contexts.

      1. To access your cluster’s kubeconfig, log into your Cloud Manager account and navigate to the Kubernetes section.

      2. From the Kubernetes listing page, click on your cluster’s more options ellipsis and select Download kubeconfig. The file will be saved to your computer’s Downloads folder.

        Download your cluster's kubeconfig

        Download and view your Kubeconfig from the cluster’s details page

        You can also download the kubeconfig from the Kubernetes cluster’s details page.

        1. When viewing the Kubernetes listing page, click on the cluster for which you’d like to download a kubeconfig file.

        2. On the cluster’s details page, under the kubeconfig section, click the Download button. The file will be saved to your Downloads folder.

          Kubernetes Cluster Download kubeconfig from Details Page

        3. To view the contents of your kubeconfig file, click on the View button. A pane will appear with the contents of your cluster’s kubeconfig file.

          View the contents of your kubeconfig file

      3. Open a terminal shell and save your kubeconfig file’s path to the $KUBECONFIG environment variable. In the example command, the kubeconfig file is located in the Downloads folder, but you should alter this line with this folder’s location on your computer:

        export KUBECONFIG=~/Downloads/kubeconfig.yaml
        

        Note

        It is common practice to store your kubeconfig files in the ~/.kube directory. By default, kubectl will search for a kubeconfig file named config that is located in the ~/.kube directory. You can specify other kubeconfig files by setting the $KUBECONFIG environment variable, as done in the step above.

      4. View your cluster’s nodes using kubectl.

        kubectl get nodes
        

        Note

        If your kubectl commands are not returning the resources and information you expect, then your client may be assigned to the wrong cluster context. Visit our Troubleshooting Kubernetes guide to learn how to switch cluster contexts.

        You are now ready to manage your cluster using kubectl. For more information about using kubectl, see Kubernetes’ Overview of kubectl guide.

      Persist the Kubeconfig Context

      If you create a new terminal window, it will not have access to the context that you specified using the previous instructions. This context information can be made persistent between new terminals by setting the KUBECONFIG environment variable in your shell’s configuration file.

      Note

      These instructions will persist the context for users of the Bash terminal. They will be similar for users of other terminals:

      1. Navigate to the $HOME/.kube directory:

        cd $HOME/.kube
        
      2. Create a directory called configs within $HOME/.kube. You can use this directory to store your kubeconfig files.

        mkdir configs
        
      3. Copy your kubeconfig.yaml file to the $HOME/.kube/configs directory.

        cp ~/Downloads/kubeconfig.yaml $HOME/.kube/configs/kubeconfig.yaml
        

        Note

        Alter the above line with the location of the Downloads folder on your computer.

        Optionally, you can give the copied file a different name to help distinguish it from other files in the configs directory.

      4. Open up your Bash profile (e.g. ~/.bash_profile) in the text editor of your choice and add your configuration file to the $KUBECONFIG PATH variable.

        If an export KUBECONFIG line is already present in the file, append to the end of this line as follows; if it is not present, add this line to the end of your file:

        export KUBECONFIG=$KUBECONFIG:$HOME/.kube/config:$HOME/.kube/configs/kubeconfig.yaml
        
      5. Close your terminal window and open a new window to receive the changes to the $KUBECONFIG variable.
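
        Alternatively, you can load the change into your current session without opening a new window:

        source ~/.bash_profile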

      6. Use the config get-contexts command for kubectl to view the available cluster contexts:

        kubectl config get-contexts
        

        You should see output similar to the following:

          
        CURRENT   NAME                          CLUSTER      AUTHINFO           NAMESPACE
        *         kubernetes-admin@kubernetes   kubernetes   kubernetes-admin
        
        
      7. If your context is not already selected (denoted by an asterisk in the CURRENT column), switch to this context using the config use-context command. Supply the full name of the cluster (including the authorized user and the cluster):

        kubectl config use-context kubernetes-admin@kubernetes
        

        You should see output like the following:

          
        Switched to context "kubernetes-admin@kubernetes".
        
        
      8. You are now ready to interact with your cluster using kubectl. You can test the ability to interact with the cluster by retrieving a list of Pods in the kube-system namespace:

        kubectl get pods -n kube-system
        

      Modify a Cluster’s Node Pools

      You can use the Linode Cloud Manager to modify a cluster’s existing node pools by adding or removing nodes. You can also add or remove entire node pools from your cluster. This section will cover completing those tasks. For any other changes to your LKE cluster, you should use kubectl.

      Access your Cluster’s Details Page

      1. Click the Kubernetes link in the sidebar. The Kubernetes listing page will appear and you will see all your clusters listed.

        Kubernetes cluster listing page

      2. Click the cluster that you wish to modify. The Kubernetes cluster’s details page will appear.

        Kubernetes cluster's details page

      Edit or Remove Existing Node Pools

      1. On your cluster’s details page, click the Resize tab at the top of the page.

        Access your cluster's resize page

      2. Under the cluster’s Resize tab, you can now edit your existing node pool or remove it entirely:

        • The Node Count fields are now editable text boxes.

        • To remove a node pool, click the Remove link to the right.

        • As you make changes you will see an Updated Monthly Estimate; contrast this to the current Monthly Pricing under the Details panel on the right.

          Edit your cluster's node pool

      3. Click the Save button to save your changes; click the Clear Changes button to revert back to the cluster state before you started editing; or click the Cancel button to cancel editing.

      Add Node Pools

      1. On your cluster’s details page, click the Resize tab at the top of the page.

        Access your cluster's resize page

      2. Under the cluster’s Resize tab, navigate to the Add Node Pools panel. Select the type and size of Linode(s) you want to add to your new pool.

        Select a plan size for your new node pool

      3. Under Number of Linodes, input the number of Linode worker nodes you’d like to add to the pool in the text box; you can also use the arrow keys to increment or decrement this number. Click the Add Node Pool button.

        Add a new node pool to your cluster

      4. The new node pool appears in the Node Pools list which you can now edit, if desired.

        Kubernetes Cluster New Node Pool Created

      Delete a Cluster

      You can delete an entire cluster using the Linode Cloud Manager. These changes cannot be reverted once completed.

      1. On your cluster’s details page, click the Resize tab at the top of the page.

        Access your cluster's resize page

      2. Under the cluster’s Resize tab, scroll to the bottom and click on the Delete Cluster button.

        Delete your LKE cluster

      3. A confirmation pop-up will appear. Enter your cluster’s name and click the Delete button to confirm.

        Kubernetes Delete Confirmation Dialog

      4. The Kubernetes listing page will appear and you will no longer see your deleted cluster.

      Next Steps

      Now that you have a running LKE cluster, you can start deploying workloads to it. Refer to our other guides to learn more:

      More Information

      You may wish to consult the following resources for additional information on this topic. While these are provided in the hope that they will be useful, please note that we cannot vouch for the accuracy or timeliness of externally hosted materials.


      This guide is published under a CC BY-ND 4.0 license.


