
      Deploy NodeBalancers with the Linode Cloud Controller Manager


      Updated by Linode Written by Linode Community

      The Linode Cloud Controller Manager (CCM) allows Kubernetes to deploy Linode NodeBalancers whenever a Service of the “LoadBalancer” type is created. This provides the Kubernetes cluster with a reliable way of exposing resources to the public internet. The CCM handles the creation and deletion of the NodeBalancer and correctly identifies the resources, and their networking configuration, that the NodeBalancer will service.

      This guide will explain how to:

      • Create a service with the type “LoadBalancer.”
      • Use annotations to control the functionality of the NodeBalancer.
      • Use the NodeBalancer to terminate TLS encryption.

      Caution

      Using the Linode Cloud Controller Manager to create NodeBalancers will create billable resources on your Linode account. A NodeBalancer costs $10 a month. Be sure to follow the instructions at the end of the guide if you would like to delete these resources from your account.

      Before You Begin

      You should have a working knowledge of Kubernetes and familiarity with the kubectl command line tool before attempting the instructions found in this guide. For more information about Kubernetes, consult our Kubernetes Beginner’s Guide and our Getting Started with Kubernetes guide.

      When using the CCM for the first time, it’s highly suggested that you create a new Kubernetes cluster, as there are a number of issues that prevent the CCM from running on Nodes that are already in the “Ready” state. For a completely automated install, you can use the Linode CLI’s k8s-alpha command line tool, which utilizes Terraform to fully bootstrap a Kubernetes cluster on Linode. It includes the Linode Container Storage Interface (CSI) Driver plugin, the Linode CCM plugin, and the ExternalDNS plugin. For more information on creating a Kubernetes cluster with the Linode CLI, review our How to Deploy Kubernetes on Linode with the k8s-alpha CLI guide.

      Note

      To manually add the Linode CCM to your cluster, you must start kubelet with the --cloud-provider=external flag. kube-apiserver and kube-controller-manager must NOT supply the --cloud-provider flag. For more information, visit the upstream Cloud Controller documentation.
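
      On kubeadm-based clusters, one common way to supply this flag is through the kubelet's KUBELET_EXTRA_ARGS environment file. The snippet below is only a sketch; the file's location varies by distribution and installation method:

      # /etc/default/kubelet (Debian/Ubuntu) or /etc/sysconfig/kubelet (RHEL/CentOS)
      KUBELET_EXTRA_ARGS=--cloud-provider=external

      After editing the file, reload systemd and restart the kubelet (for example, sudo systemctl daemon-reload && sudo systemctl restart kubelet) so the flag takes effect.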

      If you’d like to add the CCM to a cluster by hand, and you are using macOS, you can use the generate-manifest.sh file in the deploy folder of the CCM repository to generate a CCM manifest file that you can later apply to your cluster. Use the following command:

      ./generate-manifest.sh $LINODE_API_TOKEN us-east
      

      Be sure to replace $LINODE_API_TOKEN with a valid Linode API token, and replace us-east with the region of your choosing.

      To view a list of regions, you can use the Linode CLI, or you can view the Regions API endpoint.

      If you are not using macOS, you can copy the ccm-linode-template.yaml file and change the values of the data.apiToken and data.region fields manually.
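
      The token and region values in that template live in a Secret. A minimal sketch of what the edited fields might look like is shown below; the metadata names are assumptions based on the template, and values placed under data must be base64-encoded:

      apiVersion: v1
      kind: Secret
      metadata:
        name: ccm-linode
        namespace: kube-system
      data:
        apiToken: <base64-encoded Linode API token>
        region: <base64-encoded region ID, such as us-east>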

      Using the CCM

      To use the CCM, you must have a collection of Pods that need to be load balanced, usually from a Deployment. For this example, you will create a Deployment that deploys three NGINX Pods, and then create a Service to expose those Pods to the internet using the Linode CCM.

      1. Create a Deployment manifest describing the desired state of the three replica NGINX containers:

        nginx-deployment.yaml
        apiVersion: apps/v1
        kind: Deployment
        metadata:
          name: nginx-deployment
          labels:
            app: nginx
        spec:
          replicas: 3
          selector:
            matchLabels:
              app: nginx
          template:
            metadata:
              labels:
                app: nginx
            spec:
              containers:
              - name: nginx
                image: nginx
                ports:
                - containerPort: 80
      2. Use the create command to apply the manifest:

        kubectl create -f nginx-deployment.yaml
        
      3. Create a Service for the Deployment:

        nginx-service.yaml
        apiVersion: v1
        kind: Service
        metadata:
          name: nginx-service
          annotations:
            service.beta.kubernetes.io/linode-loadbalancer-throttle: "4"
          labels:
            app: nginx
        spec:
          type: LoadBalancer
          ports:
          - name: http
            port: 80
            protocol: TCP
            targetPort: 80
          selector:
            app: nginx
          sessionAffinity: None

        The above Service manifest includes a few key concepts.

        • The spec.type of LoadBalancer tells the Linode CCM to create a Linode NodeBalancer, which gives the Deployment it services a public-facing IP address for accessing the NGINX Pods.
        • Additional information is passed to the CCM in the form of metadata annotations (service.beta.kubernetes.io/linode-loadbalancer-throttle in the example above), which are discussed in the next section.
      4. Use the create command to create the Service, and in turn, the NodeBalancer:

        kubectl create -f nginx-service.yaml
        

      You can log in to the Linode Cloud Manager to view your newly created NodeBalancer.
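
      You can also confirm the NodeBalancer's public IP address from the command line by inspecting the Service; the EXTERNAL-IP column is populated once the NodeBalancer has been provisioned:

      kubectl get service nginx-service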

      Annotations

      There are a number of settings, called annotations, that you can use to further customize the functionality of your NodeBalancer. Each annotation should be included in the annotations section of the Service manifest file’s metadata, and all of the annotations are prefixed with service.beta.kubernetes.io/linode-loadbalancer-.

      Each annotation suffix is listed below with its accepted values, default value, and description:

      • throttle: Values 0-20 (0 disables the throttle). Default: 20. Client Connection Throttle; limits the number of new connections per second from the same client IP.
      • protocol: Values tcp, http, https. Default: tcp. Specifies the protocol for the NodeBalancer.
      • tls: Example value: [ { "tls-secret-name": "prod-app-tls", "port": 443 } ]. Default: none. A JSON array (formatted as a string) that specifies which ports use TLS and their corresponding Secrets. The Secret type should be kubernetes.io/tls. For more information, see the TLS Encryption section.
      • check-type: Values none, connection, http, http_body. Default: none. The type of health check to perform on Nodes to ensure that they are serving requests. connection checks for a valid TCP handshake, http checks for a 2xx or 3xx response code, and http_body checks for a certain string within the response body of the health check URL.
      • check-path: Value: string. Default: none. The URL path that the NodeBalancer uses to check the health of the back-end Nodes.
      • check-body: Value: string. Default: none. The text that must be present in the body of the page used for health checks. For use with a check-type of http_body.
      • check-interval: Value: integer. Default: none. The duration, in seconds, between health checks.
      • check-timeout: Value: integer between 1 and 30. Default: none. Duration, in seconds, to wait for a health check to succeed before it is considered a failure.
      • check-attempts: Value: integer between 1 and 30. Default: none. Number of health checks to perform before removing a back-end Node from service.
      • check-passive: Value: boolean. Default: false. When true, 5xx status codes cause the health check to fail.

      To learn more about checks, please see our reference guide to NodeBalancer health checks.
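
      As an illustration, the following sketch combines several of these annotations in a single Service manifest. The health check path and values are placeholders rather than recommendations:

      apiVersion: v1
      kind: Service
      metadata:
        name: nginx-service
        annotations:
          service.beta.kubernetes.io/linode-loadbalancer-throttle: "4"
          service.beta.kubernetes.io/linode-loadbalancer-protocol: "http"
          service.beta.kubernetes.io/linode-loadbalancer-check-type: "http"
          service.beta.kubernetes.io/linode-loadbalancer-check-path: "/healthz"
          service.beta.kubernetes.io/linode-loadbalancer-check-interval: "10"
          service.beta.kubernetes.io/linode-loadbalancer-check-attempts: "3"
        labels:
          app: nginx
      spec:
        type: LoadBalancer
        ports:
        - name: http
          port: 80
          protocol: TCP
          targetPort: 80
        selector:
          app: nginx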

      TLS Encryption

      This section will describe how to set up TLS termination for a Service so that the Service can be accessed over https.

      Generating a TLS type Secret

      Kubernetes allows you to store secret information in a Secret object for use within your cluster. This is useful for storing things like passwords and API tokens. In the context of the Linode CCM, Secrets are useful for storing Transport Layer Security (TLS) certificates and keys. The linode-loadbalancer-tls annotation requires TLS certificates and keys to be stored as Kubernetes Secrets with the type of tls. Follow the next steps to create a valid tls type Secret:

      1. Generate a TLS key and certificate using a TLS toolkit like OpenSSL. Be sure to change the CN and O values to those of your own website domain.

        openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout key.pem -out cert.crt -subj "/CN=mywebsite.com/O=mywebsite.com"
        
      2. To create the Secret, you can issue the create secret tls command, replacing $SECRET_NAME with the name you’d like to give your Secret. This name is how you will reference the Secret in your Service manifest.

        kubectl create secret tls $SECRET_NAME --key key.pem --cert cert.crt
        
      3. You can check to make sure your Secret has been successfully stored by using describe:

        kubectl describe secret $SECRET_NAME
        

        You should see output like the following:

          
        kubectl describe secret my-secret
        Name:         my-secret
        Namespace:    default
        Labels:       <none>
        Annotations:  <none>
        
        Type:  kubernetes.io/tls
        
        Data
        ====
        tls.crt:  1164 bytes
        tls.key:  1704 bytes
        
        

        If your key is not formatted correctly you’ll receive an error stating that there is no PEM formatted data within the key file.
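
        If you prefer to manage the Secret declaratively instead of with kubectl create secret tls, the same Secret can be expressed as a manifest. A minimal sketch is shown below; tls.crt and tls.key must contain the base64-encoded contents of cert.crt and key.pem:

        apiVersion: v1
        kind: Secret
        metadata:
          name: my-secret
        type: kubernetes.io/tls
        data:
          tls.crt: <base64-encoded certificate>
          tls.key: <base64-encoded private key>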

      Defining TLS within a Service

      In order to use https you’ll need to instruct the Service to use the correct port through the proper annotations. Take the following code snippet as an example:

      nginx-service.yaml
      ...
      metadata:
        annotations:
          service.beta.kubernetes.io/linode-loadbalancer-protocol: https
          service.beta.kubernetes.io/linode-loadbalancer-tls: '[ { "tls-secret-name": "my-secret",
            "port": 443 } ]'
      ...

      The linode-loadbalancer-protocol annotation identifies the https protocol. Then, the linode-loadbalancer-tls annotation defines which Secret and port to use for serving https traffic. If you have multiple Secrets and ports for different environments (testing, staging, etc.), you can define more than one secret and port pair:

      nginx-service-two-environments.yaml
      ...
          service.beta.kubernetes.io/linode-loadbalancer-tls: |
            [ { "tls-secret-name": "my-secret", "port": 443 }. {"tls-secret-name": "my-secret-staging", "port": 8443} ]'
      ...

      Next, you’ll need to set up your Service to expose the https port. The whole example might look like the following:

      nginx-service.yaml
      apiVersion: v1
      kind: Service
      metadata:
        annotations:
          service.beta.kubernetes.io/linode-loadbalancer-protocol: https
          service.beta.kubernetes.io/linode-loadbalancer-throttle: "4"
          service.beta.kubernetes.io/linode-loadbalancer-tls: '[ { "tls-secret-name": "my-secret",
            "port": 443 } ]'
        labels:
          app: nginx
        name: nginx-service
      spec:
        ports:
        - name: https
          port: 443
          protocol: TCP
          targetPort: 80
        selector:
          app: nginx
        type: LoadBalancer

      Note that here the NodeBalancer created by the Service is terminating the TLS encryption and proxying that to port 80 on the NGINX Pod. If you had a Pod that listened on port 443, you would set the targetPort to that value.
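
      For example, assuming the backing Pods actually listen on port 443, the ports section of the Service might instead look like this:

      ports:
      - name: https
        port: 443
        protocol: TCP
        targetPort: 443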

      Session Affinity

      kube-proxy will always attempt to proxy traffic to a random backend Pod. To ensure that traffic is directed to the same Pod, you can use the sessionAffinity mechanism. When set to ClientIP, sessionAffinity will ensure that all traffic from the same IP will be directed to the same Pod:

      session-affinity.yaml
      apiVersion: v1
      kind: Service
      metadata:
        name: nginx-service
        labels:
          app: nginx
      spec:
        type: LoadBalancer
        selector:
          app: nginx
        sessionAffinity: ClientIP
        sessionAffinityConfig:
          clientIP:
            timeoutSeconds: 100

      You can set the timeout for the session by using the spec.sessionAffinityConfig.clientIP.timeoutSeconds field.

      Troubleshooting

      If you are having problems with the CCM, such as the NodeBalancer not being created, you can check the CCM’s error logs. First, you’ll need to find the name of the CCM Pod in the kube-system namespace:

      kubectl get pods -n kube-system
      

      The Pod will be named ccm-linode- with five random characters at the end, like ccm-linode-jrvj2. Once you have the Pod name, you can view its logs. The --tail=n flag is used to return the last n lines, where n is the number of your choosing. The below example returns the last 100 lines:

      kubectl logs ccm-linode-jrvj2 -n kube-system --tail=100
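
      If you’d rather not look up the exact Pod name, you can often select the CCM Pod by label instead. The label below is an assumption based on the DaemonSet’s name and may differ in your deployment:

      kubectl logs -n kube-system -l app=ccm-linode --tail=100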
      

      Note

      Currently, when linode-loadbalancer-protocol is set to https, the CCM only supports https ports in the manifest’s spec. For regular http traffic, you’ll need to create an additional Service and NodeBalancer. For example, if you had the following in the Service manifest:

      unsupported-nginx-service.yaml
      ...
      spec:
        ports:
        - name: https
          port: 443
          protocol: TCP
          targetPort: 80
        - name: http
          port: 80
          protocol: TCP
          targetPort: 80
      ...

      The NodeBalancer would not be created and you would find an error similar to the following in your logs:

      ERROR: logging before flag.Parse: E0708 16:57:19.999318       1 service_controller.go:219] error processing service default/nginx-service (will retry): failed to ensure load balancer for service default/nginx-service: [400] [configs[0].protocol] The SSL private key and SSL certificate must be provided when using 'https'
      ERROR: logging before flag.Parse: I0708 16:57:19.999466       1 event.go:221] Event(v1.ObjectReference{Kind:"Service", Namespace:"default", Name:"nginx-service", UID:"5d1afc22-a1a1-11e9-ad5d-f23c919aa99b", APIVersion:"v1", ResourceVersion:"1248179", FieldPath:""}): type: 'Warning' reason: 'CreatingLoadBalancerFailed' Error creating load balancer (will retry): failed to ensure load balancer for service default/nginx-service: [400] [configs[0].protocol] The SSL private key and SSL certificate must be provided when using 'https'
      

      Removing the http port would allow you to create the NodeBalancer.

      Delete a NodeBalancer

      To delete a NodeBalancer and the Service that it represents, you can use the Service manifest file you used to create the NodeBalancer. Simply use the delete command and supply your file name with the -f flag:

      kubectl delete -f nginx-service.yaml
      

      Similarly, you can delete the Service by name:

      kubectl delete service nginx-service
      

      Updating the CCM

      The easiest way to update the Linode CCM is to edit the DaemonSet that creates the Linode CCM Pod. To do so, you can run the edit command.

      kubectl edit ds -n kube-system ccm-linode
      

      The CCM DaemonSet manifest will appear in vim. Press i to enter insert mode. Navigate to the image field under spec.template.spec.containers and change its value to the desired version tag. For instance, if you had the following image:

      image: linode/linode-cloud-controller-manager:v0.2.2
      

      You could update the image to v0.2.3 by changing the image tag:

      image: linode/linode-cloud-controller-manager:v0.2.3
      

      For a complete list of CCM version tags, visit the CCM DockerHub page.

      Caution

      The CCM DaemonSet manifest may list latest as the image version tag. This tag does not necessarily point to the most recent release. To ensure you are running the latest version, check the CCM DockerHub page and use the most recent release tag explicitly.

      Press escape to exit insert mode, then type :wq and press enter to save your changes. A new Pod will be created with the new image, and the old Pod will be deleted.
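
      Alternatively, you can update the image without opening an editor by using kubectl set image. The container name ccm-linode below is an assumption and may differ in your DaemonSet:

      kubectl set image daemonset/ccm-linode -n kube-system ccm-linode=linode/linode-cloud-controller-manager:v0.2.3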

      Next Steps

      To further take advantage of Linode products through Kubernetes, check out our guide on how to use the Linode Container Storage Interface (CSI), which allows you to create persistent volumes backed by Linode Block Storage.


      How to Deploy Istio with Kubernetes


      Updated by Linode Contributed by Linode

      Istio is a service mesh, or a network of microservices, that can handle tasks such as load balancing, service-to-service authentication, monitoring, and more. It does this by deploying sidecar proxies to intercept network data, which causes minimal disruption to your current application.

      The Istio platform provides its own API and feature set to help you run a distributed microservice architecture. You can deploy Istio with few to no code changes to your applications, allowing you to harness its power without disrupting your development cycle. In conjunction with Kubernetes, Istio provides you with insights into your cluster, leading to more control over your applications.

      In this guide you will complete the following tasks:

      • Create a Kubernetes cluster using the Linode k8s-alpha CLI.
      • Install Helm, Tiller, and Istio on the cluster.
      • Deploy the Bookinfo sample application with automatic Envoy sidecar injection enabled.
      • Expose the application with an Istio Gateway and apply default destination rules.
      • Visualize your cluster’s data with Grafana.

      Caution

      This guide’s example instructions will create several billable resources on your Linode account. If you do not want to keep using the example cluster that you create, be sure to delete it when you have finished the guide.

      If you remove the resources afterward, you will only be billed for the hour(s) that the resources were present on your account. Consult the Billing and Payments guide for detailed information about how hourly billing works and for a table of plan pricing.

      Before You Begin

      Familiarize yourself with Kubernetes using our series A Beginner’s Guide to Kubernetes and Advantages of Using Kubernetes.

      Create Your Kubernetes Cluster

      There are many ways to create a Kubernetes cluster. This guide will use the Linode k8s-alpha CLI.

      1. To set up the Linode k8s-alpha CLI, see the How to Deploy Kubernetes on Linode with the k8s-alpha CLI guide and stop before the “Create a Cluster” section.

      2. Now that your Linode k8s-alpha CLI is set up, you are ready to create your Kubernetes cluster. You will need three worker nodes and one master node for this guide. Create your cluster using the following command:

        linode-cli k8s-alpha create istio-cluster --node-type g6-standard-2 --nodes 3 --master-type g6-standard-2 --region us-east --ssh-public-key $HOME/.ssh/id_rsa.pub
        
      3. After the cluster is created you should see output with a similar success message:

          
        Apply complete! Resources: 5 added, 0 changed, 0 destroyed.
        Switched to context "[email protected]".
        Your cluster has been created and your kubectl context updated.
        
        Try the following command:
        kubectl get pods --all-namespaces
        
        Come hang out with us in #linode on the Kubernetes Slack! http://slack.k8s.io/
        
        
      4. If you visit the Linode Cloud Manager, you will see your newly created cluster nodes on the Linodes listing page.

      Install Helm and Tiller

      Follow the instructions in the How to Install Apps on Kubernetes with Helm guide to install Helm and Tiller on your cluster. Stop before the section on “Using Helm Charts to Install Apps”.

      Install Istio

      • For Linux or macOS users, use curl to pull the Istio project files. Even though you will use Helm charts to deploy Istio to your cluster, pulling the Istio project files will give you access to the sample Bookinfo application that comes bundled with this installation.

        curl -L https://git.io/getLatestIstio | ISTIO_VERSION=1.3.3 sh -
        
      • If you are using Windows, you will need to go to Istio’s Github repo to find the download. There you will find the latest releases for Windows, Linux, and macOS.

      Note

      Issuing the curl command will create a new directory, istio-1.3.3, in your current working directory. Ensure you move into the directory where you’d like to store your Istio project files before issuing the curl command.
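
      The download also includes the istioctl client in its bin directory. If you plan to run istioctl commands (for example, for manual sidecar injection later in this guide), you can add it to your PATH from the directory where you extracted Istio:

      export PATH="$PWD/istio-1.3.3/bin:$PATH"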

      Install Helm Charts

      1. Add the Istio Helm repo:

        helm repo add istio.io https://storage.googleapis.com/istio-release/releases/1.3.2/charts/
        
      2. Update the helm repo listing:

        helm repo update
        
      3. Verify that you have the repo:

        helm repo list | grep istio.io
        

        The output should be similar to the following:

          
        istio.io	https://storage.googleapis.com/istio-release/releases/1.3.2/charts/
            
        
      4. Install Istio’s Custom Resource Definitions (CRDs) with the helm chart. This command also creates a namespace called istio-system, which you will continue to use for the remainder of this guide.

        helm install --name istio-init --namespace istio-system istio.io/istio-init
        
          
        NAME:   istio-init
        LAST DEPLOYED: Fri Oct 18 10:24:24 2019
        NAMESPACE: istio-system
        STATUS: DEPLOYED
        
        RESOURCES:
        ==> v1/ClusterRole
        NAME                     AGE
        istio-init-istio-system  0s
        
        ==> v1/ClusterRoleBinding
        NAME                                        AGE
        istio-init-admin-role-binding-istio-system  0s
        
        ==> v1/ConfigMap
        NAME          DATA  AGE
        istio-crd-10  1     0s
        istio-crd-11  1     0s
        istio-crd-12  1     0s
        
        ==> v1/Job
        NAME                     COMPLETIONS  DURATION  AGE
        istio-init-crd-10-1.3.2  0/1          0s        0s
        istio-init-crd-11-1.3.2  0/1          0s        0s
        istio-init-crd-12-1.3.2  0/1          0s        0s
        
        ==> v1/Pod(related)
        NAME                           READY  STATUS             RESTARTS  AGE
        istio-init-crd-10-1.3.2-d4gdf  0/1    ContainerCreating  0         0s
        istio-init-crd-11-1.3.2-h8l58  0/1    ContainerCreating  0         0s
        istio-init-crd-12-1.3.2-v9777  0/1    ContainerCreating  0         0s
        
        ==> v1/ServiceAccount
        NAME                        SECRETS  AGE
        istio-init-service-account  1        0s
        
        
      5. Verify that all CRDs were successfully installed:

        kubectl get crds | grep 'istio.io' | wc -l
        

        You should see the following output:

          
        23
        
        

        If the number is less, you may need to wait a few moments for the resources to finish being created.

      6. Install the Helm chart for Istio. There are many installation options available for Istio. For this guide, the command enables Grafana, which you will use later to visualize your cluster’s data.

        helm install --name istio --namespace istio-system istio.io/istio --set grafana.enabled=true
        

        Full output of the Helm chart Istio installation

          
        NAME:   istio
        LAST DEPLOYED: Fri Oct 18 10:28:40 2019
        NAMESPACE: istio-system
        STATUS: DEPLOYED
        
        RESOURCES:
        ==> v1/ClusterRole
        NAME                                     AGE
        istio-citadel-istio-system               43s
        istio-galley-istio-system                43s
        istio-grafana-post-install-istio-system  43s
        istio-mixer-istio-system                 43s
        istio-pilot-istio-system                 43s
        istio-reader                             43s
        istio-sidecar-injector-istio-system      43s
        prometheus-istio-system                  43s
        
        ==> v1/ClusterRoleBinding
        NAME                                                    AGE
        istio-citadel-istio-system                              43s
        istio-galley-admin-role-binding-istio-system            43s
        istio-grafana-post-install-role-binding-istio-system    43s
        istio-mixer-admin-role-binding-istio-system             43s
        istio-multi                                             43s
        istio-pilot-istio-system                                43s
        istio-sidecar-injector-admin-role-binding-istio-system  43s
        prometheus-istio-system                                 43s
        
        ==> v1/ConfigMap
        NAME                                                                DATA  AGE
        istio                                                               2     43s
        istio-galley-configuration                                          1     44s
        istio-grafana                                                       2     43s
        istio-grafana-configuration-dashboards-citadel-dashboard            1     44s
        istio-grafana-configuration-dashboards-galley-dashboard             1     43s
        istio-grafana-configuration-dashboards-istio-mesh-dashboard         1     44s
        istio-grafana-configuration-dashboards-istio-performance-dashboard  1     43s
        istio-grafana-configuration-dashboards-istio-service-dashboard      1     44s
        istio-grafana-configuration-dashboards-istio-workload-dashboard     1     44s
        istio-grafana-configuration-dashboards-mixer-dashboard              1     44s
        istio-grafana-configuration-dashboards-pilot-dashboard              1     44s
        istio-grafana-custom-resources                                      2     44s
        istio-security-custom-resources                                     2     43s
        istio-sidecar-injector                                              2     43s
        prometheus                                                          1     43s
        
        ==> v1/Deployment
        NAME                    READY  UP-TO-DATE  AVAILABLE  AGE
        grafana                 0/1    1           0          42s
        istio-citadel           1/1    1           1          42s
        istio-galley            0/1    1           0          42s
        istio-ingressgateway    0/1    1           0          42s
        istio-pilot             0/1    1           0          42s
        istio-policy            0/1    1           0          42s
        istio-sidecar-injector  0/1    1           0          42s
        istio-telemetry         1/1    1           1          42s
        prometheus              0/1    1           0          42s
        
        ==> v1/Pod(related)
        NAME                                     READY  STATUS             RESTARTS  AGE
        grafana-575c7c4784-ffq79                 0/1    ContainerCreating  0         42s
        istio-citadel-746b4cc66c-2zq2d           1/1    Running            0         42s
        istio-galley-668765c7dc-r7w49            0/1    ContainerCreating  0         42s
        istio-ingressgateway-76ff5cf54b-n5xzl    0/1    Running            0         42s
        istio-pilot-7b6f4b4498-pfcm5             0/2    ContainerCreating  0         42s
        istio-policy-8449665784-xzn7m            0/2    ContainerCreating  0         42s
        istio-sidecar-injector-7488c45bcb-mzfgz  0/1    Running            0         42s
        istio-telemetry-56595ccd89-qxtb7         2/2    Running            1         42s
        prometheus-5679cb4dcd-8fsf4              0/1    ContainerCreating  0         42s
        
        ==> v1/Role
        NAME                      AGE
        istio-ingressgateway-sds  43s
        
        ==> v1/RoleBinding
        NAME                      AGE
        istio-ingressgateway-sds  43s
        
        ==> v1/Service
        NAME                    TYPE          CLUSTER-IP      EXTERNAL-IP      PORT(S)                                                                                                                                     AGE
        grafana                 ClusterIP     10.111.223.85              3000/TCP                                                                                                                                    43s
        istio-citadel           ClusterIP     10.96.57.68                8060/TCP,15014/TCP                                                                                                                          42s
        istio-galley            ClusterIP     10.111.114.219             443/TCP,15014/TCP,9901/TCP                                                                                                                  43s
        istio-ingressgateway    LoadBalancer  10.104.28.12    104.237.148.149  15020:31189/TCP,80:31380/TCP,443:31390/TCP,31400:31400/TCP,15029:30450/TCP,15030:32554/TCP,15031:30659/TCP,15032:32716/TCP,15443:32438/TCP  43s
        istio-pilot             ClusterIP     10.97.46.215               15010/TCP,15011/TCP,8080/TCP,15014/TCP                                                                                                      42s
        istio-policy            ClusterIP     10.104.45.158              9091/TCP,15004/TCP,15014/TCP                                                                                                                42s
        istio-sidecar-injector  ClusterIP     10.110.88.188              443/TCP,15014/TCP                                                                                                                           42s
        istio-telemetry         ClusterIP     10.103.18.40               9091/TCP,15004/TCP,15014/TCP,42422/TCP                                                                                                      42s
        prometheus              ClusterIP     10.105.19.61               9090/TCP                                                                                                                                    42s
        
        ==> v1/ServiceAccount
        NAME                                    SECRETS  AGE
        istio-citadel-service-account           1        43s
        istio-galley-service-account            1        43s
        istio-grafana-post-install-account      1        43s
        istio-ingressgateway-service-account    1        43s
        istio-mixer-service-account             1        43s
        istio-multi                             1        43s
        istio-pilot-service-account             1        43s
        istio-security-post-install-account     1        43s
        istio-sidecar-injector-service-account  1        43s
        prometheus                              1        43s
        
        ==> v1alpha2/attributemanifest
        NAME        AGE
        istioproxy  41s
        kubernetes  41s
        
        ==> v1alpha2/handler
        NAME           AGE
        kubernetesenv  41s
        prometheus     41s
        
        ==> v1alpha2/instance
        NAME                  AGE
        attributes            41s
        requestcount          41s
        requestduration       41s
        requestsize           41s
        responsesize          41s
        tcpbytereceived       41s
        tcpbytesent           41s
        tcpconnectionsclosed  41s
        tcpconnectionsopened  41s
        
        ==> v1alpha2/rule
        NAME                     AGE
        kubeattrgenrulerule      41s
        promhttp                 41s
        promtcp                  41s
        promtcpconnectionclosed  41s
        promtcpconnectionopen    41s
        tcpkubeattrgenrulerule   41s
        
        ==> v1alpha3/DestinationRule
        NAME             AGE
        istio-policy     42s
        istio-telemetry  42s
        
        ==> v1beta1/ClusterRole
        NAME                                      AGE
        istio-security-post-install-istio-system  43s
        
        ==> v1beta1/ClusterRoleBinding
        NAME                                                   AGE
        istio-security-post-install-role-binding-istio-system  43s
        
        ==> v1beta1/MutatingWebhookConfiguration
        NAME                    AGE
        istio-sidecar-injector  41s
        
        ==> v1beta1/PodDisruptionBudget
        NAME                    MIN AVAILABLE  MAX UNAVAILABLE  ALLOWED DISRUPTIONS  AGE
        istio-galley            1              N/A              0                    44s
        istio-ingressgateway    1              N/A              0                    44s
        istio-pilot             1              N/A              0                    44s
        istio-policy            1              N/A              0                    44s
        istio-sidecar-injector  1              N/A              0                    44s
        istio-telemetry         1              N/A              0                    44s
        
        ==> v2beta1/HorizontalPodAutoscaler
        NAME                  REFERENCE                        TARGETS        MINPODS  MAXPODS  REPLICAS  AGE
        istio-ingressgateway  Deployment/istio-ingressgateway  /80%  1        5        1         42s
        istio-pilot           Deployment/istio-pilot           /80%  1        5        1         41s
        istio-policy          Deployment/istio-policy          /80%  1        5        1         42s
        istio-telemetry       Deployment/istio-telemetry       /80%  1        5        1         42s
        
        
        NOTES:
        Thank you for installing Istio.
        
        Your release is named Istio.
        
        To get started running application with Istio, execute the following steps:
        1. Label namespace that application object will be deployed to by the following command (take default namespace as an example)
        
        $ kubectl label namespace default istio-injection=enabled
        $ kubectl get namespace -L istio-injection
        
        2. Deploy your applications
        
        $ kubectl apply -f <your-application>.yaml
        
        For more information on running Istio, visit:
        https://istio.io/
        
        
        
      7. Verify that the Istio services and Grafana are running:

        kubectl get svc -n istio-system
        

        The output should be similar to the following:

          
        NAME                     TYPE           CLUSTER-IP       EXTERNAL-IP    PORT(S)                                                                                                                                      AGE
        grafana                  ClusterIP      10.111.81.20              3000/TCP                                                                                                                                     4m6s
        istio-citadel            ClusterIP      10.100.103.171            8060/TCP,15014/TCP                                                                                                                           4m6s
        istio-galley             ClusterIP      10.104.173.105            443/TCP,15014/TCP,9901/TCP                                                                                                                   4m7s
        istio-ingressgateway     LoadBalancer   10.97.218.128    23.92.23.198   15020:30376/TCP,80:31380/TCP,443:31390/TCP,31400:31400/TCP,15029:31358/TCP,15030:30826/TCP,15031:30535/TCP,15032:31728/TCP,15443:31970/TCP   4m6s
        istio-pilot              ClusterIP      10.108.36.63              15010/TCP,15011/TCP,8080/TCP,15014/TCP                                                                                                       4m6s
        istio-policy             ClusterIP      10.111.111.45             9091/TCP,15004/TCP,15014/TCP                                                                                                                 4m6s
        istio-sidecar-injector   ClusterIP      10.96.23.143              443/TCP,15014/TCP                                                                                                                            4m5s
        istio-telemetry          ClusterIP      10.103.224.18             9091/TCP,15004/TCP,15014/TCP,42422/TCP                                                                                                       4m6s
        prometheus               ClusterIP      10.96.246.56              9090/TCP                                                                                                                                     4m6s
        
        
      8. You can also see the pods that are running by using this command:

        kubectl get pods -n istio-system
        

        The output will look similar to this:

          
        NAME                                      READY   STATUS      RESTARTS   AGE
        grafana-575c7c4784-v2vj2                  1/1     Running     0          4m54s
        istio-citadel-746b4cc66c-jnjx9            1/1     Running     0          4m53s
        istio-galley-668765c7dc-vt88j             1/1     Running     0          4m54s
        istio-ingressgateway-76ff5cf54b-dmksf     1/1     Running     0          4m54s
        istio-init-crd-10-1.3.2-t4sqg             0/1     Completed   0          11m
        istio-init-crd-11-1.3.2-glr72             0/1     Completed   0          11m
        istio-init-crd-12-1.3.2-82gn4             0/1     Completed   0          11m
        istio-pilot-7b6f4b4498-vtb8s              2/2     Running     0          4m53s
        istio-policy-8449665784-8hjsw             2/2     Running     4          4m54s
        istio-sidecar-injector-7488c45bcb-b4qz4   1/1     Running     0          4m53s
        istio-telemetry-56595ccd89-jcc9s          2/2     Running     5          4m54s
        prometheus-5679cb4dcd-pbg6m               1/1     Running     0          4m53s
        
        
      9. Before moving on, be sure that all pods are in the Running or Completed status.

        Note

        If you need to troubleshoot, you can check a specific pod by using kubectl, remembering that you set the namespace to istio-system:

        kubectl describe pods pod_name -n pod_namespace
        

        And check the logs by using:

        kubectl logs pod_name -n pod_namespace
        

      Set up Envoy Proxies

      1. Istio’s service mesh runs by employing sidecar proxies. You will enable them by injecting them into your application’s Pods. The following command labels the default namespace, which is where you will deploy the Bookinfo application, so that automatic sidecar injection is enabled:

        kubectl label namespace default istio-injection=enabled
        

        Note

        This deployment uses automatic sidecar injection. Automatic injection can be disabled and manual injection enabled during installation via istioctl. If you disabled automatic injection during installation, use the following command to modify the bookinfo.yaml file before deploying the application:

        kubectl apply -f <(istioctl kube-inject -f ~/istio-1.3.3/samples/bookinfo/platform/kube/bookinfo.yaml)
        
      2. Verify that the ISTIO-INJECTION was enabled for the default namespace:

        kubectl get namespace -L istio-injection
        

        You will get a similar output:

          
        NAME           STATUS   AGE    ISTIO-INJECTION
        default        Active   101m   enabled
        istio-system   Active   37m
        kube-public    Active   101m
        kube-system    Active   101m
        
        

      Install the Istio Bookinfo App

      The Bookinfo app is a sample application that comes packaged with Istio. It features four microservices written in four different languages that are all separate from Istio itself. The application is a simple single-page website that displays a “book store” catalog page with one book, its details, and some reviews. The microservices are:

      • productpage is written in Python and calls details and reviews to populate the page.
      • details is written in Ruby and contains the book information.
      • reviews is written in Java, contains book reviews, and calls ratings. There are three versions of this microservice in the application; a different version is called each time the page is refreshed.
      • ratings is written in Node.js and contains book ratings.
      1. Navigate to the directory where you installed Istio.

      2. The bookinfo.yaml file is the application manifest. It specifies all the service and deployment objects for the application. Here is just the productpage section of this file; feel free to browse the entire file:

        ~/istio-1.3.3/samples/bookinfo/platform/kube/bookinfo.yaml
        ...
        
        apiVersion: v1
        kind: Service
        metadata:
          name: productpage
          labels:
            app: productpage
            service: productpage
        spec:
          ports:
          - port: 9080
            name: http
          selector:
            app: productpage
        ---
        apiVersion: v1
        kind: ServiceAccount
        metadata:
          name: bookinfo-productpage
        ---
        apiVersion: apps/v1
        kind: Deployment
        metadata:
          name: productpage-v1
          labels:
            app: productpage
            version: v1
        spec:
          replicas: 1
          selector:
            matchLabels:
              app: productpage
              version: v1
          template:
            metadata:
              labels:
                app: productpage
                version: v1
            spec:
              serviceAccountName: bookinfo-productpage
              containers:
              - name: productpage
                image: docker.io/istio/examples-bookinfo-productpage-v1:1.15.0
                imagePullPolicy: IfNotPresent
                ports:
                - containerPort: 9080
        ---
      3. Start the Bookinfo application with the following command:

        kubectl apply -f ~/istio-1.3.3/samples/bookinfo/platform/kube/bookinfo.yaml
        

        The following output results:

          
        service/details created
        serviceaccount/bookinfo-details created
        deployment.apps/details-v1 created
        service/ratings created
        serviceaccount/bookinfo-ratings created
        deployment.apps/ratings-v1 created
        service/reviews created
        serviceaccount/bookinfo-reviews created
        deployment.apps/reviews-v1 created
        deployment.apps/reviews-v2 created
        deployment.apps/reviews-v3 created
        service/productpage created
        serviceaccount/bookinfo-productpage created
        deployment.apps/productpage-v1 created
        
        
      4. Check that all the services are up and running:

        kubectl get services
        

        The output will look similar to the following:

          
        NAME          TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
        details       ClusterIP   10.97.188.175   <none>        9080/TCP   3m
        kubernetes    ClusterIP   10.96.0.1       <none>        443/TCP    154m
        productpage   ClusterIP   10.110.184.42   <none>        9080/TCP   2m37s
        ratings       ClusterIP   10.102.206.99   <none>        9080/TCP   2m59s
        reviews       ClusterIP   10.106.21.117   <none>        9080/TCP   2m59s
        
        
      5. Check that the pods are all up:

        kubectl get pods
        

        The expected output should look similar, with all pods running:

          
        NAME                              READY   STATUS    RESTARTS   AGE
        details-v1-68fbb76fc-qfpbd        2/2     Running   0          4m48s
        productpage-v1-6c6c87ffff-th52x   2/2     Running   0          4m15s
        ratings-v1-7bdfd65ccc-z8grs       2/2     Running   0          4m48s
        reviews-v1-5c5b7b9f8d-6xljj       2/2     Running   0          4m41s
        reviews-v2-569796655b-x2n4v       2/2     Running   0          4m30s
        reviews-v3-844bc59d88-pwl6b       2/2     Running   0          4m30s
        
        

        Note

        If you do not see all pods running right away, you may need to wait a few moments for them to complete the initialization process.

      6. Check that the Bookinfo application is running. This command runs curl from within the ratings Pod against the productpage service and extracts the title tag from the /productpage response:

        kubectl exec -it $(kubectl get pod -l app=ratings -o jsonpath='{.items[0].metadata.name}') -c ratings -- curl productpage:9080/productpage | grep -o "<title>.*</title>"
        

        The expected output will look like this:

          
        <title>Simple Bookstore App</title>
        
        

      Open the Istio Gateway

      When checking the services in the previous section, you may have noticed none had external IPs. This is because Kubernetes services are private by default. You will need to open a gateway in order to access the app from the web browser. To do this you will use an Istio Gateway.

      Here are the contents of the bookinfo-gateway.yaml file that you will use to open the gateway:

      ~/istio-1.3.3/samples/bookinfo/networking/bookinfo-gateway.yaml
      apiVersion: networking.istio.io/v1alpha3
      kind: Gateway
      metadata:
        name: bookinfo-gateway
      spec:
        selector:
          istio: ingressgateway # use istio default controller
        servers:
        - port:
            number: 80
            name: http
            protocol: HTTP
          hosts:
          - "*"
      ---
      apiVersion: networking.istio.io/v1alpha3
      kind: VirtualService
      metadata:
        name: bookinfo
      spec:
        hosts:
        - "*"
        gateways:
        - bookinfo-gateway
        http:
        - match:
          - uri:
              exact: /productpage
          - uri:
              prefix: /static
          - uri:
              exact: /login
          - uri:
              exact: /logout
          - uri:
              prefix: /api/v1/products
          route:
          - destination:
              host: productpage
              port:
                number: 9080
      • The Gateway section sets up the server and specifies the port and protocol that will be opened through the gateway. Note that the name must match Istio’s named service ports standardization scheme.
      • In the VirtualService section, the http field defines how HTTP traffic is matched, and the destination field specifies the service to which matching requests are routed.
      1. Apply the ingress gateway with the following command:

        kubectl apply -f ~/istio-1.3.3/samples/bookinfo/networking/bookinfo-gateway.yaml
        

        You should see the following output:

          
        gateway.networking.istio.io/bookinfo-gateway created
        virtualservice.networking.istio.io/bookinfo created
        
        
      2. Confirm that the gateway is open:

        kubectl get gateway
        

        You should see the following output:

          
        NAME               AGE
        bookinfo-gateway   1m
        
        
      3. Retrieve your ingress gateway’s external IP address. This is the value listed under EXTERNAL-IP in the command’s output.

        kubectl get svc istio-ingressgateway -n istio-system
        

        The output should resemble the following. In the example, the external IP is 192.0.2.0. You will need this IP address in the next section to access your Bookinfo app.

          
        NAME                   TYPE           CLUSTER-IP      EXTERNAL-IP    PORT(S)                                                                                                                                      AGE
        istio-ingressgateway   LoadBalancer   10.97.218.128   192.0.2.0   15020:30376/TCP,80:31380/TCP,443:31390/TCP,31400:31400/TCP,15029:31358/TCP,15030:30826/TCP,15031:30535/TCP,15032:31728/TCP,15443:31970/TCP   21h
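
        If you prefer to extract just the IP address, you can also use a jsonpath query:

        kubectl get svc istio-ingressgateway -n istio-system -o jsonpath='{.status.loadBalancer.ingress[0].ip}'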
        
        

      Apply Default Destination Rules

      Destination rules specify named service subsets and give them routing rules to control traffic to the different instances of your services.
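
      For example, an abridged sketch of the kind of rule applied in the next step is shown below. It defines version subsets for the reviews service; the sample file you apply already contains rules like this, so the snippet is for illustration only:

      apiVersion: networking.istio.io/v1alpha3
      kind: DestinationRule
      metadata:
        name: reviews
      spec:
        host: reviews
        subsets:
        - name: v1
          labels:
            version: v1
        - name: v2
          labels:
            version: v2
        - name: v3
          labels:
            version: v3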

      1. Apply destination rules to your cluster:

        kubectl apply -f ~/istio-1.3.3/samples/bookinfo/networking/destination-rule-all.yaml
        

        The output will appear as follows:

          
        destinationrule.networking.istio.io/productpage created
        destinationrule.networking.istio.io/reviews created
        destinationrule.networking.istio.io/ratings created
        destinationrule.networking.istio.io/details created
        
        
      2. To view all the applied rules issue the following command:

        kubectl get destinationrules -o yaml
        

      Visualizations with Grafana

      1. In a new terminal window that you can leave running, open the port for Grafana:

        kubectl -n istio-system port-forward $(kubectl -n istio-system get pod -l app=grafana -o jsonpath='{.items[0].metadata.name}') 3000:3000 &
        
      2. Create an SSH tunnel from your local machine to your Linode so that you can access the localhost of your Linode, entering your credentials as prompted:

        ssh -L 3000:localhost:3000 <username>@<ipaddress>
        

        Once this is completed, visit the following URL in your web browser to access your Mesh Dashboard:

        http://localhost:3000/dashboard/db/istio-mesh-dashboard
        

        Note

        In this example, you will use an SSH tunnel to access your cluster’s running Grafana service. You could set up an ingress gateway for your Grafana service in the same way you did for the Bookinfo app. Those steps are not covered in this guide.

      3. You will see the Mesh Dashboard. There will be no data available yet.

        Istio Dashboard

      4. Send data by visiting a product page, replacing 192.0.2.0 with the value for your ingress gateway’s external IP:

        http://192.0.2.0/productpage
        

        Refresh the page a few times to generate some traffic.

      5. Return to the dashboard and refresh the page to see the data.

        Istio Dashboard Refreshed

        The Mesh Dashboard displays a general overview of Istio service mesh, the services that are running, and their workloads.

      6. To view a specific service or workload you can click on them from the HTTP/GRPC Workloads list. Under the Service column, click productpage.default.svc.cluster.local from the HTTP/GRPC Workloads list.

        Istio Service List Mesh Dashboard

      7. This will open a Service dashboard specific to this service.

        Istio Product Service Detail Dashboard

      8. Feel free to explore the other Grafana dashboards for more metrics and data. You can access all the dashboards from the dropdown menu at the top left of the screen.

      Removing Clusters and Deployments

      If at any time you need to remove the resources created while following this guide, enter the following commands, confirming any prompts that appear:

      helm delete istio-init
      helm delete istio
      linode-cli k8s-alpha delete istio-cluster
      


      Deploy and Manage a Cluster with Linode Kubernetes Engine – A Tutorial


      Updated by Linode Contributed by Linode

      Note

      Linode Kubernetes Engine (LKE) is currently in Private Beta, and you may not have access to LKE through the Cloud Manager or other tools. To request access to the Private Beta, sign up here. Beta access awards you $100/month in free credits for the duration of the beta, which is automatically applied to your account when an LKE cluster is in use. Additionally, you will have access to the Linode Green Light community, a new program connecting beta users with our product and engineering teams.

      Additionally, because LKE is in Beta, there may be breaking changes to how you access and manage LKE. This guide will be updated to reflect these changes if and when they occur.

      What is the Linode Kubernetes Engine (LKE)

      The Linode Kubernetes Engine (LKE) is a fully-managed container orchestration engine for deploying and managing containerized applications and workloads. LKE combines Linode’s ease of use and simple pricing with the infrastructure efficiency of Kubernetes. When you deploy an LKE cluster, you receive a Kubernetes Master at no additional cost; you only pay for the Linodes (worker nodes), NodeBalancers (load balancers), and Block Storage Volumes. Your LKE cluster’s Master node runs the Kubernetes control plane processes – including the API, scheduler, and resource controllers.

      Additional LKE features

      • etcd Backups: A snapshot of your cluster’s metadata is backed up continuously, so your cluster is automatically restored in the event of a failure.
      • High Availability: All of your control plane components are monitored and will automatically recover if they fail.

      In this Guide

      In this guide you will learn:

      Caution

      This guide’s example instructions will create several billable resources on your Linode account. If you do not want to keep using the example cluster that you create, be sure to remove it when you have finished the guide.

      If you remove the resources afterward, you will only be billed for the hour(s) that the resources were present on your account.

      Before You Begin

      Enable Network Helper

      In order to use the Linode Kubernetes Engine, you will need to have Network Helper enabled globally on your account. Network Helper is a Linode-provided service that automatically sets a static network configuration for your Linode when it boots. To enable this global account setting, follow these instructions.

      If you don’t want to use Network Helper on some Linodes that are not part of your LKE clusters, the service can also be disabled on a per-Linode basis; see instructions here.

      Note

      If you have already deployed an LKE cluster and did not enable Network Helper, you can add a new node pool with the same type, size, and count as your initial node pool. Once your new node pool is ready, you can then delete the original node pool.

      Install kubectl

      You will need to install the kubectl client to your computer before proceeding. Follow the steps corresponding to your computer’s operating system.

      macOS:

      Install via Homebrew:

      brew install kubernetes-cli
      

      If you don’t have Homebrew installed, visit the Homebrew home page for instructions. Alternatively, you can manually install the binary; visit the Kubernetes documentation for instructions.

      Linux:

      1. Download the latest kubectl release:

        curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
        
      2. Make the downloaded file executable:

        chmod +x ./kubectl
        
      3. Move the command into your PATH:

        sudo mv ./kubectl /usr/local/bin/kubectl
        


      Windows:

      Visit the Kubernetes documentation for a link to the most recent Windows release.
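
      Regardless of your operating system, you can confirm that the client is installed and on your PATH by checking its version:

      kubectl version --client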

      Create an LKE Cluster

      1. Log into your Linode Cloud Manager account.

        Note

        LKE is not available in the Linode Classic Manager.

      2. From the Linode dashboard, click the Create button in the top left-hand side of the screen and select Kubernetes from the dropdown menu.

        Create a Kubernetes Cluster Screen

      3. The Create a Kubernetes Cluster page will appear. Select the region where you would like your cluster to reside.

        Select your cluster's region

      4. In the Add Node Pools section, select the hardware resources for the Linode worker node(s) that make up your LKE cluster. If you decide that you need more or fewer hardware resources after you deploy your cluster, you can always edit your Node Pool.

        Select your cluster's resources

      5. Under Number of Linodes, input the number of Linode worker nodes you would like to add to your Node Pool. These worker nodes will have the hardware resources selected from the Add Node Pools section.

        Select the number of Linode worker nodes

      6. Click on the Add Node Pool button to add the pool to your cluster’s configuration. You will see a Cluster Summary appear on the right-hand side of the Cloud Manager detailing your cluster’s hardware resources and monthly cost.

        A list of pools also appears below the Add Node Pool button with quick edit Node Count fields. You can easily change the number of nodes by typing a new number in the field, or use the up and down arrows to increment or decrement the number in the field. Each row in this table also has a Remove link if you want to remove the node pool.

        Add a node pool to your Kubernetes cluster

      7. In the Cluster Label field, provide a name for your cluster. The name must be unique between all of the clusters on your account. This name will be how you identify your cluster in the Cloud Manager’s Dashboard.

        Provide a name for your cluster

      8. From the Version dropdown menu, select a Kubernetes version to deploy to your cluster.

        Select a Kubernetes version

      9. When you are satisfied with the configuration of your cluster, click the Create button on the right hand side of the screen. Your cluster’s detail page will appear where you will see your Node Pools listed. From this page, you can edit your existing Node Pools, add new Node Pools to your cluster, access your Kubeconfig file, and view an overview of your cluster’s resource details.
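
      If you would rather script cluster creation than click through the Cloud Manager, the same operation should be possible through the Linode API. Treat the example below as an illustrative sketch: the endpoint, field names, and values (the example-cluster label, us-central region, 1.16 Kubernetes version, and g6-standard-2 plan) are assumptions to verify against the current API documentation, particularly while LKE is in Beta:

      curl -H "Content-Type: application/json" \
          -H "Authorization: Bearer $LINODE_API_TOKEN" \
          -X POST -d '{
                "label": "example-cluster",
                "region": "us-central",
                "k8s_version": "1.16",
                "node_pools": [{"type": "g6-standard-2", "count": 3}]
              }' \
          https://api.linode.com/v4/lke/clusters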

      Connect to your LKE Cluster with kubectl

      After you’ve created your LKE cluster using the Cloud Manager, you can begin interacting with and managing your cluster. You connect to it using the kubectl client on your computer. To configure kubectl, you’ll download your cluster’s kubeconfig file.

      Access and Download your kubeconfig

      Anytime after your cluster is created you can download its kubeconfig. The kubeconfig is a YAML file that will allow you to use kubectl to communicate with your cluster. Here is an example kubeconfig file:

      example-cluster-kubeconfig.yaml
      apiVersion: v1
      clusters:
      - cluster:
          certificate-authority-data: LS0tLS1CRUd...
          server: https://192.0.2.0:6443
        name: kubernetes
      contexts:
      - context:
          cluster: kubernetes
          user: kubernetes-admin
        name: kubernetes-admin@kubernetes
      current-context: kubernetes-admin@kubernetes
      kind: Config
      preferences: {}
      users:
      - name: kubernetes-admin
        user:
          client-certificate-data: LS0tLS1CRUd...
          client-key-data: LS0tLS1CRUd...

      This configuration file defines your cluster, users, and contexts.

      1. To access your cluster’s kubeconfig, log into your Cloud Manager account and navigate to the Kubernetes section.

      2. From the Kubernetes listing page, click on your cluster’s more options ellipsis and select Download kubeconfig. The file will be saved to your computer’s Downloads folder.

        Download your cluster's kubeconfig

        Download and view your Kubeconfig from the cluster’s details page

        You can also download the kubeconfig from the Kubernetes cluster’s details page.

        1. When viewing the Kubernetes listing page, click on the cluster for which you’d like to download a kubeconfig file.

        2. On the cluster’s details page, under the kubeconfig section, click the Download button. The file will be saved to your Downloads folder.

          Kubernetes Cluster Download kubeconfig from Details Page

        3. To view the contents of your kubeconfig file, click on the View button. A pane will appear with the contents of your cluster’s kubeconfig file.

          View the contents of your kubeconfig file

      3. Open a terminal shell and save your kubeconfig file’s path to the $KUBECONFIG environment variable. In the example command, the kubeconfig file is located in the Downloads folder; adjust the path to match the file’s location on your computer:

        export KUBECONFIG=~/Downloads/kubeconfig.yaml
        

        Note

        It is common practice to store your kubeconfig files in the ~/.kube directory. By default, kubectl will search for a kubeconfig file named config that is located in the ~/.kube directory. You can specify other kubeconfig files by setting the $KUBECONFIG environment variable, as done in the step above.

      4. View your cluster’s nodes using kubectl.

        kubectl get nodes
        

        Note

        If your kubectl commands are not returning the resources and information you expect, then your client may be assigned to the wrong cluster context. Visit our Troubleshooting Kubernetes guide to learn how to switch cluster contexts.

        You are now ready to manage your cluster using kubectl. For more information about using kubectl, see Kubernetes’ Overview of kubectl guide.
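
      You can also retrieve the kubeconfig without the Cloud Manager. As a sketch only, assuming the LKE API exposes a kubeconfig endpoint that returns the file base64-encoded, that your cluster’s numeric ID is 1234, and that jq is installed, the file could be downloaded and decoded from the command line:

      curl -s -H "Authorization: Bearer $LINODE_API_TOKEN" \
          https://api.linode.com/v4/lke/clusters/1234/kubeconfig \
          | jq -r '.kubeconfig' | base64 --decode > ~/Downloads/kubeconfig.yaml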

      Persist the Kubeconfig Context

      If you create a new terminal window, it will not have access to the context that you specified using the previous instructions. This context information can be made persistent between new terminals by setting the KUBECONFIG environment variable in your shell’s configuration file.

      Note

      These instructions will persist the context for users of the Bash shell. They will be similar for users of other shells:

      1. Navigate to the $HOME/.kube directory:

        cd $HOME/.kube
        
      2. Create a directory called configs within $HOME/.kube. You can use this directory to store your kubeconfig files.

        mkdir configs
        
      3. Copy your kubeconfig.yaml file to the $HOME/.kube/configs directory.

        cp ~/Downloads/kubeconfig.yaml $HOME/.kube/configs/kubeconfig.yaml
        

        Note

        Alter the above line with the location of the Downloads folder on your computer.

        Optionally, you can give the copied file a different name to help distinguish it from other files in the configs directory.

      4. Open your Bash profile (e.g. ~/.bash_profile) in the text editor of your choice and add your kubeconfig file’s path to the $KUBECONFIG environment variable.

        If an export KUBECONFIG line is already present in the file, append to the end of this line as follows; if it is not present, add this line to the end of your file:

        export KUBECONFIG=$KUBECONFIG:$HOME/.kube/config:$HOME/.kube/configs/kubeconfig.yaml
        
      5. Close your terminal window and open a new window to receive the changes to the $KUBECONFIG variable.

      6. Use kubectl’s config get-contexts command to view the available cluster contexts:

        kubectl config get-contexts
        

        You should see output similar to the following:

          
        CURRENT  NAME                         CLUSTER     AUTHINFO          NAMESPACE
        *        kubernetes-admin@kubernetes  kubernetes  kubernetes-admin
        
        
      7. If your context is not already selected (denoted by an asterisk in the CURRENT column), switch to this context using the config use-context command. Supply the full name of the context, which includes the authorized user and the cluster:

        kubectl config use-context kubernetes-admin@kubernetes
        

        You should see output like the following:

          
        Switched to context "kubernetes-admin@kubernetes".
        
        
      8. You are now ready to interact with your cluster using kubectl. You can test the ability to interact with the cluster by retrieving a list of Pods in the kube-system namespace:

        kubectl get pods -n kube-system
        

      Modify a Cluster’s Node Pools

      You can use the Linode Cloud Manager to modify a cluster’s existing node pools by adding or removing nodes. You can also add or remove entire node pools from your cluster. This section will cover completing those tasks. For any other changes to your LKE cluster, you should use kubectl.

      Access your Cluster’s Details Page

      1. Click the Kubernetes link in the sidebar. The Kubernetes listing page will appear and you will see all your clusters listed.

        Kubernetes cluster listing page

      2. Click the cluster that you wish to modify. The Kubernetes cluster’s details page will appear.

        Kubernetes cluster's details page

      Edit or Remove Existing Node Pools

      1. On your cluster’s details page, click the Resize tab at the top of the page.

        Access your cluster's resize page

      2. Under the cluster’s Resize tab, you can now edit your existing node pool or remove it entirely:

        • The Node Count fields are now editable text boxes.

        • To remove a node pool, click the Remove link to the right.

        • As you make changes you will see an Updated Monthly Estimate; contrast this to the current Monthly Pricing under the Details panel on the right.

          Edit your cluster's node pool

      3. Click the Save button to save your changes; click the Clear Changes button to revert to the cluster state before you started editing; or click the Cancel button to cancel editing.
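
      Node pool changes are not limited to the Cloud Manager. As an illustrative sketch only (the endpoint path and the example cluster ID 1234 and pool ID 456 are assumptions to verify against the LKE API documentation), a pool’s node count could be updated with a request such as:

      curl -H "Content-Type: application/json" \
          -H "Authorization: Bearer $LINODE_API_TOKEN" \
          -X PUT -d '{"count": 5}' \
          https://api.linode.com/v4/lke/clusters/1234/pools/456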

      Add Node Pools

      1. On your cluster’s details page, click the Resize tab at the top of the page.

        Access your cluster's resize page

      2. Under the cluster’s Resize tab, navigate to the Add Node Pools panel. Select the type and size of Linode(s) you want to add to your new pool.

        Select a plan size for your new node pool

      3. Under Number of Linodes, input the number of Linode worker nodes you’d like to add to the pool in the text box; you can also use the arrow keys to increment or decrement this number. Click the Add Node Pool button.

        Add a new node pool to your cluster

      4. The new node pool appears in the Node Pools list which you can now edit, if desired.

        Kubernetes Cluster New Node Pool Created

      Delete a Cluster

      You can delete an entire cluster using the Linode Cloud Manager. These changes cannot be reverted once completed.

      1. On your cluster’s details page, click the Resize tab at the top of the page.

        Access your cluster's resize page

      2. Under the cluster’s Resize tab, scroll to the bottom and click on the Delete Cluster button.

        Delete your LKE cluster

      3. A confirmation pop-up will appear. Enter in your cluster’s name and click the Delete button to confirm.

        Kubernetes Delete Confirmation Dialog

      4. The Kubernetes listing page will appear and you will no longer see your deleted cluster.
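
      Cluster deletion should likewise be possible through the API. This sketch assumes a DELETE method exists for the cluster endpoint and that 1234 is your cluster’s ID; verify both before running it, because the deletion cannot be undone:

      curl -H "Authorization: Bearer $LINODE_API_TOKEN" \
          -X DELETE \
          https://api.linode.com/v4/lke/clusters/1234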

      Next Steps

      Now that you have a running LKE cluster, you can start deploying workloads to it. Refer to our other Kubernetes guides to learn more.

      More Information

      You may wish to consult the following resources for additional information on this topic. While these are provided in the hope that they will be useful, please note that we cannot vouch for the accuracy or timeliness of externally hosted materials.


      This guide is published under a CC BY-ND 4.0 license.


