      A Beginner's Guide to Kubernetes


      Updated by Linode. Contributed by Linode.

      Kubernetes, often referred to as k8s, is an open source container orchestration system that helps deploy and manage containerized applications. Developed by Google starting in 2014 and written in the Go language, Kubernetes is quickly becoming the standard way to architect horizontally-scalable applications. This guide will explain the major parts and concepts of Kubernetes.

      Containers

      Kubernetes is a container orchestration tool and, therefore, needs a container runtime installed to work. In practice, the default container runtime for Kubernetes is Docker, though other runtimes like rkt and LXD will also work. With the advent of the Container Runtime Interface (CRI), which aims to standardize the way Kubernetes interacts with containers, other options like containerd, cri-o, and Frakti have also become available. This guide assumes you have a working knowledge of containers, and the examples will all use Docker as the container runtime.

      Kubernetes API

      Kubernetes is built around a robust RESTful API. Every action taken in Kubernetes, be it inter-component communication or user command, interacts in some fashion with the Kubernetes API. The goal of the API is to help facilitate the desired state of the Kubernetes cluster. If you want X instances of your application running but have Y currently active, the API will take the required steps to get to X, whether this means creating or destroying resources. To create this desired state, you create objects, which are normally represented by YAML files called manifests, and apply them through the command line with the kubectl tool.

      kubectl

      kubectl is a command line tool used to interact with the Kubernetes cluster. It offers a host of features, including the ability to create, stop, and delete resources; describe active resources; and autoscale resources. For more information on the types of commands and resources you can use with kubectl, consult the Kubernetes kubectl documentation.
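
      For example, a few common kubectl invocations look like the following; the resource names here are placeholders:

      kubectl get nodes                              # list the servers in the cluster
      kubectl describe pod my-pod                    # show detailed state and recent events for a Pod
      kubectl scale deployment my-app --replicas=3   # manually scale a Deployment
      kubectl delete service my-service              # remove a resource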

      Kubernetes Master, Nodes, and Control Plane

      At the highest level of Kubernetes, there are two kinds of servers: Masters and Nodes. These servers can be Linodes, VMs, or physical servers. Together, these servers form a cluster.

      Nodes

      Kubernetes Nodes are worker servers that run your application. You decide how many Nodes your cluster has, and you create them. In addition to running your application, each Node runs two processes:

      • kubelet receives descriptions of the desired state of a Pod from the API server, and ensures the Pod is healthy and running on the Node.
      • kube-proxy is a network proxy that handles the UDP, TCP, and SCTP networking of each Node and provides load balancing. It is only used to connect to Services.

      Kubernetes Master

      The Kubernetes Master is normally a separate server responsible for maintaining the desired state of the cluster. It does this by telling the Nodes how many instances of your application they should run and where. The Kubernetes Master runs three processes:

      • kube-apiserver is the front end for the Kubernetes API server.
      • kube-controller-manager is a daemon that manages the Kubernetes control loop. For more on Controllers, see the Controllers section.
      • kube-scheduler watches for newly created Pods that have not yet been assigned a Node, and assigns each one a Node based on a host of requirements. For more information on kube-scheduler, consult the Kubernetes kube-scheduler documentation.

      Additionally, the Kubernetes Master runs etcd, a highly available key-value store that provides the backend database for Kubernetes.

      Together, kube-apiserver, kube-controller-manager, kube-scheduler, and etcd form what is known as the control plane. The control plane is responsible for making decisions about the cluster and pushing it toward the desired state.

      Kubernetes Objects

      In Kubernetes, there are a number of objects that are abstractions of your Kubernetes system’s desired state. These objects represent your application’s workloads, networking, and disk resources, all of which together form your application.

      Pods

      In Kubernetes, all containers exist within Pods. Pods are the smallest unit of the Kubernetes architecture, and can be viewed as a kind of wrapper for your container. Each Pod is given its own IP address with which it can interact with other Pods within the cluster.

      Usually, a Pod contains only one container, but a Pod can contain multiple containers if those containers need to share resources. If there is more than one container in a Pod, these containers can communicate with one another via localhost.
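
      As a minimal sketch of a multi-container Pod (the container names and the polling command are invented for illustration), the second container below reaches the first over localhost, since both share the Pod's network namespace:

      apiVersion: v1
      kind: Pod
      metadata:
        name: sidecar-example
      spec:
        containers:
        - name: web
          image: httpd
        - name: poller
          image: busybox
          # Polls the web container through the Pod's shared network namespace.
          command: ["sh", "-c", "while true; do wget -qO- http://localhost:80; sleep 30; done"]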

      Pods in Kubernetes are “mortal,” which means that they are created and destroyed depending on the needs of the application. For instance, you might have a web app backend that sees a spike in CPU usage. This might cause the cluster to scale up the number of backend Pods from two to ten, in which case eight new Pods would be created. Once the traffic subsides, the Pods might scale back down to two, in which case eight Pods would be destroyed.

      It is important to note that Pods are destroyed without respect to which Pod was created first. And, while each Pod has its own IP address, that IP address is only available for the lifecycle of the Pod.

      Below is an example of a Pod manifest:

      my-apache-pod.yaml
      
      apiVersion: v1
      kind: Pod
      metadata:
       name: apache-pod
       labels:
         app: web
      spec:
        containers:
        - name: apache-container
          image: httpd

      Each manifest has four necessary parts:

      • The version of the API in use
      • The kind of resource you’d like to define
      • Metadata about the resource
      • A spec, which describes the desired behavior of the resource. Though not required by all objects, a spec is necessary for most objects and controllers.

      In the case of this example, the API in use is v1, and the kind is a Pod. The metadata field is used for applying a name, labels, and annotations. Names are used to differentiate resources, while labels are used to group like resources. Labels will come into play more when defining Services and Deployments. Annotations are for attaching arbitrary data to the resource.
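
      For instance, a metadata block that exercises all three fields might look like the following; the annotation key and value are invented for illustration:

      metadata:
        name: apache-pod
        labels:
          app: web
        annotations:
          example.com/build: "2019-04-16"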

      The spec is where the desired state of the resource is defined. In this case, a Pod with a single Apache container is desired, so the containers field is supplied with a name, ‘apache-container’, and an image, the latest version of Apache. The image is pulled from Docker Hub, as that is the default container registry for Kubernetes.

      For more information on the type of fields you can supply in a Pod manifest, refer to the Kubernetes Pod API documentation.

      Now that you have the manifest, you can create the Pod using the create command:

      kubectl create -f my-apache-pod.yaml
      

      To view a list of your pods, use the get pods command:

      kubectl get pods
      

      You should see output like the following:

      NAME         READY   STATUS    RESTARTS   AGE
      apache-pod   1/1     Running   0          16s
      

      To quickly view which Node the Pod exists on, issue the get pods command with the -o=wide flag:

      kubectl get pods -o=wide
      

      To retrieve information about the Pod, issue the describe command:

      kubectl describe pod apache-pod
      

      You should see output like the following:

      ...
      Events:
      Type    Reason     Age    From                       Message
      ----    ------     ----   ----                       -------
      Normal  Scheduled  2m38s  default-scheduler          Successfully assigned default/apache-pod to mycluster-node-1
      Normal  Pulling    2m36s  kubelet, mycluster-node-1  pulling image "httpd"
      Normal  Pulled     2m23s  kubelet, mycluster-node-1  Successfully pulled image "httpd"
      Normal  Created    2m22s  kubelet, mycluster-node-1  Created container
      Normal  Started    2m22s  kubelet, mycluster-node-1  Started container
      

      To delete the Pod, issue the delete command:

      kubectl delete pod apache-pod
      

      Services

      Services group identical Pods together to provide a consistent means of accessing them. For instance, you might have three Pods that are all serving a website, and all of those Pods need to be accessible on port 80. A Service can ensure that all of the Pods are accessible at that port, and can load balance traffic between those Pods. Additionally, a Service can make your application accessible from the internet. Each Service is given an IP address and a corresponding local DNS entry. Services also exist across Nodes: if you have two replica Pods on one Node and an additional replica Pod on another Node, the Service can include all three Pods. There are four types of Service:

      • ClusterIP: Exposes the Service internally to the cluster. This is the default setting for a Service.
      • NodePort: Exposes the Service to the internet from the IP address of the Node at the specified port number. You can only use ports in the 30000-32767 range.
      • LoadBalancer: This will create a load balancer assigned to a fixed IP address in the cloud, so long as the cloud provider supports it. In the case of Linode, this is the responsibility of the Linode Cloud Controller Manager, which will create a NodeBalancer for the cluster. This is the best way to expose your cluster to the internet.
      • ExternalName: Maps the service to a DNS name by returning a CNAME record redirect. ExternalName is good for directing traffic to outside resources, such as a database that is hosted on another cloud.

      Below is an example of a Service manifest:

      my-apache-service.yaml
      
      apiVersion: v1
      kind: Service
      metadata:
        name: apache-service
        labels:
          app: web
      spec:
        type: NodePort
        ports:
        - port: 80
          targetPort: 80
          nodePort: 30020
        selector:
          app: web

      The above example Service uses the v1 API, and its kind is Service. Like the Pod example in the previous section, this manifest has a name and a label. Unlike the Pod example, this spec uses the ports field to define the port exposed by the Service (port) and the port on the Pod to which traffic is directed (targetPort). The NodePort type unlocks the nodePort field, which opens that port on each host Node and allows traffic to it. Lastly, the selector field is used to target only the Pods that have been assigned the app: web label.

      For more information on Services, visit the Kubernetes Service API documentation.

      To create the Service from the YAML file, issue the create command:

      kubectl create -f my-apache-service.yaml
      

      To view a list of running services, issue the get services command:

      kubectl get services
      

      You should see output like the following:

      NAME             TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
      apache-service   NodePort    10.99.57.13   <none>        80:30020/TCP   54s
      kubernetes       ClusterIP   10.96.0.1     <none>        443/TCP        46h
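
      Because the Service is a NodePort mapped to 30020, you could verify it from outside the cluster by requesting that port at any Node's public IP address (the address below is a placeholder). With the httpd image, this returns Apache's default "It works!" page:

      curl http://192.0.2.10:30020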
      

      To retrieve more information about your Service, issue the describe command:

      kubectl describe service apache-service
      

      To delete the Service, issue the delete command:

      kubectl delete service apache-service
      

      Volumes

      A Volume in Kubernetes is a way to share file storage between containers in a Pod. Kubernetes Volumes differ from Docker volumes because they exist inside the Pod rather than inside the container. When a container is restarted, the Volume persists. Note, however, that these Volumes are still tied to the lifecycle of the Pod, so if the Pod is destroyed the Volume will be destroyed with it.

      Linode also offers a Container Storage Interface (CSI) driver that allows the cluster to persist data on a Block Storage volume.

      Below is an example of how to create and use a Volume by creating a Pod manifest:

      my-apache-pod-with-volume.yaml
      
      apiVersion: v1
      kind: Pod
      metadata:
        name: apache-with-volume
      spec:
        volumes:
        - name: apache-storage-volume
          emptyDir: {}
      
        containers:
        - name: apache-container
          image: httpd
          volumeMounts:
          - name: apache-storage-volume
            mountPath: /data/apache-data

      A Volume has two unique aspects to its definition. In this example, the first aspect is the volumes block that defines the type of Volume you want to create, which in this case is a simple empty directory (emptyDir). The second aspect is the volumeMounts field within the container’s spec. This field is given the name of the Volume you are creating and a mount path within the container.

      There are a number of different Volume types you could create in addition to emptyDir depending on your cloud host. For more information on Volume types, visit the Kubernetes Volumes API documentation.
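
      For instance, a hostPath Volume, one of the built-in types, stores its data in a directory on the Node itself rather than in an empty directory tied to the Pod. A sketch of the volumes block above, swapped to hostPath (the path here is arbitrary):

      volumes:
      - name: apache-storage-volume
        hostPath:
          path: /var/lib/apache-data
          type: DirectoryOrCreate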

      Namespaces

      Namespaces are virtual clusters that exist within the Kubernetes cluster that help to group and organize objects. Every cluster has at least three namespaces: default, kube-system, and kube-public. When interacting with the cluster it is important to know which Namespace the object you are looking for is in, as many commands will default to only showing you what exists in the default namespace. Resources created without an explicit namespace will be added to the default namespace.

      Namespace names consist of lowercase alphanumeric characters and dashes (-).

      Here is an example of how to define a Namespace with a manifest:

      my-namespace.yaml
      
      apiVersion: v1
      kind: Namespace
      metadata:
        name: my-app

      To create the Namespace, issue the create command:

      kubectl create -f my-namespace.yaml
      

      Below is an example of a Pod with a Namespace:

      my-apache-pod-with-namespace.yaml
      
      apiVersion: v1
      kind: Pod
      metadata:
        name: apache-pod
        labels:
          app: web
        namespace: my-app
      spec:
        containers:
        - name: apache-container
          image: httpd

      To retrieve resources in a certain Namespace, use the -n flag.

      kubectl get pods -n my-app
      

      You should see a list of Pods within your namespace:

      NAME         READY   STATUS    RESTARTS   AGE
      apache-pod   1/1     Running   0          7s
      

      To view Pods in all Namespaces, use the --all-namespaces flag.

      kubectl get pods --all-namespaces
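
      To avoid passing the -n flag with every command, you can set a default Namespace on your kubectl context (the context name below is a placeholder):

      kubectl config set-context my-context --namespace=my-app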
      

      To delete a Namespace, issue the delete namespace command. Note that this will delete all resources within that Namespace:

      kubectl delete namespace my-app
      

      For more information on Namespaces, visit the Kubernetes Namespaces API documentation.

      Controllers

      A Controller is a control loop that continuously watches the Kubernetes API and tries to manage the desired state of certain aspects of the cluster. There are a number of Controllers; below is a short reference of the most popular ones you might interact with.

      ReplicaSets

      As has been mentioned, Kubernetes allows an application to scale horizontally. A ReplicaSet is one of the controllers responsible for keeping a given number of replica Pods running. If one Pod goes down in a ReplicaSet, another will be created to replace it. In this way, Kubernetes is self-healing. However, for most use cases it is recommended to use a Deployment instead of a ReplicaSet.

      Below is an example of a ReplicaSet:

      my-apache-replicaset.yaml
      
      apiVersion: apps/v1
      kind: ReplicaSet
      metadata:
        name: apache-replicaset
        labels:
          app: web
      spec:
        replicas: 5
        selector:
          matchLabels:
            app: web
        template:
          metadata:
            labels:
              app: web
          spec:
            containers:
            - name: apache-container
              image: httpd

      There are three main things to note in this ReplicaSet. The first is the apiVersion, which is apps/v1. This differs from the previous examples, which were all apiVersion: v1, because ReplicaSets do not exist in the v1 core. They instead reside in the apps group of v1. The second and third things to note are the replicas field and the selector field. The replicas field defines how many replica Pods you want to be running at any given time. The selector field defines which Pods, matched by their label, will be controlled by the ReplicaSet.

      To view your ReplicaSets, issue the get replicasets command:

      kubectl get replicasets
      

      You should see output like the following:

      NAME                DESIRED   CURRENT   READY   AGE
      apache-replicaset   5         5         0       5s
      

      This output shows that of the five desired replicas, five are currently active, but zero of those replicas are ready. This is because the Pods are still booting up. If you issue the command again, you will see that all five have become ready:

      NAME                DESIRED   CURRENT   READY   AGE
      apache-replicaset   5         5         5       86s
      

      You can view the Pods the ReplicaSet created by issuing the get pods command:

      NAME                      READY   STATUS    RESTARTS   AGE
      apache-replicaset-5rsx2   1/1     Running   0          31s
      apache-replicaset-8n52c   1/1     Running   0          31s
      apache-replicaset-jcgn8   1/1     Running   0          31s
      apache-replicaset-sj422   1/1     Running   0          31s
      apache-replicaset-z8g76   1/1     Running   0          31s
      

      To delete a ReplicaSet, issue the delete replicaset command:

      kubectl delete replicaset apache-replicaset
      

      If you issue the get pods command, you will see that the Pods the ReplicaSet created are in the process of terminating:

      NAME                      READY   STATUS        RESTARTS   AGE
      apache-replicaset-bm2pn   0/1     Terminating   0          3m54s

      In the above example, four of the Pods have already terminated, and one is in the process of terminating.

      For more information on ReplicaSets, view the Kubernetes ReplicaSets API documentation.

      Deployments

      A Deployment can manage a ReplicaSet, so it shares the ability to keep a defined number of replica pods up and running. A Deployment can also update those Pods to resemble the desired state by means of rolling updates. For example, if you wanted to update a container image to a newer version, you would create a Deployment, and the controller would update the container images one by one until the desired state is achieved. This ensures that there is no downtime when updating or altering your Pods.

      Below is an example of a Deployment:

      my-apache-deployment.yaml
      
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: apache-deployment
        labels:
          app: web
      spec:
        replicas: 5
        selector:
          matchLabels:
            app: web
        template:
          metadata:
            labels:
              app: web
          spec:
            containers:
            - name: apache-container
              image: httpd:2.4.35

      The only notable difference between this Deployment and the example given in the ReplicaSet section is the kind. In this example we have chosen to initially install Apache 2.4.35. If you wanted to update that image to Apache 2.4.38, you would issue the following command:

      kubectl set image deployment.v1.apps/apache-deployment apache-container=httpd:2.4.38 --record
      

      You’ll see a confirmation that the images have been updated:

      deployment.apps/apache-deployment image updated
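
      You can watch the rolling update as it progresses, and revert to the previous image if something goes wrong, with the rollout subcommands:

      kubectl rollout status deployment/apache-deployment
      kubectl rollout undo deployment/apache-deployment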
      

      To see for yourself that the images have updated, you can grab the Pod name from the get pods list:

      kubectl get pods
      
      NAME                                 READY   STATUS    RESTARTS   AGE
      apache-deployment-574c8c4874-8zwgl   1/1     Running   0          8m36s
      apache-deployment-574c8c4874-9pr5j   1/1     Running   0          8m36s
      apache-deployment-574c8c4874-fbs46   1/1     Running   0          8m34s
      apache-deployment-574c8c4874-nn7dl   1/1     Running   0          8m36s
      apache-deployment-574c8c4874-pndgp   1/1     Running   0          8m33s
      

      Issue the describe command to view all of the available details of the Pod:

      kubectl describe pod apache-deployment-574c8c4874-pndgp
      

      You’ll see a long list of details, including the container image:

      ....
      
      Containers:
        apache-container:
          Container ID:   docker://d7a65e7993ab5bae284f07f59c3ed422222100833b2769ff8ee14f9f384b7b94
          Image:          httpd:2.4.38
      
      ....
      

      For more information on Deployments, visit the Kubernetes Deployments API documentation

      Jobs

      A Job is a controller that manages a Pod created to run a single task, or set of tasks, to completion. This is handy if you need a Pod that performs a single function or calculates a value. Deleting the Job will delete the Pod it manages.

      Below is an example of a Job that simply prints “Hello World!” and ends:

      my-job.yaml
      
      apiVersion: batch/v1
      kind: Job
      metadata:
        name: hello-world
      spec:
        template:
          metadata:
            name: hello-world
          spec:
            containers:
            - name: output
              image: debian
              command:
               - "bin/bash"
               - "-c"
               - "echo 'Hello World!'"
            restartPolicy: Never

      To create the Job, issue the create command:

      kubectl create -f my-job.yaml
      

      To see whether the Job has run, or is running, issue the get jobs command:

      kubectl get jobs
      

      You should see output like the following:

      NAME          COMPLETIONS   DURATION   AGE
      hello-world   1/1           9s         8m23s
      

      To get the Pod of the Job, issue the get pods command:

      kubectl get pods
      

      You should see output like the following:

      NAME                               READY   STATUS             RESTARTS   AGE
      hello-world-4jzdm                  0/1     Completed          0          9m44s
      

      You can use the name of the Pod to inspect its output by viewing the Pod’s logs:

      kubectl logs hello-world-4jzdm
      

      To delete the Job, and its Pod, issue the delete command:

      kubectl delete job hello-world
      

      Networking

      Networking in Kubernetes was designed to make it simple to port existing apps from VMs to containers, and subsequently, Pods. The basic requirements of the Kubernetes networking model are:

      1. Pods can communicate with each other across Nodes without the use of NAT.
      2. Agents on a Node, like kubelet, can communicate with all of a Node’s Pods.
      3. In the case of Linux, Pods in a Node’s host network can communicate with all other Pods without NAT.

      Though the rules of the Kubernetes networking model are simple, their implementation is an advanced topic. Because Kubernetes does not ship with its own implementation, it is up to the user to provide one.

      Two of the most popular options are Flannel and Calico. Flannel is a network overlay that satisfies the requirements of the Kubernetes networking model by supplying a layer 3 network fabric, and it is relatively easy to set up. Calico enables networking, and network policy through the NetworkPolicy API, to provide simple virtual networking.

      For more information on the Kubernetes networking model, and ways to implement it, consult the cluster networking documentation.

      Advanced Topics

      There are a number of advanced topics in Kubernetes. Below are a few you might find useful as you progress in Kubernetes:

      • StatefulSets can be used when creating stateful applications.
      • DaemonSets can be used to ensure each Node is running a certain Pod. This is useful for log collection, monitoring, and cluster storage.
      • Horizontal Pod Autoscaling can automatically scale your deployments based on CPU usage (see the example after this list).
      • CronJobs can schedule Jobs to run at certain times.
      • ResourceQuotas are helpful when working with larger groups where there is a concern that some teams might take up too many resources.
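
      As an example of that autoscaling, the following command would keep between two and ten replicas of the earlier Apache Deployment, scaling on CPU utilization; it assumes a metrics source such as metrics-server is installed in the cluster:

      kubectl autoscale deployment apache-deployment --min=2 --max=10 --cpu-percent=50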

      Next Steps

      Now that you are familiar with Kubernetes concepts and components, you can follow the Getting Started with Kubernetes: Use kubeadm to Deploy a Cluster on Linode guide. This guide provides a hands-on activity to continue learning about Kubernetes. If you would like to deploy a Kubernetes cluster on Linode for production use, we recommend using an automated method instead, such as Linode’s k8s-alpha CLI or the Linode Kubernetes Terraform installer.

      More Information

      You may wish to consult the following resources for additional information on this topic. While these are provided in the hope that they will be useful, please note that we cannot vouch for the accuracy or timeliness of externally hosted materials.


      How to Install Apps on Kubernetes with Helm


      Updated by Linode. Written by Linode.

      What is Helm?

      Helm is a tool that assists with installing and managing applications on Kubernetes clusters. It is often referred to as “the package manager for Kubernetes,” and it provides functions that are similar to a package manager for an operating system:

      • Helm prescribes a common format and directory structure for packaging your Kubernetes resources, known as a Helm chart.

      • Helm provides a public repository of charts for popular software. You can also retrieve charts from third-party repositories, author and contribute your own charts to someone else’s repository, or run your own chart repository.

      • The Helm client software offers commands for: listing and searching for charts by keyword, installing applications to your cluster from charts, upgrading those applications, removing applications, and other management functions.

      Charts

      The components of a Kubernetes application (deployments, services, ingresses, and other objects) are listed in manifest files, in the YAML file format. Kubernetes does not tell you how you should organize those files, though the Kubernetes documentation does offer a general set of best practices.

      Helm charts are the software packaging format for Helm. A chart specifies a file and directory structure that you follow when packaging your manifests. The structure looks as follows:

      chart-name/
        Chart.yaml
        LICENSE
        README.md
        requirements.yaml
        values.yaml
        charts/
        templates/
        templates/NOTES.txt
      
      File or Directory Description
      Chart.yaml General information about the chart, including the chart name, a version number, and a description.
      LICENSE A plain-text file with licensing information for the chart and for the applications installed by the chart. Optional.
      README.md A Markdown file with instructions that a user of a chart may want to know when installing and using the chart, including a description of the app that the chart installs and the template values that can be set by the user. Optional.
      requirements.yaml A listing of the charts that this chart depends on. This list specifies the chart name and version number for each dependency, as well as the repository URL that the chart can be retrieved from. Optional.
      values.yaml Default values for the variables in your manifests’ templates.
      charts/ A directory which stores chart dependencies that you manually copy into your project, instead of linking to them from the requirements.yaml file.
      templates/ Your Kubernetes manifests are stored in the templates/ directory. Helm will interpret your manifests using the Go templating language before applying them to your cluster. You can use the template language to insert variables into your manifests, and users of your chart will be able to enter their own values for those variables.
      templates/NOTES.txt A plain-text file which will print to a user’s terminal when they install the chart. This text can be used to display post-installation instructions or other information that a user may want to know. Optional.
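
      To illustrate the templating described above, here is a minimal sketch: a manifest in templates/ references a variable, and values.yaml supplies its default. The file contents and variable name below are hypothetical:

      # templates/deployment.yaml (excerpt)
      spec:
        replicas: {{ .Values.replicaCount }}

      # values.yaml
      replicaCount: 2

      A user of the chart could then override the default at install time, for example with helm install --set replicaCount=5 chart-name.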

      Releases

      When you tell Helm to install a chart, you can specify variable values to be inserted into the chart’s manifest templates. Helm will then compile those templates into manifests that can be applied to your cluster. When it does this, it creates a new release.

      You can install a chart to the same cluster more than once. Each time you tell Helm to install a chart, it creates another release for that chart. A release can be upgraded when a new version of a chart is available, or even when you just want to supply new variable values to the chart. Helm tracks each upgrade to your release, and it allows you to roll back an upgrade. A release can be easily deleted from your cluster, and you can even roll back release deletions.
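
      For example, once a release exists you can review its revision history with the history subcommand (the release name below is a placeholder):

      helm history my-release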

      Helm Client and Helm Tiller

      Helm operates with two components:

      • The Helm client software that issues commands to your cluster. You run the client software on your computer, in your CI/CD environment, or anywhere else you’d like.

      • A server component, called Tiller, that runs on your cluster and receives commands from the Helm client software. Tiller is responsible for directly interacting with the Kubernetes API (which the client software does not do), and it maintains the state for your Helm releases.

      Before You Begin

      1. Install the Kubernetes CLI (kubectl) on your computer, if it is not already.

      2. You should have a Kubernetes cluster running prior to starting this guide. One quick way to get a cluster up is with Linode’s k8s-alpha CLI command. This guide’s examples only require a cluster with one worker node. We recommend that you create cluster nodes that are at the Linode 4GB tier or higher.

        This guide also assumes that your cluster has role-based access control (RBAC) enabled. This feature became available in Kubernetes 1.6. It is enabled on clusters created via the k8s-alpha Linode CLI.

        Note

        This guide’s example instructions will also result in the creation of a Block Storage Volume and a NodeBalancer, which are billable resources. If you do not want to keep using the example application after you finish reviewing this guide, make sure to delete these resources afterward.
      3. You should also make sure that your Kubernetes CLI is using the right cluster context. Run the get-contexts subcommand to check:

        kubectl config get-contexts
        
      4. You can set kubectl to use a certain cluster context with the use-context subcommand and the cluster name that was previously output from the get-contexts subcommand:

        kubectl config use-context your-cluster-name
        
      5. It is beneficial to have a registered domain name for this guide’s example app, but it is not required.

      Install Helm

      Install the Helm Client

      Install the Helm client software on your computer:

      • Linux. Run the client installer script that Helm provides:

        curl https://raw.githubusercontent.com/helm/helm/master/scripts/get > get_helm.sh
        chmod 700 get_helm.sh
        ./get_helm.sh
        
      • macOS. Use Homebrew to install:

        brew install kubernetes-helm
        
      • Windows. Use Chocolatey to install:

        choco install kubernetes-helm
        

      Install Tiller on your Cluster

      Tiller’s default installation instructions will attempt to install it without adequate permissions on a cluster with RBAC enabled, and the installation will fail. The instructions below instead grant Tiller the appropriate permissions:

      Note

      The following instructions bind Tiller to the cluster-admin role, which is a privileged Kubernetes API user for your cluster. This is a potential security concern. Other access levels for Tiller are possible, like restricting Tiller and the charts it installs to a single namespace. The Bitnami Engineering blog has an article which further explores security in Helm.
      1. Create a file on your computer named rbac-config.yaml with the following snippet:

        rbac-config.yaml
        
        apiVersion: v1
        kind: ServiceAccount
        metadata:
          name: tiller
          namespace: kube-system
        ---
        apiVersion: rbac.authorization.k8s.io/v1
        kind: ClusterRoleBinding
        metadata:
          name: tiller
        roleRef:
          apiGroup: rbac.authorization.k8s.io
          kind: ClusterRole
          name: cluster-admin
        subjects:
          - kind: ServiceAccount
            name: tiller
            namespace: kube-system

        This configuration creates a Kubernetes Service Account for Tiller, and then binds it to the cluster-admin role.

      2. Apply this configuration to your cluster:

        kubectl create -f rbac-config.yaml
        
          
        serviceaccount "tiller" created
        clusterrolebinding "tiller" created
        
        
      3. Initialize Tiller on the cluster:

        helm init --service-account tiller --history-max 200
        

        Note

        The --history-max option prevents Helm’s historical record of the objects it tracks from growing too large.

      4. You should see output like:

        $HELM_HOME has been configured at /Users/your-user/.helm.
        
        Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.
        
        Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
        To prevent this, run `helm init` with the --tiller-tls-verify flag.
        For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
        Happy Helming!
        
      5. The pod for Tiller will be running in the kube-system namespace:

        kubectl get pods --namespace kube-system | grep tiller
        tiller-deploy-b6647fc9d-vcdms                1/1       Running   0          1m
        

      Use Helm Charts to Install Apps

      This guide will use the Ghost publishing platform as the example application.

      Search for a Chart

      1. Run the repo update subcommand to make sure you have a full list of available charts:

        helm repo update
        

        Note

        Run helm repo list to see which repositories are registered with your client.

      2. Run the search command with a keyword to search for a chart by name:

        helm search ghost
        

        The output will look like:

        NAME            CHART VERSION   APP VERSION DESCRIPTION
        stable/ghost    6.7.7           2.19.4      A simple, powerful publishing platform that allows you to...
        
      3. The full name for the chart is stable/ghost. Inspect the chart for more information:

        helm inspect stable/ghost
        

        This command’s output will resemble the README text available for the Ghost chart in the official Helm chart repository on GitHub.

      Install the Chart

      The helm install command is used to install a chart by name. It can be run without any other options, but some charts expect you to pass in configuration values for the chart:

      1. Create a file named ghost-config.yaml on your computer from this snippet:

        ghost-config.yaml
        
        ghostHost: ghost.example.com
        ghostEmail: email@example.com

        Replace the value for ghostHost with a domain or subdomain that you own and would like to assign to the app, and the value for ghostEmail with your email.

        Note

        If you don’t own a domain name and won’t continue to use the Ghost website after finishing this guide, you can make up a domain for this configuration file.

      2. Run the install command and pass in the configuration file:

        helm install -f ghost-config.yaml stable/ghost
        
      3. The install command returns immediately and does not wait until the app’s cluster objects are ready. You will see output like the following snippet, which shows that the app’s pods are still in the “Pending” state. The text displayed is generated from the contents of the chart’s templates/NOTES.txt file:

        Full output of helm install

        NAME:   oldfashioned-cricket
        LAST DEPLOYED: Tue Apr 16 09:15:41 2019
        NAMESPACE: default
        STATUS: DEPLOYED
        
        RESOURCES:
        ==> v1/ConfigMap
        NAME                      DATA  AGE
        oldfashioned-cricket-mariadb        1     1s
        oldfashioned-cricket-mariadb-tests  1     1s
        
        ==> v1/PersistentVolumeClaim
        NAME              STATUS   VOLUME                CAPACITY  ACCESS MODES  STORAGECLASS  AGE
        oldfashioned-cricket-ghost  Pending  linode-block-storage  1s
        
        ==> v1/Pod(related)
        NAME                               READY  STATUS   RESTARTS  AGE
        oldfashioned-cricket-ghost-64ff89b9d6-9ngjs  0/1    Pending  0         1s
        oldfashioned-cricket-mariadb-0               0/1    Pending  0         1s
        
        ==> v1/Secret
        NAME                TYPE    DATA  AGE
        oldfashioned-cricket-ghost    Opaque  1     1s
        oldfashioned-cricket-mariadb  Opaque  2     1s
        
        ==> v1/Service
        NAME                TYPE          CLUSTER-IP      EXTERNAL-IP  PORT(S)       AGE
        oldfashioned-cricket-ghost    LoadBalancer  10.110.3.191    <pending>    80:32658/TCP  1s
        oldfashioned-cricket-mariadb  ClusterIP     10.107.128.144  <none>       3306/TCP      1s
        
        ==> v1beta1/Deployment
        NAME              READY  UP-TO-DATE  AVAILABLE  AGE
        oldfashioned-cricket-ghost  0/1    1           0          1s
        
        ==> v1beta1/StatefulSet
        NAME                READY  AGE
        oldfashioned-cricket-mariadb  0/1    1s
        
        
        NOTES:
        1. Get the Ghost URL by running:
        
          echo Blog URL  : http://ghost.example.com/
          echo Admin URL : http://ghost.example.com/ghost
        
        2. Get your Ghost login credentials by running:
        
          echo Email:    email@example.com
          echo Password: $(kubectl get secret --namespace default oldfashioned-cricket-ghost -o jsonpath="{.data.ghost-password}" | base64 --decode)
        
      4. Helm has created a new release and assigned it a random name. Run the ls command to get a list of all of your releases:

        helm ls
        

        The output will look as follows:

        NAME        REVISION    UPDATED                     STATUS      CHART       APP VERSION NAMESPACE
        oldfashioned-cricket    1           Tue Apr 16 09:15:41 2019    DEPLOYED    ghost-6.7.7 2.19.4      default
        
      5. You can check on the status of the release by running the status command:

        helm status oldfashioned-cricket
        

        This command will show the same output that was displayed after the helm install command, but the current state of the cluster objects will be updated.

      Access your App

      1. Run the helm status command again and observe the “Service” section:

        ==> v1/Service
        NAME                TYPE          CLUSTER-IP      EXTERNAL-IP     PORT(S)       AGE
        oldfashioned-cricket-ghost    LoadBalancer  10.110.3.191    104.237.148.15  80:32658/TCP  11m
        oldfashioned-cricket-mariadb  ClusterIP     10.107.128.144  <none>          3306/TCP      11m
        
      2. The LoadBalancer that was created for the app will be displayed. Because this example uses a cluster created with Linode’s k8s-alpha CLI (which pre-installs the Linode CCM), the LoadBalancer will be implemented as a Linode NodeBalancer.

      3. Copy the value under the EXTERNAL-IP column for the LoadBalancer and then paste it into your web browser. You should see the Ghost website:

        Ghost home page

      4. Revisit the output from the status command. Instructions for logging into your Ghost website will be displayed:

        1. Get the Ghost URL by running:
        
        echo Blog URL  : http://ghost.example.com/
        echo Admin URL : http://ghost.example.com/ghost
        
        2. Get your Ghost login credentials by running:
        
        echo Email:    email@example.com
        echo Password: $(kubectl get secret --namespace default oldfashioned-cricket-ghost -o jsonpath="{.data.ghost-password}" | base64 --decode)
        
      5. Retrieve the auto-generated password for your app:

        echo Password: $(kubectl get secret --namespace default oldfashioned-cricket-ghost -o jsonpath="{.data.ghost-password}" | base64 --decode)
        
      You haven’t set up DNS for your site yet, but you can instead access the admin interface by visiting the /ghost path at your LoadBalancer IP address (e.g. http://104.237.148.15/ghost). Visit this page in your browser and then enter your email and password. You should be granted access to the administrative interface.

      7. Set up DNS for your app. You can do this by creating an A record for your domain which is assigned to the external IP for your app’s LoadBalancer. Review Linode’s DNS Manager guide for instructions.

      Upgrade your App

      The upgrade command can be used to upgrade an existing release to a new version of a chart, or just to supply new chart values:

      1. In your computer’s ghost-config.yaml file, add a line for the title of the website:

        ghost-config.yaml
        
        ghostHost: ghost.example.com
        ghostEmail: email@example.com
        ghostBlogTitle: Example Site Name
      2. Run the upgrade command, specifying the configuration file, release name, and chart name:

        helm upgrade -f ghost-config.yaml oldfashioned-cricket stable/ghost
        

      Roll Back a Release

      Upgrades (and even deletions) can be rolled back if something goes wrong:

      1. Run the helm ls command and observe the number under the “REVISION” column for your release:

        NAME        REVISION    UPDATED                     STATUS      CHART       APP VERSION NAMESPACE
        oldfashioned-cricket    2           Tue Apr 16 10:02:58 2019    DEPLOYED    ghost-6.7.7 2.19.4      default
        
      2. Every time you perform an upgrade, the revision count is incremented by 1 (and the counter starts at 1 when you first install a chart). So, your current revision number is 2. To roll back the upgrade you just performed, enter the previous revision number:

        helm rollback oldfashioned-cricket 1
        

      Delete a Release

      1. Use the delete command with the name of a release to delete it:

        helm delete oldfashioned-cricket
        

        You should also confirm in the Linode Cloud Manager that the Volumes and NodeBalancer created for the app are removed as well.

      2. Helm will still save information about the deleted release. You can list deleted releases:

        helm list --deleted
        

        You can use the revision number of a deleted release to roll back the deletion.

      3. To fully remove a release, use the --purge option with the delete command:

        helm delete oldfashioned-cricket --purge
        

      More Information

      You may wish to consult the following resources for additional information on this topic. While these are provided in the hope that they will be useful, please note that we cannot vouch for the accuracy or timeliness of externally hosted materials.


      Deploy Persistent Volume Claims with the Linode Block Storage CSI Driver


      Updated by Linode. Written by Linode Community.

      What is the Linode Block Storage CSI Driver?

      The Container Storage Interface (CSI) defines a standard that storage providers can use to expose block and file storage systems to container orchestration systems. Linode’s Block Storage CSI driver follows this specification to allow container orchestration systems, like Kubernetes, to use Block Storage Volumes to persist data despite a Pod’s lifecycle. A Block Storage Volume can be attached to any Linode to provide additional storage.

      Before You Begin

      • This guide assumes you have a working Kubernetes cluster running on Linode. You can deploy a Kubernetes cluster on Linode in the following ways:

        1. Use Linode’s k8s-alpha CLI to deploy a Kubernetes cluster via the command line.

        2. Deploy a cluster using Terraform and the Linode Kubernetes Terraform installer.

        3. Use kubeadm to manually deploy a Kubernetes cluster on Linode. You can follow the Getting Started with Kubernetes: Use kubeadm to Deploy a Cluster on Linode guide to do this.

        Note

        • If using the k8s-alpha CLI or the Linode Kubernetes Terraform installer methods to deploy a cluster, you can skip the Installing the CSI Driver section of this guide, since it will be automatically installed when you deploy a cluster.

          Move on to the Attach a Pod to the Persistent Volume Claim section to learn how to consume a Block Storage volume as part of your deployment.

      • The Block Storage CSI supports Kubernetes version 1.13 or higher. To check the version of Kubernetes you are running, you can issue the following command:

        kubectl version
        

      Installing the CSI Driver

      Create a Kubernetes Secret

      A secret in Kubernetes is any token, password, or credential that you want Kubernetes to store for you. In the case of the Block Storage CSI, you’ll want to store an API token, and for convenience, the region you would like your Block Storage Volume to be placed in.

      Note

      Your Block Storage Volume must be in the same data center as your Kubernetes cluster.

      To create an API token:

      1. Log into the Linode Cloud Manager.

      2. Navigate to your account profile by clicking on your username at the top of the page and selecting My Profile. On mobile screen resolutions, this link is in the sidebar navigation.

      3. Click on the API Tokens tab.

      4. Click on Add a Personal Access Token. The Add Personal Access Token menu appears.

      5. Provide a label for the token. This is how you will reference your token within the Cloud Manager.

      6. Set an expiration date for the token with the Expiry dropdown.

      7. Set your permissions for the token. You will need Read/Write access for Volumes, and Read/Write access for Linodes.

      8. Click Submit.

      Your access token will appear on the screen. Copy this down somewhere safe, as once you click OK you will not be able to retrieve the token again, and will need to create a new one.

      Once you have your API token, it’s time to create your secret.

      1. Run the following command to enter your token into memory:

        read -s -p "Linode API Access Token: " LINODE_TOKEN
        

        Press enter, and then paste in your API token.

      2. Run the following command to enter your region into memory:

        read -p "Linode Region of Cluster: " LINODE_REGION
        

        You can retrieve a full list of regions by using the Linode CLI:

        linode-cli regions list
        

        For example, if you want to use the Newark, NJ, USA data center, you would use us-east as your region.

      3. Create the secret by piping in the following secret manifest to the kubectl create command. Issue the following here document:

        cat <<EOF | kubectl create -f -
        
      4. Now, paste in the following manifest and press enter:

        apiVersion: v1
        kind: Secret
        metadata:
          name: linode
          namespace: kube-system
        stringData:
          token: "$LINODE_TOKEN"
          region: "$LINODE_REGION"
        EOF
        

      You can check to see if the command was successful by running the get secrets command in the kube-system namespace and looking for linode in the NAME column of the output:

      kubectl -n kube-system get secrets
      

      You should see output similar to the following:

      NAME                                             TYPE                                  DATA   AGE
      ...
      job-controller-token-6zzkw                       kubernetes.io/service-account-token   3      43h
      kube-proxy-token-td7k8                           kubernetes.io/service-account-token   3      43h
      linode                                           Opaque                                2      42h
      ...
      

      You are now ready to install the Block Storage CSI driver.

      Apply CSI Driver to your Cluster

      To install the Block Storage CSI driver, use the apply command and specify the following URL:

      kubectl apply -f https://raw.githubusercontent.com/linode/linode-blockstorage-csi-driver/master/pkg/linode-bs/deploy/releases/linode-blockstorage-csi-driver-v0.0.3.yaml
      

      The above file concatenates a few files needed to run the Block Storage CSI driver, including the volume attachment, driver registration, and provisioning sidecars. To see these files individually, visit the project’s GitHub repository.

      Once you have the Block Storage CSI driver installed, you are ready to provision a Persistent Volume Claim.

      Create a Persistent Volume Claim

      Caution

      The instructions in this section will create a Block Storage volume billable resource on your Linode account. A single volume can range from 10 GiB to 10,000 GiB in size and costs $0.10/GiB per month or $0.00015/GiB per hour. If you do not want to keep using the Block Storage volume that you create, be sure to delete it when you have finished the guide.

      If you remove the resources afterward, you will only be billed for the hour(s) that the resources were present on your account. Consult the Billing and Payments guide for detailed information about how hourly billing works and for a table of plan pricing.

      A Persistent Volume Claim (PVC) consumes a Block Storage Volume. To create a PVC, create a manifest file with the following YAML:

      pvc.yaml
      
      apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        name: pvc-example
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 10Gi
        storageClassName: linode-block-storage

      This PVC represents a Block Storage Volume. Because Block Storage Volumes have a minimum size of 10 gigabytes, the storage has been set to 10Gi. If you choose a size smaller than 10 gigabytes, the PVC will default to 10 gigabytes.

      Currently the only mode supported by the Linode Block Storage CSI driver is ReadWriteOnce, meaning that it can only be connected to one Kubernetes node at a time.

      To create the PVC in Kubernetes, issue the create command and pass in the pvc.yaml file:

      kubectl create -f pvc.yaml
      

      After a few moments your Block Storage Volume will be provisioned and your Persistent Volume Claim will be ready to use.

      You can check the status of your PVC by issuing the following command:

      kubectl get pvc
      

      You should see output like the following:

      NAME          STATUS   VOLUME                 CAPACITY   ACCESS MODES   STORAGECLASS           AGE
      pvc-example   Bound    pvc-0e95b811652111e9   10Gi       RWO            linode-block-storage   2m
      

      Now that you have a PVC, you can attach it to a Pod.

      Attach a Pod to the Persistent Volume Claim

      Now you need to instruct a Pod to use the Persistent Volume Claim. For this example, you will create a Pod that is running an ownCloud container, which will use the PVC.

      To create a pod that will use the PVC:

      1. Create a manifest file for the Pod and give it the following YAML:

        owncloud-pod.yaml
        
        apiVersion: v1
        kind: Pod
        metadata:
          name: owncloud
          labels:
            app: owncloud
        spec:
          containers:
            - name: owncloud
              image: owncloud/server
              ports:
                - containerPort: 8080
              volumeMounts:
              - mountPath: "/mnt/data/files"
                name: pvc-example
          volumes:
            - name: pvc-example
              persistentVolumeClaim:
                claimName: pvc-example

        This Pod will run the owncloud/server Docker container image. Because ownCloud stores its files in the /mnt/data/files directory, this owncloud-pod.yaml manifest instructs the ownCloud container to create a mount point at that file path for your PVC.

        In the volumes section of the owncloud-pod.yaml, it is important to set the claimName to the exact name you’ve given your PersistentVolumeClaim in its manifest’s metadata. In this case, the name is pvc-example.

      2. Use the create command to create the ownCloud Pod:

        kubectl create -f owncloud-pod.yaml
        
      3. After a few moments your Pod should be up and running. To see the status of your Pod, issue the get pods command:

        kubectl get pods
        

        You should see output like the following:

        NAME       READY   STATUS    RESTARTS   AGE
        owncloud   1/1     Running   0          2m
        
      4. To list the contents of the /mnt/data/files directory within the container, which is the mount point for your PVC, issue the following command on your container:

        kubectl exec -it owncloud -- /bin/sh -c "ls /mnt/data/files"
        

        You should see output similar to the following:

        admin  avatars  files_external  index.html  owncloud.db  owncloud.log
        

        These files are created by ownCloud, and those files now live on your Block Storage Volume. The admin directory is the directory for the default user, and any files you upload to the admin account will appear in this folder.

      To complete the example, you should be able to access the ownCloud Pod via your browser. To accomplish this task, you will need to create a Service.

      1. Create a Service manifest file and copy in the following YAML:

        owncloud-service.yaml
        
        kind: Service
        apiVersion: v1
        metadata:
          name: owncloud
        spec:
          selector:
            app: owncloud
          ports:
          - protocol: TCP
            port: 80
            targetPort: 8080
          type: NodePort

        Note

        The service manifest file will use the NodePort method to get external traffic to the ownCloud service. NodePort opens a specific port on all cluster Nodes and any traffic that is sent to this port is forwarded to the service. Kubernetes will choose the port to open on the nodes if you do not provide one in your service manifest file. It is recommended to let Kubernetes handle the assignment. Kubernetes will choose a port in the default range, 30000-32767.

        Alternatively, you could use the LoadBalancer service type, instead of NodePort, which will create Linode NodeBalancers that will direct traffic to the ownCloud Pods. Linode’s Cloud Controller Manager (CCM) is responsible for provisioning the Linode NodeBalancers. For more details, see the Kubernetes Cloud Controller Manager for Linode repository.
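
        For reference, a LoadBalancer variant of the manifest above would differ only in the type field; a sketch of the spec:

        spec:
          selector:
            app: owncloud
          ports:
          - protocol: TCP
            port: 80
            targetPort: 8080
          type: LoadBalancer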

      2. Create the service in Kubernetes by using the create command and passing in the owncloud-service.yaml file you created in the previous step:

        kubectl create -f owncloud-service.yaml
        
      3. To retrieve the port that the ownCloud Pod is listening on, use the describe command on the newly created Service:

        kubectl describe service owncloud
        

        You should see output like the following:

        Name:                     owncloud
        Namespace:                default
        Labels:                   <none>
        Annotations:              <none>
        Selector:                 app=owncloud
        Type:                     NodePort
        IP:                       10.106.101.155
        Port:                     <unset>  80/TCP
        TargetPort:               8080/TCP
        NodePort:                 <unset>  30068/TCP
        Endpoints:                10.244.1.17:8080
        Session Affinity:         None
        External Traffic Policy:  Cluster
        Events:                   <none>
        

        Find the NodePort. In this example the port is 30068.

      4. Now you need to find out which Node your Pod is running on. Use the describe command on the Pod to find the IP address of the Node:

        kubectl describe pod owncloud
        

        You should see output like the following:

        Name:               owncloud
        Namespace:          default
        Priority:           0
        PriorityClassName:  <none>
        Node:               kube-node/192.0.2.155
        Start Time:         Mon, 22 Apr 2019 17:07:20 +0000
        Labels:             app=owncloud
        Annotations:        <none>
        Status:             Running
        IP:                 10.244.1.17
        

        The IP address of the Node in this example is 192.0.2.155. Your ownCloud Pod in this example would be accessible from http://192.0.2.155:30068.

      5. Navigate to the URL of the Node, including the NodePort you looked up in a previous step. You will be presented with the ownCloud login page. You can log in with the username admin and the password admin.

      6. Upload a file. You will use this file to test the Persistent Volume Claim.

      7. The Persistent Volume Claim has been created and is using your Block Storage Volume. To prove this point, you can delete the ownCloud Pod and recreate it, and the Persistent Volume Claim will continue to house your data:

        kubectl delete pod owncloud
        
        kubectl create -f owncloud-pod.yaml
        

        Once the Pod has finished provisioning you can log back in to ownCloud and view the file you previously uploaded.

      You have successfully created a Block Storage Volume tied to a Persistent Volume Claim and have mounted it with a container in a Pod.

      Delete a Persistent Volume Claim

      To delete the Block Storage volume created in this guide:

      1. First, delete the ownCloud Pod:

        kubectl delete pods owncloud
        
      2. Then, delete the persistent volume claim:

        kubectl delete pvc pvc-example
        

      More Information

      You may wish to consult the following resources for additional information on this topic. While these are provided in the hope that they will be useful, please note that we cannot vouch for the accuracy or timeliness of externally hosted materials.
