      Getting Started with Dedicated CPUs


      Updated by Linode

      Written by Ryan Syracuse

This guide will serve as a brief introduction to what a Dedicated CPU Linode is and how to add one to your Linode account. Review our Use Cases for Dedicated CPUs guide for more information about the tasks that work well on this instance type.

      What is a Dedicated CPU Linode?

      In contrast with a Standard Linode, which gives you access to shared virtual CPU cores, a Dedicated CPU Linode offers entire physical CPU cores that are accessible only by your instance. Because your cores will be isolated to your Linode, no other Linodes can schedule processes on them, so your instance will never have to wait for another process to complete its execution, and your software can run at peak speed and efficiency.

      While a Standard Linode is a good fit for most use cases, a Dedicated CPU Linode is recommended for workloads with high, sustained CPU processing needs; see the Use Cases for Dedicated CPUs guide linked above for examples.

      Deploying a Dedicated CPU Linode

      Create a Dedicated CPU Linode in the Cloud Manager

      1. Log in to the Linode Cloud Manager.

      2. Click on the Create dropdown menu at the top left of the page, and select the Linode option.

      3. Select a Distribution, One-Click App, or Image to deploy from.


      4. Choose the region where you would like your Linode to reside. If you’re not sure which to select, see our How to Choose a Data Center guide. You can also generate MTR reports for a deeper look at the network route between you and each of our data centers.

      5. At the top of the Linode Plan section, click on the Dedicated CPU tab and select the Dedicated CPU plan you would like to use.

      6. Enter a label for your new Linode under the Linode Label field.

      7. Enter a strong root password for your Linode in the Root Password field. This password must be at least six characters long and contain characters from at least two of the following categories:

        • lowercase letters
        • uppercase letters
        • numbers
        • punctuation characters

        Note

        You will not be prompted to enter a root password if you are cloning another Linode or restoring from the Linode Backups service.

      8. Optionally, add an SSH key, Backups, or a Private IP address.

      9. Click the Create button when you have completed the form. You will be redirected to the overview page for your new Linode. This page will show a progress bar indicating when the Linode has been provisioned and is ready for use.
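
      If you prefer working from the command line, the same deployment can be sketched with the Linode CLI. This is a minimal sketch that assumes you have installed and configured the linode-cli tool; the plan, region, and image IDs below are example values:

        # Example values only: list current Dedicated CPU plan IDs with `linode-cli linodes types`
        linode-cli linodes create --type g6-dedicated-2 --region us-east --image linode/ubuntu18.04 --label my-dedicated-linode --root_pass 'use-a-strong-password-here'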

      Next Steps

      See our Getting Started guide for help with connecting to your Linode for the first time and configuring the software on it. Then visit the How to Secure Your Server guide for a collection of security best practices for your new Linode.


      This guide is published under a CC BY-ND 4.0 license.




      Getting Started with kubectl: A kubectl Cheat Sheet


      Introduction

      Kubectl is a command-line tool designed to manage Kubernetes objects and clusters. It provides a command-line interface for performing common operations like creating and scaling Deployments, switching contexts, and accessing a shell in a running container.

      How to Use This Guide:

      • This guide is in cheat sheet format with self-contained command-line snippets.
      • It is not an exhaustive list of kubectl commands, but contains many common operations and use cases. For a more thorough reference, consult the Kubectl Reference Docs.
      • Jump to any section that is relevant to the task you are trying to complete.

      Prerequisites

      To follow along with this guide, you will need access to a Kubernetes cluster and a kubeconfig file that grants you access to it.

      Sample Deployment

      To demonstrate some of the operations and commands in this cheat sheet, we’ll use a sample Deployment that runs 2 replicas of Nginx:

      nginx-deployment.yaml

      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: nginx-deployment
      spec:
        replicas: 2
        selector:
          matchLabels:
            app: nginx
        template:
          metadata:
            labels:
              app: nginx
          spec:
            containers:
            - name: nginx
              image: nginx
              ports:
              - containerPort: 80
      

      Copy and paste this manifest into a file called nginx-deployment.yaml.

      Installing kubectl

      Note: These commands have only been tested on an Ubuntu 18.04 machine. To learn how to install kubectl on other operating systems, consult Install and Set Up kubectl from the Kubernetes docs.

      First, update your local package index and install required dependencies:

      • sudo apt-get update && sudo apt-get install -y apt-transport-https

      Then add the Google Cloud GPG key to APT and make the kubectl package available to your system:

      • curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
      • echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee -a /etc/apt/sources.list.d/kubernetes.list
      • sudo apt-get update

      Finally, install kubectl:

      • sudo apt-get install -y kubectl

      Test that the installation succeeded using the version subcommand:

      • kubectl version

      Setting Up Shell Autocompletion

      Note: These commands have only been tested on an Ubuntu 18.04 machine. To learn how to set up autocompletion on other operating systems, consult Install and Set Up kubectl from the Kubernetes docs.

      kubectl includes a shell autocompletion script that you can make available to your system’s existing shell autocompletion software.

      Installing kubectl Autocompletion

      First, check if you have bash-completion installed:

      • type _init_completion

      You should see some script output.

      Next, source the kubectl autocompletion script in your ~/.bashrc file:

      • echo 'source <(kubectl completion bash)' >>~/.bashrc
      • . ~/.bashrc

      Alternatively, you can add the completion script to the /etc/bash_completion.d directory (writing there requires root privileges):

      • kubectl completion bash | sudo tee /etc/bash_completion.d/kubectl

      Usage

      To use the autocompletion feature, type kubectl into your shell and press the TAB key twice to display available kubectl commands:

      Output

      annotate apply autoscale completion cordon delete drain explain kustomize options port-forward rollout set uncordon api-resources attach certificate config cp describe . . .

      You can also display available commands after partially typing a command; for example, typing kubectl d and pressing TAB yields:

      Output

      delete describe diff drain

      Connecting, Configuring and Using Contexts

      Connecting

      To test that kubectl can authenticate with and access your Kubernetes cluster, use cluster-info:

      • kubectl cluster-info

      If kubectl can successfully authenticate with your cluster, you should see the following output:

      Output

      Kubernetes master is running at https://kubernetes_master_endpoint
      CoreDNS is running at https://coredns_endpoint

      To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

      kubectl is configured using kubeconfig configuration files. By default, kubectl will look for a file called config in the $HOME/.kube directory. To change this, you can set the $KUBECONFIG environment variable to a custom kubeconfig file, or pass in the custom file at execution time using the --kubeconfig flag:

      • kubectl cluster-info --kubeconfig=path_to_your_kubeconfig_file
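
      For example, to point kubectl at a custom kubeconfig file for the rest of your shell session, you could export the variable instead:

      • export KUBECONFIG=path_to_your_kubeconfig_file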

      Note: If you’re using a managed Kubernetes cluster, your cloud provider should have made its kubeconfig file available to you.

      If you don’t want to use the --kubeconfig flag with every command and there is no existing ~/.kube/config file, create the ~/.kube directory if it doesn’t already exist, then copy in your kubeconfig file, renaming it to config:

      • mkdir -p ~/.kube
      • cp your_kubeconfig_file ~/.kube/config

      Now, run cluster-info once again to test your connection.

      Modifying your kubectl Configuration

      You can also modify your config using the set of kubectl config commands.

      To view your kubectl configuration, use the view subcommand:

      • kubectl config view

      Output

      apiVersion: v1
      clusters:
      - cluster:
          certificate-authority-data: DATA+OMITTED
      . . .

      Modifying Clusters

      To fetch a list of clusters defined in your kubeconfig, use get-clusters:

      • kubectl config get-clusters

      Output

      NAME
      do-nyc1-sammy

      To add a cluster to your config, use the set-cluster subcommand:

      • kubectl config set-cluster new_cluster --server=server_address --certificate-authority=path_to_certificate_authority

      To delete a cluster from your config, use delete-cluster:

      Note: This only deletes the cluster from your config and does not delete the actual Kubernetes cluster.

      • kubectl config delete-cluster cluster_name

      Modifying Users

      You can perform similar operations for users using set-credentials:

      • kubectl config set-credentials username --client-certificate=/path/to/cert/file --client-key=/path/to/key/file

      To delete a user from your config, you can run unset:

      • kubectl config unset users.username

      Contexts

      A context in Kubernetes is an object that contains a set of access parameters for your cluster. It consists of a cluster, namespace, and user triple. Contexts allow you to quickly switch between different sets of cluster configuration.
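
      In a kubeconfig file, a context entry ties these three elements together. A representative snippet (the names here match the sample cluster used throughout this section):

      contexts:
      - context:
          cluster: do-nyc1-sammy
          namespace: default
          user: do-nyc1-sammy-admin
        name: do-nyc1-sammy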

      To see your current context, you can use current-context:

      • kubectl config current-context

      Output

      do-nyc1-sammy

      To see a list of all configured contexts, run get-contexts:

      • kubectl config get-contexts

      Output

      CURRENT   NAME            CLUSTER         AUTHINFO              NAMESPACE
      *         do-nyc1-sammy   do-nyc1-sammy   do-nyc1-sammy-admin

      To set a context, use set-context:

      • kubectl config set-context context_name --cluster=cluster_name --user=user_name --namespace=namespace

      You can switch between contexts with use-context:

      • kubectl config use-context context_name

      Output

      Switched to context "do-nyc1-sammy"

      And you can delete a context with delete-context:

      • kubectl config delete-context context_name

      Using Namespaces

      A Namespace in Kubernetes is an abstraction that allows you to subdivide your cluster into multiple virtual clusters. By using Namespaces you can divide cluster resources among multiple teams and scope objects appropriately. For example, you can have a prod Namespace for production workloads, and a dev Namespace for development and test workloads.

      To fetch and print a list of all the Namespaces in your cluster, use get namespace:

      • kubectl get namespace

      Output

      NAME              STATUS   AGE
      default           Active   2d21h
      kube-node-lease   Active   2d21h
      kube-public       Active   2d21h
      kube-system       Active   2d21h

      To set a Namespace for your current context, use set-context --current:

      • kubectl config set-context --current --namespace=namespace_name

      To create a Namespace, use create namespace:

      • kubectl create namespace namespace_name

      Output

      namespace/sammy created

      Similarly, to delete a Namespace, use delete namespace:

      Warning: Deleting a Namespace will delete everything in the Namespace, including running Deployments, Pods, and other workloads. Only run this command if you’re sure you’d like to kill whatever’s running in the Namespace or if you’re deleting an empty Namespace.

      • kubectl delete namespace namespace_name

      To fetch all Pods in a given Namespace or to perform other operations on resources in a given Namespace, make sure to include the --namespace flag:

      • kubectl get pods --namespace=namespace_name
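
      To fetch Pods across every Namespace at once, use the --all-namespaces flag instead:

      • kubectl get pods --all-namespaces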

      Managing Kubernetes Resources

      General Syntax

      The general syntax for most kubectl management commands is:

      • kubectl command type name flags

      Where

      • command is an operation you’d like to perform, like create
      • type is the Kubernetes resource type, like deployment
      • name is the resource’s name, like app_frontend
      • flags are any optional flags you’d like to include

      For example, the following command retrieves information about a Deployment named app_frontend:

      • kubectl get deployment app_frontend

      Declarative Management and kubectl apply

      The recommended approach to managing workloads on Kubernetes is to rely on the cluster’s declarative design as much as possible. This means that instead of running a series of commands to create, update, delete, and restart running Pods, you should define the workloads, services, and systems you’d like to run in YAML manifest files, and provide these files to Kubernetes, which will handle the rest.

      In practice, this means using the kubectl apply command, which applies a particular configuration to a given resource. If the target resource doesn’t exist, then Kubernetes will create the resource. If the resource already exists, then Kubernetes will save the current revision, and update the resource according to the new configuration. This declarative approach exists in contrast to the imperative approach of running the kubectl create, kubectl edit, and kubectl scale set of commands to manage resources. To learn more about the different ways of managing Kubernetes resources, consult Kubernetes Object Management from the Kubernetes docs.

      Rolling out a Deployment

      For example, to deploy the sample Nginx Deployment to your cluster, use apply and provide the path to the nginx-deployment.yaml manifest file:

      • kubectl apply -f nginx-deployment.yaml

      Output

      deployment.apps/nginx-deployment created

      The -f flag is used to specify a filename, directory, or URL containing a valid configuration. If you’d like to apply a kustomization directory (one containing a kustomization.yaml file), you can use the -k flag:

      • kubectl apply -k manifests_dir

      You can track the rollout status using rollout status:

      • kubectl rollout status deployment/nginx-deployment

      Output

      Waiting for deployment "nginx-deployment" rollout to finish: 1 of 2 updated replicas are available...
      deployment "nginx-deployment" successfully rolled out

      An alternative to rollout status is the kubectl get command, along with the -w (watch) flag:

      • kubectl get deployment -w

      Output

      NAME               READY   UP-TO-DATE   AVAILABLE   AGE
      nginx-deployment   0/2     2            0           3s
      nginx-deployment   1/2     2            1           3s
      nginx-deployment   2/2     2            2           3s

      Using rollout pause and rollout resume, you can pause and resume the rollout of a Deployment:

      • kubectl rollout pause deployment/nginx-deployment

      Output

      deployment.extensions/nginx-deployment paused

      • kubectl rollout resume deployment/nginx-deployment

      Output

      deployment.extensions/nginx-deployment resumed

      Modifying a Running Deployment

      If you’d like to modify a running Deployment, you can make changes to its manifest file and then run kubectl apply again to apply the update. For example, we’ll modify the nginx-deployment.yaml file to change the number of replicas from 2 to 3:

      nginx-deployment.yaml

      . . .
      spec:
        replicas: 3
        selector:
          matchLabels:
            app: nginx
      . . .
      

      The kubectl diff command allows you to see a diff between currently running resources, and the changes proposed in the supplied configuration file:

      • kubectl diff -f nginx-deployment.yaml

      Now allow Kubernetes to perform the update using apply:

      • kubectl apply -f nginx-deployment.yaml

      Running another get deployment should confirm the addition of a third replica.

      If you run apply again without modifying the manifest file, Kubernetes will detect that no changes were made and won’t perform any action.

      Using rollout history you can see a list of the Deployment’s previous revisions:

      • kubectl rollout history deployment/nginx-deployment

      Output

      deployment.extensions/nginx-deployment
      REVISION  CHANGE-CAUSE
      1         <none>
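
      To inspect the details of a specific revision, you can pass the --revision flag:

      • kubectl rollout history deployment/nginx-deployment --revision=1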

      With rollout undo, you can revert a Deployment to any of its previous revisions:

      • kubectl rollout undo deployment/nginx-deployment --to-revision=1

      Deleting a Deployment

      To delete a running Deployment, use kubectl delete:

      • kubectl delete -f nginx-deployment.yaml

      Output

      deployment.apps "nginx-deployment" deleted

      Imperative Management

      You can also use a set of imperative commands to directly manipulate and manage Kubernetes resources.

      Creating a Deployment

      Use create to create an object from a file, URL, or STDIN. Note that unlike apply, if an object with the same name already exists, the operation will fail. The --dry-run flag allows you to preview the result of the operation without actually performing it:

      • kubectl create -f nginx-deployment.yaml --dry-run

      Output

      deployment.apps/nginx-deployment created (dry-run)

      We can now create the object:

      • kubectl create -f nginx-deployment.yaml

      Output

      deployment.apps/nginx-deployment created

      Modifying a Running Deployment

      Use scale to scale the number of replicas for the Deployment from 2 to 4:

      • kubectl scale --replicas=4 deployment/nginx-deployment

      Output

      deployment.extensions/nginx-deployment scaled

      You can edit any object in-place using kubectl edit. This will open up the object’s manifest in your default editor:

      • kubectl edit deployment/nginx-deployment

      You should see the following manifest file in your editor:

      nginx-deployment

      # Please edit the object below. Lines beginning with a '#' will be ignored,
      # and an empty file will abort the edit. If an error occurs while saving this file will be
      # reopened with the relevant failures.
      #
      apiVersion: extensions/v1beta1
      kind: Deployment
      . . . 
      spec:
        progressDeadlineSeconds: 600
        replicas: 4
        revisionHistoryLimit: 10
        selector:
          matchLabels:
      . . .
      

      Change the replicas value from 4 to 2, then save and close the file.

      Now run a get to inspect the changes:

      • kubectl get deployment/nginx-deployment

      Output

      NAME               READY   UP-TO-DATE   AVAILABLE   AGE
      nginx-deployment   2/2     2            2           6m40s

      We’ve successfully scaled the Deployment back down to 2 replicas on-the-fly. You can update most of a Kubernetes object’s fields in a similar manner.

      Another useful command for modifying objects in-place is kubectl patch. Using patch, you can update an object’s fields on-the-fly without having to open up your editor. patch also allows for more complex updates with various merging and patching strategies. To learn more about these, consult Update API Objects in Place Using kubectl patch.

      The following command will patch the nginx-deployment object to update the replicas field from 2 to 4; deploy is shorthand for the deployment object.

      • kubectl patch deploy nginx-deployment -p '{"spec": {"replicas": 4}}'

      Output

      deployment.extensions/nginx-deployment patched

      We can now inspect the changes:

      • kubectl get deployment/nginx-deployment

      Output

      NAME               READY   UP-TO-DATE   AVAILABLE   AGE
      nginx-deployment   4/4     4            4           18m

      You can also create a Deployment imperatively using the run command. run will create a Deployment using an image provided as a parameter:

      • kubectl run nginx-deployment --image=nginx --port=80 --replicas=2

      The expose command lets you quickly expose a running Deployment with a Kubernetes Service, allowing connections from outside your Kubernetes cluster:

      • kubectl expose deploy nginx-deployment --type=LoadBalancer --port=80 --name=nginx-svc

      Output

      service/nginx-svc exposed

      Here we’ve exposed the nginx-deployment Deployment as a LoadBalancer Service, opening up port 80 to external traffic and directing it to container port 80. We name the service nginx-svc. Using the LoadBalancer Service type, a cloud load balancer is automatically provisioned and configured by Kubernetes. To get the Service’s external IP address, use get:

      • kubectl get svc nginx-svc

      Output

      NAME        TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
      nginx-svc   LoadBalancer   10.245.26.242   203.0.113.0   80:30153/TCP   22m

      You can access the running Nginx containers by navigating to EXTERNAL-IP in your web browser.

      Inspecting Workloads and Debugging

      There are several commands you can use to get more information about workloads running in your cluster.

      Inspecting Kubernetes Resources

      kubectl get fetches a given Kubernetes resource and displays some basic information associated with it:

      • kubectl get deployment -o wide

      Output

      NAME               READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES   SELECTOR
      nginx-deployment   4/4     4            4           29m   nginx        nginx    app=nginx

      Since we did not provide a Deployment name or Namespace, kubectl fetches all Deployments in the current Namespace. The -o wide flag provides additional information like CONTAINERS and IMAGES.

      In addition to get, you can use describe to fetch a detailed description of the resource and associated resources:

      • kubectl describe deploy nginx-deployment

      Output

      Name:                   nginx-deployment
      Namespace:              default
      CreationTimestamp:      Wed, 11 Sep 2019 12:53:42 -0400
      Labels:                 run=nginx-deployment
      Annotations:            deployment.kubernetes.io/revision: 1
      Selector:               run=nginx-deployment
      . . .

      The set of information presented will vary depending on the resource type. You can also use this command without specifying a resource name, in which case information will be provided for all resources of that type in the current Namespace.

      explain allows you to quickly pull configurable fields for a given resource type:

      • kubectl explain deployment.spec

      By appending additional fields you can dive deeper into the field hierarchy:

      • kubectl explain deployment.spec.template.spec

      Gaining Shell Access to a Container

      To gain shell access into a running container, use exec. First, find the Pod that contains the running container you’d like access to:

      • kubectl get pods

      Output

      NAME                               READY   STATUS    RESTARTS   AGE
      nginx-deployment-8859878f8-7gfw9   1/1     Running   0          109m
      nginx-deployment-8859878f8-z7f9q   1/1     Running   0          109m

      Let’s exec into the first Pod. Since this Pod has only one container, we don’t need to use the -c flag to specify which container we’d like to exec into.

      • kubectl exec -i -t nginx-deployment-8859878f8-7gfw9 -- /bin/bash

      Output

      root@nginx-deployment-8859878f8-7gfw9:/#

      You now have shell access to the Nginx container. The -i flag passes STDIN to the container, and -t gives you an interactive TTY. The -- double-dash acts as a separator for the kubectl command and the command you’d like to run inside the container. In this case, we are running /bin/bash.

      To run commands inside the container without opening a full shell, omit the -i and -t flags, and substitute the command you’d like to run instead of /bin/bash:

      • kubectl exec nginx-deployment-8859878f8-7gfw9 ls

      Output

      bin boot dev etc home lib lib64 media . . .

      Fetching Logs

      Another useful command is logs, which prints logs for Pods and containers, including terminated containers.

      To stream logs to your terminal output, you can use the -f flag:

      • kubectl logs -f nginx-deployment-8859878f8-7gfw9

      Output

      10.244.2.1 - - [12/Sep/2019:17:21:33 +0000] "GET / HTTP/1.1" 200 612 "-" "203.0.113.0" "-"
      2019/09/16 17:21:34 [error] 6#6: *1 open() "/usr/share/nginx/html/favicon.ico" failed (2: No such file or directory), client: 10.244.2.1, server: localhost, request: "GET /favicon.ico HTTP/1.1", host: "203.0.113.0", referrer: "http://203.0.113.0"
      . . .

      This command will keep running in your terminal until interrupted with CTRL+C. You can omit the -f flag if you’d like to print log output and exit immediately.

      You can also use the -p flag to fetch logs for a terminated container. When this option is used within a Pod that had a prior running container instance, logs will print output from the terminated container:

      • kubectl logs -p nginx-deployment-8859878f8-7gfw9

      The -c flag allows you to specify the container you’d like to fetch logs from, if the Pod has multiple containers. You can use the --all-containers=true flag to fetch logs from all containers in the Pod.
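
      For example, since the container in our sample Deployment is named nginx, either of the following would work:

      • kubectl logs nginx-deployment-8859878f8-7gfw9 -c nginx
      • kubectl logs nginx-deployment-8859878f8-7gfw9 --all-containers=true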

      Port Forwarding and Proxying

      To gain network access to a Pod, you can use port-forward:

      • sudo kubectl port-forward pod/nginx-deployment-8859878f8-7gfw9 80:80

      Output

      Forwarding from 127.0.0.1:80 -> 80
      Forwarding from [::1]:80 -> 80

      In this case we use sudo because local port 80 is a protected port. For most other ports you can omit sudo and run the kubectl command as your system user.

      Here we forward local port 80 (preceding the colon) to the Pod’s container port 80 (after the colon).

      You can also use deploy/nginx-deployment as the resource type and name to forward to. If you do this, the local port will be forwarded to the Pod selected by the Deployment.
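
      For example, to forward unprivileged local port 8080 to the sample Deployment’s container port 80:

      • kubectl port-forward deploy/nginx-deployment 8080:80

      You can then reach Nginx at http://localhost:8080 without needing sudo.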

      The proxy command can be used to access the Kubernetes API server locally:

      • kubectl proxy --port=8080

      Output

      Starting to serve on 127.0.0.1:8080

      In another shell, use curl to explore the API:

      curl http://localhost:8080/api/
      

      Output

      {
        "kind": "APIVersions",
        "versions": [
          "v1"
        ],
        "serverAddressByClientCIDRs": [
          {
            "clientCIDR": "0.0.0.0/0",
            "serverAddress": "203.0.113.0:443"
          }
        ]
      }
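
      While the proxy is running, you can explore other API endpoints in the same way; for example, to list the Pods in the default Namespace:

      curl http://localhost:8080/api/v1/namespaces/default/pods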

      Close the proxy by pressing CTRL+C.

      Conclusion

      This guide covers some of the more common kubectl commands you may use when managing a Kubernetes cluster and workloads you’ve deployed to it.

      You can learn more about kubectl by consulting the official Kubernetes reference documentation.

      There are many more commands and variations that you may find useful as part of your work with kubectl. To learn more about all of your available options, you can run:

      kubectl --help
      




      Getting Started with Kubernetes: Use kubeadm to Deploy a Cluster on Linode


      Updated by Linode

      Contributed by Linode

      Linode offers several pathways for users to easily deploy a Kubernetes cluster. If you prefer the command line, you can create a Kubernetes cluster with one command using the Linode CLI’s k8s-alpha plugin and Terraform. Or, if you prefer a full-featured GUI, Linode’s Rancher integration enables you to deploy and manage Kubernetes clusters with a simple web interface. The Linode Kubernetes Engine, currently under development with an early access beta version on its way this summer, allows you to spin up a Kubernetes cluster with Linode handling the management and maintenance of your control plane. These are all great options for production-ready deployments.

      Kubeadm is a cloud provider agnostic tool that automates many of the tasks required to get a cluster up and running. Users of kubeadm can run a few simple commands on individual servers to turn them into a Kubernetes cluster consisting of a master node and worker nodes. This guide will walk you through installing kubeadm and using it to deploy a Kubernetes cluster on Linode. While the kubeadm approach requires more manual steps than other Kubernetes cluster creation pathways offered by Linode, this solution will be covered as a way to dive deeper into the various components that make up a Kubernetes cluster and the ways in which they interact with each other to provide a scalable and reliable container orchestration mechanism.

      Note

      This guide’s example instructions will result in the creation of three billable Linodes. Information on how to tear down the Linodes is provided at the end of the guide. Interacting with the Linodes via the command line will provide the most opportunity for learning; however, this guide is written so that users can also benefit by reading along.

      Before You Begin

      1. Deploy three Linodes running Ubuntu 18.04 with the following system requirements:

        • One Linode to use as the master Node with 4GB RAM and 2 CPU cores.
        • Two Linodes to use as the Worker Nodes each with 1GB RAM and 1 CPU core.
      2. Follow the Getting Started and the Securing Your Server guides for instructions on setting up your Linodes. The steps in this guide assume the use of a limited user account with sudo privileges.

      Note

      When following the Getting Started guide, make sure that each Linode is using a different hostname. Not following this guideline will leave you unable to join some or all nodes to the cluster in a later step.

      3. Disable swap memory on your Linodes. Kubernetes requires that you disable swap memory on any cluster nodes to prevent the Kubernetes scheduler (kube-scheduler) from ever sending a pod to a node that has run out of CPU/memory or reached its designated CPU/memory limit.

        sudo swapoff -a
        

        Verify that your swap has been disabled. You should expect to see a value of 0 returned.

        cat /proc/meminfo | grep 'SwapTotal'
        

        To learn more about managing compute resources for containers, see the official Kubernetes documentation.
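
        Note that swapoff -a does not persist across reboots. To make the change permanent, a common approach (assuming your swap device or file has an entry in /etc/fstab) is to comment that entry out:

        # Comments out any /etc/fstab line containing ' swap '
        sudo sed -i '/ swap / s/^/#/' /etc/fstab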

      4. Read the Beginner’s Guide to Kubernetes to familiarize yourself with the major components and concepts of Kubernetes. The current guide assumes a working knowledge of common Kubernetes concepts and terminology.

      Build a Kubernetes Cluster

      Kubernetes Cluster Architecture

      A Kubernetes cluster consists of a master node and worker nodes. The master node hosts the control plane, which is the combination of all the components that provide it the ability to maintain the desired cluster state. This cluster state is defined by manifest files and the kubectl tool. While the control plane components can be run on any cluster node, it is a best practice to isolate the control plane on its own node and to run any application containers on a separate worker node. A cluster can have a single worker node or up to 5000. Each worker node must be able to maintain running containers in a pod and be able to communicate with the master node’s control plane.

      The table below provides a list of the Kubernetes tooling you will need to install on your master and worker nodes in order to meet the minimum requirements for a functioning Kubernetes cluster as described above.

      • kubeadm (master and worker nodes): This tool provides a simple way to create a Kubernetes cluster by automating the tasks required to get a cluster up and running. New Kubernetes users with access to a cloud hosting provider, like Linode, can use kubeadm to build out a playground cluster. kubeadm is also used as a foundation to create more mature Kubernetes deployment tooling.
      • Container runtime (master and worker nodes): A container runtime is responsible for running the containers that make up a cluster’s pods. This guide will use Docker as the container runtime.
      • kubelet (master and worker nodes): kubelet ensures that all pod containers running on a node are healthy and meet the specifications for a pod’s desired behavior.
      • kubectl (master and worker nodes): A command line tool used to manage a Kubernetes cluster.
      • Control plane (master node only): The series of services that form the Kubernetes master structure, allowing it to control the cluster. kubeadm runs the control plane services as containers on the master node. The control plane will be created when you initialize kubeadm later in this guide.

      Install the Container Runtime: Docker

      Docker is the software responsible for running the pod containers on each node. You can use other container runtime software with Kubernetes, such as Containerd and CRI-O. You will need to install Docker on all three Linodes.

      These steps install Docker Community Edition (CE) using the official Ubuntu repositories. To install on another distribution, see the official installation page.

      1. Remove any older installations of Docker that may be on your system:

        sudo apt remove docker docker-engine docker.io
        
      2. Make sure you have the necessary packages to allow the use of Docker’s repository:

        sudo apt install apt-transport-https ca-certificates curl software-properties-common
        
      3. Add Docker’s GPG key:

        curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
        
      4. Verify the fingerprint of the GPG key:

        sudo apt-key fingerprint 0EBFCD88
        

        You should see output similar to the following:

          
        pub   4096R/0EBFCD88 2017-02-22
                Key fingerprint = 9DC8 5822 9FC7 DD38 854A  E2D8 8D81 803C 0EBF CD88
        uid                  Docker Release (CE deb) 
        sub   4096R/F273FCD8 2017-02-22
        
        
      5. Add the stable Docker repository:

        sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
        
      6. Update your package index and install Docker CE:

        sudo apt update
        sudo apt install docker-ce
        
      7. Add your limited Linux user account to the docker group. Replace $USER with your username:

        sudo usermod -aG docker $USER
        

        Note

        After entering the usermod command, you will need to close your SSH session and open a new one for this change to take effect.

      8. Check that the installation was successful by running the built-in “Hello World” program:

        sudo docker run hello-world
        
      9. Set up the Docker daemon to use systemd as the cgroup driver, instead of the default cgroupfs. This is recommended so that kubelet and Docker both use the same cgroup manager, which makes it easier for Kubernetes to know which resources are available on your cluster’s nodes.

        sudo bash -c 'cat > /etc/docker/daemon.json <<EOF
        {
          "exec-opts": ["native.cgroupdriver=systemd"],
          "log-driver": "json-file",
          "log-opts": {
            "max-size": "100m"
          },
          "storage-driver": "overlay2"
        }
        EOF'
        
      10. Create a systemd directory for Docker:

        sudo mkdir -p /etc/systemd/system/docker.service.d
        
      11. Restart Docker:

        sudo systemctl daemon-reload
        sudo systemctl restart docker
        

      Install kubeadm, kubelet, and kubectl

      Complete the steps outlined in this section on all three Linodes.

      1. Update the system and install the dependencies required for installation:

        sudo apt-get update && sudo apt-get install -y apt-transport-https curl
        
      2. Add the required GPG key to your apt-sources keyring to authenticate the Kubernetes related packages you will install:

        curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
        
      3. Add Kubernetes to the package manager’s list of sources:

        sudo bash -c "cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
        deb https://apt.kubernetes.io/ kubernetes-xenial main
        EOF"
        
      4. Update apt, install kubeadm, kubelet, and kubectl, and hold the installed packages at their installed versions:

        sudo apt-get update
        sudo apt-get install -y kubelet kubeadm kubectl
        sudo apt-mark hold kubelet kubeadm kubectl
        
      5. Verify that kubeadm, kubelet, and kubectl have been installed by retrieving their version information. Each command should return version information about each package.

        kubeadm version
        kubelet --version
        kubectl version
        

      Set up the Kubernetes Control Plane

      After installing the Kubernetes related tooling on all your Linodes, you are ready to set up the Kubernetes control plane on the master node. The control plane is responsible for allocating resources to your cluster, maintaining the health of your cluster, and ensuring that it meets the minimum requirements you designate for the cluster.

      The primary components of the control plane are the kube-apiserver, kube-controller-manager, kube-scheduler, and etcd. kubeadm provides a way to easily initialize the Kubernetes master node with all the necessary control plane components. For more information on each control plane component, see the Beginner’s Guide to Kubernetes.

      In addition to the baseline control plane components, there are several add-ons that can be installed on the master node to enable additional cluster features. You will need to install a networking and network policy provider add-on that implements Kubernetes’ network model on the cluster’s pod network.

      This guide will use Calico as the pod network add-on. Calico is a secure and open source L3 networking and network policy provider for containers. There are several other network and network policy providers to choose from. To view a full list of providers, refer to the official Kubernetes documentation.

      Note

      kubeadm only supports Container Network Interface (CNI) based networks. CNI consists of a specification and libraries for writing plugins to configure network interfaces in Linux containers.

      1. Initialize kubeadm on the master node. This command runs checks against the node to ensure it contains all required Kubernetes dependencies. If the checks pass, it then installs the control plane components.

        When issuing this command, it is necessary to set the pod network range that Calico will use to allow your pods to communicate with each other. It is recommended to use the private IP address space, 10.2.0.0/16.

        Note

        The pod network IP range should not overlap with the service IP network range. The default service IP address range is 10.96.0.0/12. You can provide an alternative service ip address range using the --service-cidr=10.97.0.0/12 option when initializing kubeadm. Replace 10.97.0.0/12 with the desired service IP range.

        For a full list of available kubeadm initialization options, see the official Kubernetes documentation.

        sudo kubeadm init --pod-network-cidr=10.2.0.0/16
        

        You should see a similar output:

          
        Your Kubernetes control-plane has initialized successfully!
        
        To start using your cluster, you need to run the following as a regular user:
        
          mkdir -p $HOME/.kube
          sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
          sudo chown $(id -u):$(id -g) $HOME/.kube/config
        
        You should now deploy a pod network to the cluster.
        Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
          https://kubernetes.io/docs/concepts/cluster-administration/addons/
        
        Then you can join any number of worker nodes by running the following on each as root:
        
        kubeadm join 192.0.2.0:6443 --token udb8fn.nih6n1f1aijmbnx5 \
            --discovery-token-ca-cert-hash sha256:b7c01e83d63808a4a14d2813d28c127d3a1c4e1b6fc6ba605fe4d2789d654f26
              
        

        The kubeadm join command will be used in the Join a Worker Node to the Cluster section of this guide to bootstrap the worker nodes to the Kubernetes cluster. This command should be kept handy for later use. Below is a description of the required options you will need to pass in with the kubeadm join command:

        • The master node’s IP address and the Kubernetes API server’s port number. In the example output, this is 192.0.2.0:6443. The Kubernetes API server’s port number is 6443 by default on all Kubernetes installations.
        • A bootstrap token. The bootstrap token has a 24-hour TTL (time to live). A new bootstrap token can be generated if your current token expires.
        • A CA key hash. This is used to verify the authenticity of the data retrieved from the Kubernetes API server during the bootstrap process.
      2. Copy the admin.conf configuration file to your limited user account. This file allows you to communicate with your cluster via kubectl and provides superuser privileges over the cluster. It contains a description of the cluster, users, and contexts. Copying the admin.conf to your limited user account will provide you with administrative privileges over your cluster.

        mkdir -p $HOME/.kube
        sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
        sudo chown $(id -u):$(id -g) $HOME/.kube/config
        
      3. Install the necessary Calico manifests to your master node and apply them using kubectl. The first file, rbac-kdd.yaml, works with Kubernetes’ role-based access control (RBAC) to provide Calico components access to necessary parts of the Kubernetes API. The second file, calico.yaml, configures a self-hosted Calico installation that uses the Kubernetes API directly as the datastore (instead of etcd).

        kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/rbac-kdd.yaml
        kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml
        

      Inspect the Master Node with Kubectl

      After completing the previous section, your Kubernetes master node is ready with all the necessary components to manage a cluster. To gain a better understanding of all the parts that make up the master’s control plane, this section will walk you through inspecting your master node. If you have not yet reviewed the Beginner’s Guide to Kubernetes, it will be helpful to do so prior to continuing with this section as it relies on the understanding of basic Kubernetes concepts.

      1. View the current state of all nodes in your cluster. At this stage, the only node you should expect to see is the master node, since worker nodes have yet to be bootstrapped. A STATUS of Ready indicates that the master node contains all necessary components, including the pod network add-on, to start managing clusters.

        kubectl get nodes
        

        Your output should resemble the following:

          
        NAME          STATUS   ROLES    AGE   VERSION
        kube-master   Ready    master   1h    v1.14.1
            
        
      2. Inspect the available namespaces in your cluster.

        kubectl get namespaces
        

        Your output should resemble the following:

          
        NAME              STATUS   AGE
        default           Active   23h
        kube-node-lease   Active   23h
        kube-public       Active   23h
        kube-system       Active   23h
            
        

        Below is an overview of each namespace installed by default on the master node by kubeadm:

        • default: The default namespace contains objects with no other assigned namespace. By default, a Kubernetes cluster will instantiate a default namespace when provisioning the cluster to hold the default set of Pods, Services, and Deployments used by the cluster.
        • kube-system: The namespace for objects created by the Kubernetes system. This includes all resources used by the master node.
        • kube-public: This namespace is created automatically and is readable by all users. It contains information, like certificate authority data (CA), that helps kubeadm join and authenticate worker nodes.
        • kube-node-lease: The kube-node-lease namespace contains lease objects that are used by kubelet to determine node health. kubelet creates and periodically renews a Lease on a node. The node lifecycle controller treats this lease as a health signal. kube-node-lease was released to beta in Kubernetes 1.14.
      3. View all resources available in the kube-system namespace. The kube-system namespace contains the widest range of resources, since it houses all control plane resources. Replace kube-system with another namespace to view its corresponding resources.

        kubectl get all -n kube-system
        

      Join a Worker Node to the Cluster

      Now that your Kubernetes master node is set up, you can join worker nodes to your cluster. In order for a worker node to join a cluster, it must trust the cluster’s control plane, and the control plane must trust the worker node. This trust is managed via a shared bootstrap token and a certificate authority (CA) key hash. kubeadm handles the exchange between the control plane and the worker node. At a high level, the worker node bootstrap process is as follows:

      1. kubeadm retrieves information about the cluster from the Kubernetes API server. The bootstrap token and CA key hash are used to ensure the information originates from a trusted source.

      2. kubelet can take over and begin the bootstrap process, since it has the necessary cluster information retrieved in the previous step. The bootstrap token is used to gain access to the Kubernetes API server and submit a certificate signing request (CSR), which is then signed by the control plane.

      3. The worker node’s kubelet is now able to connect to the Kubernetes API server using the node’s established identity.

      Before continuing, you will need to make sure that you know your Kubernetes API server’s IP address, that you have a bootstrap token, and a CA key hash. This information was provided when kubeadm was initialized on the master node in the Set up the Kubernetes Control Plane section of this guide. If you no longer have this information, you can regenerate the necessary information from the master node.


      Regenerate a Bootstrap Token

      These commands should be issued from your master node.

      1. Generate a new bootstrap token and display the kubeadm join command with the necessary options to join a worker node to the master node’s control plane:

        kubeadm token create --print-join-command
        
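      2. If you also need to recompute the CA key hash, it can be derived from the cluster’s CA certificate. The openssl pipeline below follows the approach described in the Kubernetes documentation; run it on the master node:

        # Prints the sha256 hash of the cluster CA's public key
        openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'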

      Follow the steps below on each node you would like to bootstrap to the cluster as a worker node.

      1. SSH into the Linode that will be used as a worker node in the Kubernetes cluster.

        ssh username@192.0.2.1
        
      2. Join the node to your cluster using kubeadm. Ensure you replace 192.0.2.0:6443 with the IP address for your master node along with its Kubernetes API server’s port number, udb8fn.nih6n1f1aijmbnx5 with your bootstrap token, and sha256:b7c01e83d63808a4a14d2813d28c127d3a1c4e1b6fc6ba605fe4d2789d654f26 with your CA key hash. The bootstrap process will take a few moments.

        sudo kubeadm join 192.0.2.0:6443 --token udb8fn.nih6n1f1aijmbnx5 \
            --discovery-token-ca-cert-hash sha256:b7c01e83d63808a4a14d2813d28c127d3a1c4e1b6fc6ba605fe4d2789d654f26
        

        When the bootstrap process has completed, you should see a similar output:

          
          This node has joined the cluster:
        * Certificate signing request was sent to apiserver and a response was received.
        * The Kubelet was informed of the new secure connection details.
        
        Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
              
        
      3. Repeat the steps outlined above on the second worker node to bootstrap it to the cluster.

      4. SSH into the master node and verify the worker nodes have joined the cluster:

         kubectl get nodes
        

        You should see a similar output.

          
        NAME          STATUS   ROLES    AGE     VERSION
        kube-master   Ready    master   1d22h   v1.14.1
        kube-node-1   Ready    <none>   1d22h   v1.14.1
        kube-node-2   Ready    <none>   1d22h   v1.14.1
              
        

      Next Steps

      Now that you have a Kubernetes cluster up and running, you can begin experimenting with the various ways to configure pods, group resources, and deploy services that are exposed to the public internet. To help you get started with this, move on to follow along with the Deploy a Static Site on Linode using Kubernetes guide.

      Tear Down Your Cluster

      If you are done experimenting with your Kubernetes Cluster, be sure to remove the Linodes you have running in order to avoid being further billed for them. See the Removing Services section of the Billing and Payments guide.



      This guide is published under a CC BY-ND 4.0 license.


