
      How to Migrate From k8s-alpha CLI to Terraform


      Updated by Linode. Contributed by Linode.

      The k8s-alpha CLI is deprecated and will be removed from the linode-cli on March 31, 2020. After that date, you will no longer be able to create or manage clusters with the k8s-alpha CLI plugin. However, you will still be able to manage your clusters using the Kubernetes Terraform installer for Linode Instances.

      In This Guide

      You will use the Kubernetes Terraform installer for Linode Instances to continue managing clusters created with the k8s-alpha CLI plugin after the EOL date. You will learn how to:

      Manage k8s-alpha Clusters

      The k8s-alpha CLI plugin was based on Terraform, so it generated a set of Terraform configuration files for each cluster it created. These Terraform files are found within the .k8s-alpha-linode directory, which you can change into with the following command:

      cd $HOME/.k8s-alpha-linode
      

      If you list the contents of this directory, you will see a subdirectory for each cluster you’ve created with the k8s-alpha CLI plugin. Each cluster’s subdirectory contains files similar to the following:

        
      drwxr-xr-x  5 username  staff   160 Dec 11 08:10 .terraform
      -rw-r--r--  1 username  staff   705 Dec 11 08:10 cluster.tf
      -rw-r--r--  1 username  staff  5456 Dec 11 08:14 example-cluster.conf
      -rw-r--r--  1 username  staff  5488 Dec 11 08:16 example-cluster_new.conf
      drwxr-xr-x  3 username  staff    96 Dec 11 08:10 terraform.tfstate.d
      
      
      • Both of the .conf files are kubeconfig files for this cluster.
      • terraform.tfstate.d is a Terraform state directory.
      • .terraform is a hidden directory which contains Terraform configuration files.
      • cluster.tf is the Terraform module file. This is the most important file here because it will allow you to scale, upgrade, and delete your cluster.

      Scale a Cluster

      1. Open cluster.tf with the text editor of your choice. The contents will be similar to the following:

        cluster.tf

        variable "server_type_node" {
          default = "g6-standard-2"
        }
        variable "nodes" {
          default = 3
        }
        variable "server_type_master" {
          default = "g6-standard-2"
        }
        variable "region" {
          default = "us-east"
        }
        variable "ssh_public_key" {
          default = "/Users/username/.ssh/id_rsa.pub"
        }
        module "k8s" {
          source  = "git::https://github.com/linode/terraform-linode-k8s.git?ref=for-cli"
        
          linode_token = "<your api token>"
        
          linode_group = "kabZmZ3TA0r-mycluster"
        
          server_type_node = "${var.server_type_node}"
        
          nodes = "${var.nodes}"
        
          server_type_master = "${var.server_type_master}"
        
          region = "${var.region}"
        
          ssh_public_key = "${var.ssh_public_key}"
        }
      2. To scale your cluster, edit the value of the nodes variable. To resize the number of nodes from 3 to 5, make the following edit and save your changes:

        variable "nodes" {
          default = 5
        }
        
      3. Once your edit is made, make sure you are in the ~/.k8s-alpha-linode/clustername directory and apply your changes with Terraform:

        terraform apply
        

        Note

        You may need to use the original Terraform version used to deploy the cluster (either Terraform 0.11.X or Terraform 0.12.X). If you do not, you will see syntax errors.

      4. Once this is completed, you’ll see a prompt reviewing your changes and asking if you would like to accept them. To proceed, type yes and Terraform will proceed to make the changes. This process may take a few moments.

        Note

        When prompted, you may notice that one item in your plan is marked as destroy. This is generally a “null resource” or a local script execution and is not indicative of an unintended change.

      5. After Terraform has finished, your cluster will be resized. To confirm, enter the following command to list all nodes in your cluster, replacing the string mycluster with the name of the cluster you edited:

        kubectl --kubeconfig=mycluster.conf get nodes
        
      6. The output will then list an entry for each node:

          
        kubectl --kubeconfig=mycluster.conf get nodes
        NAME                 STATUS   ROLES    AGE     VERSION
        mycluster-master-1   Ready    master   21m     v1.13.6
        mycluster-node-1     Ready    <none>   18m     v1.13.6
        mycluster-node-2     Ready    <none>   18m     v1.13.6
        mycluster-node-3     Ready    <none>   18m     v1.13.6
        mycluster-node-4     Ready    <none>   4m26s   v1.13.6
        mycluster-node-5     Ready    <none>   4m52s   v1.13.6
        
        
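If you prefer not to open an editor, the edit in step 2 can also be scripted. The sketch below demonstrates the substitution on a throwaway copy of a minimal cluster.tf (the temporary directory and file exist only for the demonstration); on a real cluster you would run the sed command inside ~/.k8s-alpha-linode/<clustername> and then follow with terraform apply as in step 3:

```shell
# Sketch: bump the "nodes" default without opening an editor.
# Demonstrated on a throwaway copy of a minimal cluster.tf.
workdir=$(mktemp -d)
cat > "$workdir/cluster.tf" <<'EOF'
variable "nodes" {
  default = 3
}
EOF

# Constrain the substitution to the "nodes" variable block so other
# defaults (e.g. server types) are left untouched. A .bak backup is kept.
sed -i.bak '/variable "nodes"/,/}/ s/default = 3/default = 5/' "$workdir/cluster.tf"

# Confirm the change before applying it with Terraform.
grep 'default' "$workdir/cluster.tf"
```

The address range `/variable "nodes"/,/}/` limits sed to the lines between the variable declaration and its closing brace, which avoids accidentally rewriting another variable that happens to share the same default.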

      Upgrade a Cluster

      You may have noticed that the Terraform module file, cluster.tf, refers to a specific branch or git commit hash referencing the remote Kubernetes Terraform installer for Linode Instances module on GitHub. The following section will outline how to upgrade your cluster to the latest version.

      For example, your source variable may have a value that points to the git branch ref for-cli. To perform an upgrade, change this value to point at the hash of the latest commit.

      1. Visit the branch of the Terraform module you’re using on GitHub. Review the commit history and copy the hash of the most recent commit by clicking the clipboard icon next to it. At the time of this writing, the most recent hash is as follows:

          
        5e68ff7beee9c36aa4a4f5599f3973f753b1cd9e
        
        
      2. Edit cluster.tf to prepare for the upgrade. Update the following section using the hash you copied to appear as follows:

        cluster.tf

        source = "git::https://github.com/linode/terraform-linode-k8s.git?ref=5e68ff7beee9c36aa4a4f5599f3973f753b1cd9e"
      3. To apply these changes, re-initialize the module by running the following command:

        terraform init
        
      4. Once this has completed, apply your changes with the following command:

        terraform apply
        

        Note

        Depending on the changes that have been configured, you may or may not see the upgrade perform actions. For example, in this case, because the only change was to the Kubernetes version, no actions were taken.
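The ref swap in step 2 can likewise be scripted. This sketch performs the substitution on a throwaway copy of a minimal module block; the hash is the example value from step 1:

```shell
# Sketch: pin the module source to a specific commit hash.
# Demonstrated on a throwaway copy of a minimal module block.
workdir=$(mktemp -d)
cat > "$workdir/cluster.tf" <<'EOF'
module "k8s" {
  source = "git::https://github.com/linode/terraform-linode-k8s.git?ref=for-cli"
}
EOF

# Example hash from step 1; substitute the hash you copied from GitHub.
new_ref="5e68ff7beee9c36aa4a4f5599f3973f753b1cd9e"
sed -i.bak "s|ref=for-cli|ref=${new_ref}|" "$workdir/cluster.tf"

# Confirm the source line now points at the pinned commit.
grep 'source' "$workdir/cluster.tf"
```

After editing the real file this way, re-run `terraform init` and `terraform apply` as in steps 3 and 4.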

      Delete a Cluster

      To destroy a cluster, navigate to the directory containing your cluster’s files, and enter the terraform destroy command:

      cd ~/.k8s-alpha-linode/mycluster
      terraform destroy
      

      Terraform will prompt you to confirm the action, and on confirmation will proceed to destroy all associated resources. If this process is interrupted for any reason, you can run the command again at any time to complete the process.

      Create a Cluster

      1. Create a new directory to house the new cluster’s configuration files in the ~/.k8s-alpha-linode directory. In this example the cluster name is mynewcluster:

        cd ~/.k8s-alpha-linode
        mkdir mynewcluster
        
      2. Create a Terraform module file named cluster.tf in this new directory with your desired configuration. Replace the values of the ssh_public_key and linode_token variables with your own, and adjust the cluster’s other configuration values as needed. The example configuration below creates a cluster with a 4GB master node and a node pool of three 4GB Linodes, hosted in the us-east region:

        cluster.tf

        variable "server_type_node" {
          default = "g6-standard-2"
        }
        variable "nodes" {
          default = 3
        }
        variable "server_type_master" {
          default = "g6-standard-2"
        }
        variable "region" {
          default = "us-east"
        }
        variable "ssh_public_key" {
          default = "/Users/username/.ssh/id_rsa.pub"
        }
        module "k8s" {
          source  = "git::https://github.com/linode/terraform-linode-k8s.git?ref=for-cli"
        
          linode_token = "<your api token>"
        
          linode_group = "mynewcluster"
        
          server_type_node = "${var.server_type_node}"
        
          nodes = "${var.nodes}"
        
          server_type_master = "${var.server_type_master}"
        
          region = "${var.region}"
        
          ssh_public_key = "${var.ssh_public_key}"
        }
      3. Initialize and apply your new Terraform configuration:

        terraform workspace new mynewcluster
        terraform init
        terraform apply
        
      4. When prompted, review your changes and enter yes to continue.

      5. Once completed, you’ll see a kubeconfig file, mynewcluster.conf, in this directory.

      6. To access your deployment, export this kubeconfig file and test the connection with kubectl:

        export KUBECONFIG=$(pwd)/mynewcluster.conf
        kubectl get pods --all-namespaces
        
      7. Once completed, you should see that your new cluster is active and available, with output similar to the following:

          
        NAMESPACE     NAME                                            READY   STATUS    RESTARTS   AGE
        kube-system   calico-node-4kp2d                               2/2     Running   0          22m
        kube-system   calico-node-84fj7                               2/2     Running   0          21m
        kube-system   calico-node-nnns7                               2/2     Running   0          21m
        kube-system   calico-node-xfkvs                               2/2     Running   0          23m
        kube-system   ccm-linode-c66gk                                1/1     Running   0          23m
        kube-system   coredns-54ff9cd656-jqszt                        1/1     Running   0          23m
        kube-system   coredns-54ff9cd656-zvgbd                        1/1     Running   0          23m
        kube-system   csi-linode-controller-0                         3/3     Running   0          23m
        kube-system   csi-linode-node-2tbcd                           2/2     Running   0          21m
        kube-system   csi-linode-node-gfvgx                           2/2     Running   0          21m
        kube-system   csi-linode-node-lbt5s                           2/2     Running   0          21m
        kube-system   etcd-mynewcluster-master-1                      1/1     Running   0          22m
        kube-system   external-dns-d4cfd5855-25x65                    1/1     Running   0          23m
        kube-system   kube-apiserver-mynewcluster-master-1            1/1     Running   0          22m
        kube-system   kube-controller-manager-mynewcluster-master-1   1/1     Running   0          22m
        kube-system   kube-proxy-29sgx                                1/1     Running   0          21m
        kube-system   kube-proxy-5w78s                                1/1     Running   0          22m
        kube-system   kube-proxy-7ptxp                                1/1     Running   0          21m
        kube-system   kube-proxy-7v8pr                                1/1     Running   0          23m
        kube-system   kube-scheduler-mynewcluster-master-1            1/1     Running   0          22m
        kube-system   kubernetes-dashboard-57df4db6b-rtzvm            1/1     Running   0          23m
        kube-system   metrics-server-68d85f76bb-68bl5                 1/1     Running   0          23m
        
        
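Before exporting KUBECONFIG in step 6, it can be useful to sanity-check which API server the new kubeconfig points at. The sketch below demonstrates this on a minimal stand-in file; the server address is a documentation placeholder, not a real endpoint:

```shell
# Sketch: inspect the API server endpoint recorded in a kubeconfig file.
# A minimal stand-in file is created here purely for demonstration.
workdir=$(mktemp -d)
cat > "$workdir/mynewcluster.conf" <<'EOF'
apiVersion: v1
kind: Config
clusters:
- name: mynewcluster
  cluster:
    server: https://192.0.2.10:6443
EOF

# The server: line shows which endpoint kubectl will talk to.
grep 'server:' "$workdir/mynewcluster.conf"

export KUBECONFIG="$workdir/mynewcluster.conf"
```

On a real cluster you would run the grep against ~/.k8s-alpha-linode/mynewcluster/mynewcluster.conf before pointing kubectl at it.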

      More Information

      You may wish to consult the following resources for additional information on this topic. While these are provided in the hope that they will be useful, please note that we cannot vouch for the accuracy or timeliness of externally hosted materials.

      This guide is published under a CC BY-ND 4.0 license.




      How to Deploy Kubernetes on Linode with the k8s-alpha CLI


      Updated by Linode. Written by Linode.


      Caution

      This guide’s example instructions will create several billable resources on your Linode account. If you do not want to keep using the example cluster that you create, be sure to delete it when you have finished the guide.

      If you remove the resources afterward, you will only be billed for the hour(s) that the resources were present on your account. Consult the Billing and Payments guide for detailed information about how hourly billing works and for a table of plan pricing.

      What is the k8s-alpha CLI?

      The Linode k8s-alpha CLI is a plugin for the Linode CLI that offers quick, single-command deployments of Kubernetes clusters on your Linode account. When you have it installed, creating a cluster can be as simple as:

      linode-cli k8s-alpha create example-cluster
      

      The clusters that it creates are pre-configured with useful Linode integrations, like our CCM, CSI, and ExternalDNS plugins. In addition, the Kubernetes metrics-server is pre-installed, so you can run kubectl top. Nodes in your clusters are also labeled with their Linode Region and Linode Type, which Kubernetes controllers can use when scheduling pods.


      What are Linode’s CCM, CSI, and ExternalDNS plugins?

      The CCM (Cloud Controller Manager), CSI (Container Storage Interface), and ExternalDNS plugins are Kubernetes addons published by Linode. You can use them to create NodeBalancers, Block Storage Volumes, and DNS records through your Kubernetes manifests.
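As a sketch of how one of these addons is consumed through an ordinary manifest: with the CCM running in a cluster, a Service of type LoadBalancer is fulfilled by provisioning a Linode NodeBalancer. The names and ports below are illustrative:

```yaml
# Hypothetical example: with the Linode CCM installed, this Service
# provisions a NodeBalancer in front of pods labeled app: web.
apiVersion: v1
kind: Service
metadata:
  name: web-lb          # illustrative name
spec:
  type: LoadBalancer    # fulfilled by a Linode NodeBalancer via the CCM
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 8080
```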

      The k8s-alpha CLI will create two kinds of nodes on your account:

      • Master nodes will run the components of your Kubernetes control plane, and will also run etcd.

      • Worker nodes will run your workloads.

      These nodes will all exist as billable services on your account. You can specify how many master and worker nodes are created and also your nodes’ Linode plan and the data center they are located in.

      Alternatives for Creating Clusters

      Another easy way to create clusters is with Rancher. Rancher is a web application that provides a GUI interface for cluster creation and for management of clusters. Rancher also provides easy interfaces for deploying and scaling apps on your clusters, and it has a built-in catalog of curated apps to choose from.

      To get started with Rancher, review our How to Deploy Kubernetes on Linode with Rancher 2.2 guide. Rancher is capable of importing clusters that were created outside of it, so you can still use it even if you create your clusters through the k8s-alpha CLI or some other means.

      Beginner’s Resources

      If you haven’t used Kubernetes before, we recommend reading through our introductory guides on the subject:

      Before You Begin

      1. You will need to have a personal access token for Linode’s API. If you don’t have one already, follow the Get an Access Token section of our API guide and create a token with read/write permissions.

      2. If you do not already have a public-private SSH key pair, you will need to generate one. Follow the Generate a Key Pair section of our Public Key Authentication guide for instructions.

        Note

        If you’re unfamiliar with the concept of public-private key pairs, the introduction to our Public Key Authentication guide explains what they are.

      Install the k8s-alpha CLI

      The k8s-alpha CLI is bundled with the Linode CLI, and using it requires the installation and configuration of a few dependencies:

      • Terraform: The k8s-alpha CLI creates clusters by defining a resource plan in Terraform and then having Terraform create those resources. If you’re interested in how Terraform works, you can review our Beginner’s Guide to Terraform, but doing so is not required to use the k8s-alpha CLI.

      • kubectl: kubectl is the client software for Kubernetes, and it is used to interact with your Kubernetes cluster’s API.

      • SSH agent: Terraform will rely on public-key authentication to connect to the Linodes that it creates, and you will need to configure your SSH agent on your computer with the keys that Terraform should use.

      Install the Linode CLI

      Follow the Install the CLI section of our CLI guide to install the Linode CLI. If you already have the CLI, upgrade it to the latest version available:

      pip install --upgrade linode-cli
      

      Install Terraform

      Follow the instructions in the Install Terraform section of our Use Terraform to Provision Linode Environments guide.

      Install kubectl

      macOS:

      Install via Homebrew:

      brew install kubernetes-cli
      

      If you don’t have Homebrew installed, visit the Homebrew home page for instructions. Alternatively, you can manually install the binary; visit the Kubernetes documentation for instructions.

      Linux:

      1. Download the latest Kubernetes release:

        curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
        
      2. Make the downloaded file executable:

        chmod +x ./kubectl
        
      3. Move the command into your PATH:

        sudo mv ./kubectl /usr/local/bin/kubectl
        

      Windows:

      Visit the Kubernetes documentation for a link to the most recent Windows release.

      Configure your SSH Agent

      Your SSH key pair is stored in your home directory (or another location), but the k8s-alpha CLI’s Terraform implementation cannot reference your keys until they are loaded into an SSH agent. To do this, first start the ssh-agent process, which caches your private keys (including passphrase-protected ones) on behalf of other processes.

      Linux: Run the following command; if you stored your private key in another location, update the path that’s passed to ssh-add accordingly:

      eval $(ssh-agent) && ssh-add ~/.ssh/id_rsa
      

      Note

      You will need to run all of your k8s-alpha CLI commands from the terminal that you start the ssh-agent process in. If you start a new terminal, you will need to run the commands in this step again before using the k8s-alpha CLI.

      macOS: macOS has an ssh-agent process that persists across all of your terminal sessions, and it can store your private key passphrases in the operating system’s Keychain Access service.

      1. Update your ~/.ssh/config SSH configuration file. This configuration will add keys to the persistent agent and store passphrases in the OS keychain:

        ~/.ssh/config

        Host *
          AddKeysToAgent yes
          UseKeychain yes
          IdentityFile ~/.ssh/id_rsa

      Note

      Although kubectl should be used whenever possible to interact with nodes in your cluster, the key pair cached by the ssh-agent process will also let you access individual nodes via SSH as the core user.

      2. Add your key to the ssh-agent process:

        ssh-add -K ~/.ssh/id_rsa
        

      Create a Cluster

      1. To create your first cluster, run:

        linode-cli k8s-alpha create example-cluster
        
      2. Your terminal will show output related to the Terraform plan for your cluster. The output will halt with the following messages and prompt:

        Plan: 5 to add, 0 to change, 0 to destroy.
        
        Do you want to perform these actions in workspace "example-cluster"?
          Terraform will perform the actions described above.
          Only 'yes' will be accepted to approve.
        
          Enter a value:
        

        Note

        Your Terraform configurations will be stored under ~/.k8s-alpha-linode/

      3. Enter yes at the Enter a value: prompt. The Terraform plan will be applied over the next few minutes.

        Note

        You may see an error like the following:

          
        Error creating a Linode Instance: [400] Account Limit reached. Please open a support ticket.
        
        

        If this appears, then you have run into a limit on the number of resources allowed on your Linode account. If this is the case, or if your nodes do not appear in the Linode Cloud Manager as expected, contact Linode Support. This limit also applies to Block Storage Volumes and NodeBalancers, which some of your cluster app deployments may try to create.

      4. When the operation finishes, you will see options like the following:

        Apply complete! Resources: 5 added, 0 changed, 0 destroyed.
        Switched to context "example-cluster-4-kacDTg9RmZK@example-cluster-4".
        Your cluster has been created and your kubectl context updated.
        
        Try the following command:
        kubectl get pods --all-namespaces
        
        Come hang out with us in #linode on the Kubernetes Slack! http://slack.k8s.io/
        
      5. If you visit the Linode Cloud Manager, you will see your newly created cluster nodes on the Linodes page. By default, your Linodes will be created under the region and Linode plan that you have set as the default for your Linode CLI. To set new defaults for your Linode CLI, run:

        linode-cli configure
        

        The k8s-alpha CLI will conform to your CLI defaults, with the following exceptions:

        • If you set a default plan size smaller than Linode 4GB, the k8s-alpha CLI will create your master node(s) on the Linode 4GB plan, which is the minimum recommended for master nodes. It will still create your worker nodes using your default plan.

        • The k8s-alpha CLI will always create nodes running CoreOS (instead of the default distribution that you set).

      6. The k8s-alpha CLI will also update your kubectl client’s configuration (the kubeconfig file) to allow immediate access to the cluster. Review the Manage your Clusters with kubectl section for further instructions.

      Cluster Creation Options

      The following optional arguments are available:

      linode-cli k8s-alpha create example-cluster-2 --node-type g6-standard-1 --nodes 6 --master-type g6-standard-4 --region us-east --ssh-public-key $HOME/.ssh/id_rsa.pub
      
      Argument                   Description
      --node-type TYPE           The Linode Type ID for cluster worker nodes (retrieve options with linode-cli linodes types).
      --nodes COUNT              The number of Linodes to deploy as worker nodes in the cluster (default: 3).
      --master-type TYPE         The Linode Type ID for cluster master nodes (retrieve options with linode-cli linodes types).
      --region REGION            The Linode Region ID in which to deploy the cluster (retrieve options with linode-cli regions list).
      --ssh-public-key KEYPATH   The path to the public key used to access nodes during initial provisioning only. The key pair must be added to an ssh-agent (default: $HOME/.ssh/id_rsa.pub).
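For repeated use, the invocation above can be parameterized in a small wrapper script. The flag names match the table above; the variable values and the script itself are illustrative:

```shell
# Sketch: build a k8s-alpha create command from variables, so the same
# script can provision differently sized clusters. Values are examples.
cluster_name="example-cluster-2"
node_type="g6-standard-1"
nodes=6
master_type="g6-standard-4"
region="us-east"
pubkey="$HOME/.ssh/id_rsa.pub"

cmd="linode-cli k8s-alpha create $cluster_name --node-type $node_type --nodes $nodes --master-type $master_type --region $region --ssh-public-key $pubkey"

# Print the command for review before running it (e.g. with: eval "$cmd").
echo "$cmd"
```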

      Delete a Cluster

      1. To delete a cluster, run the delete command with the name of your cluster:

        linode-cli k8s-alpha delete example-cluster
        
      2. Your terminal will show output from Terraform that describes the deletion operation. The output will halt with the following messages and prompt:

        Plan: 0 to add, 0 to change, 5 to destroy.
        
        Do you really want to destroy all resources in workspace "example-cluster"?
          Terraform will destroy all your managed infrastructure, as shown above.
          There is no undo. Only 'yes' will be accepted to confirm.
        
          Enter a value:
        
      3. Enter yes at the Enter a value: prompt. The nodes in your cluster will be deleted over the next few minutes.

      4. You should also log in to the Linode Cloud Manager and confirm whether any Volumes and NodeBalancers created by your cluster app deployments still remain; delete any that do so you are not billed for them.

      5. Deleting the cluster will not remove the kubectl client configuration that the CLI inserted into your kubeconfig file. Review the Remove a Cluster’s Context section if you’d like to remove this information.

      Manage your Clusters with kubectl

      The k8s-alpha CLI will automatically configure your kubectl client to connect to your cluster. Specifically, this connection information is stored in your kubeconfig file. The path for this file is normally ~/.kube/config.

      Use the kubectl client to interact with your cluster’s Kubernetes API. This will work in the same way as with any other cluster. For example, you can get all the pods in your cluster:

      kubectl get pods --all-namespaces
      

      Review the Kubernetes documentation for more information about how to use kubectl.

      Switch between Cluster Contexts

      If you have more than one cluster set up, you can switch your kubectl client between them. To list all of your cluster contexts:

      kubectl config get-contexts
      

      An asterisk will appear before the current context:

      CURRENT   NAME                                                      CLUSTER                 AUTHINFO                            NAMESPACE
      *         example-cluster-kat7BqBBgU8@example-cluster               example-cluster         example-cluster-kat7BqBBgU8
                example-cluster-2-kacDTg9RmZK@example-cluster-2           example-cluster-2       example-cluster-2-kacDTg9RmZK
      

      To switch to another context, use the use-context subcommand and pass the value under the NAME column:

      kubectl config use-context example-cluster-2-kacDTg9RmZK@example-cluster-2
      

      All kubectl commands that you issue will now apply to the cluster you chose.

      Remove a Cluster’s Context

      When you delete a cluster with the k8s-alpha CLI, its connection information will persist in your local kubeconfig file, and it will still appear when you run kubectl config get-contexts. To remove this connection data, run the following commands:

      kubectl config delete-cluster example-cluster
      kubectl config delete-context example-cluster-kat7BqBBgU8@example-cluster
      kubectl config unset users.example-cluster-kat7BqBBgU8
      
      • For the delete-cluster subcommand, supply the value that appears under the CLUSTER column in the output from get-contexts.

      • For the delete-context subcommand, supply the value that appears under the NAME column in the output from get-contexts.

      • For the unset subcommand, supply users.<AUTHINFO>, where <AUTHINFO> is the value that appears under the AUTHINFO column in the output from get-contexts.
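Since the k8s-alpha CLI formats context names as <AUTHINFO>@<CLUSTER>, all three values can be derived from the NAME column alone. A sketch using shell parameter expansion:

```shell
# Sketch: split a k8s-alpha context name of the form <AUTHINFO>@<CLUSTER>
# into the values the three cleanup commands need.
ctx="example-cluster-kat7BqBBgU8@example-cluster"

authinfo="${ctx%@*}"   # strip the shortest @* suffix -> AUTHINFO
cluster="${ctx##*@}"   # strip the longest  *@ prefix -> CLUSTER

# Print the cleanup commands for review before running them.
echo "kubectl config delete-cluster $cluster"
echo "kubectl config delete-context $ctx"
echo "kubectl config unset users.$authinfo"
```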

      Next Steps

      Now that you have a cluster up and running, you’re ready to start deploying apps to it. Review our other Kubernetes guides for help with deploying software and managing your cluster:

      More Information

      You may wish to consult the following resources for additional information on this topic. While these are provided in the hope that they will be useful, please note that we cannot vouch for the accuracy or timeliness of externally hosted materials.


      This guide is published under a CC BY-ND 4.0 license.



      Source link

      How to Deploy Kubernetes on Linode with the k8s-alpha CLI


      Updated by Linode Written by Linode

      Caution

      This guide’s example instructions will create several billable resources on your Linode account. If you do not want to keep using the example cluster that you create, be sure to delete it when you have finished the guide.

      If you remove the resources afterward, you will only be billed for the hour(s) that the resources were present on your account. Consult the Billing and Payments guide for detailed information about how hourly billing works and for a table of plan pricing.

      What is the k8s-alpha CLI?

      The Linode k8s-alpha CLI is a plugin for the Linode CLI that offers quick, single-command deployments of Kubernetes clusters on your Linode account. When you have it installed, creating a cluster can be as simple as:

      linode-cli k8s-alpha create example-cluster
      

      The clusters that it creates are pre-configured with useful Linode integrations, like our CCM, CSI, and ExternalDNS plugins. As well, the Kubernetes metrics-server is pre-installed, so you can run kubectl top. Nodes in your clusters will also be labeled with the Linode Region and Linode Type, which can also be used by Kubernetes controllers for the purposes of scheduling pods.

      What are Linode’s CCM, CSI, and ExternalDNS plugins?

      The CCM (Cloud Controller Manager), CSI (Container Storage Interface), and ExternalDNS plugins are Kubernetes addons published by Linode. You can use them to create NodeBalancers, Block Storage Volumes, and DNS records through your Kubernetes manifests.

      The k8s-alpha CLI will create two kinds of nodes on your account:

      • Master nodes will run the components of your Kubernetes control plane, and will also run etcd.

      • Worker nodes will run your workloads.

      These nodes will all exist as billable services on your account. You can specify how many master and worker nodes are created and also your nodes’ Linode plan and the data center they are located in.

      Alternatives for Creating Clusters

      Another easy way to create clusters is with Rancher. Rancher is a web application that provides a GUI interface for cluster creation and for management of clusters. Rancher also provides easy interfaces for deploying and scaling apps on your clusters, and it has a built-in catalog of curated apps to choose from.

      To get started with Rancher, review our How to Deploy Kubernetes on Linode with Rancher 2.2 guide. Rancher is capable of importing clusters that were created outside of it, so you can still use it even if you create your clusters through the k8s-alpha CLI or some other means.

      Beginners Resources

      If you haven’t used Kubernetes before, we recommend reading through our introductory guides on the subject:

      Before You Begin

      1. You will need to have a personal access token for Linode’s API. If you don’t have one already, follow the Get an Access Token section of our API guide and create a token with read/write permissions.

      2. If you do not already have a public-private SSH key pair, you will need to generate one. Follow the Generate a Key Pair section of our Public Key Authentication guide for instructions.

        Note

        If you’re unfamiliar with the concept of public-private key pairs, the introduction to our Public Key Authentication guide explains what they are.

      Install the k8s-alpha CLI

      The k8s-alpha CLI is bundled with the Linode CLI, and using it requires the installation and configuration of a few dependencies:

      • Terraform: The k8s-alpha CLI creates clusters by defining a resource plan in Terraform and then having Terraform create those resources. If you’re interested in how Terraform works, you can review our Beginner’s Guide to Terraform, but doing so is not required to use the k8s-alpha CLI.

      • kubectl: kubectl is the client software for Kubernetes, and it is used to interact with your Kubernetes cluster’s API.

      • SSH agent: Terraform will rely on public-key authentication to connect to the Linodes that it creates, and you will need to configure your SSH agent on your computer with the keys that Terraform should use.

      Install the Linode CLI

      Follow the Install the CLI section of our CLI guide to install the Linode CLI. If you already have the CLI, upgrade it to the latest version available:

      pip install --upgrade linode-cli
      
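      Afterward, you can confirm which version of the CLI is installed by querying pip's package metadata:

      ```shell
      # Report the installed linode-cli package name and version via pip.
      pip show linode-cli
      ```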

      Install Terraform

      Follow the instructions in the Install Terraform section of our Use Terraform to Provision Linode Environments guide.

      Install kubectl

      macOS:

      Install via Homebrew:

      brew install kubernetes-cli
      

      If you don’t have Homebrew installed, visit the Homebrew home page for instructions. Alternatively, you can manually install the binary; visit the Kubernetes documentation for instructions.

      Linux:

      1. Download the latest Kubernetes release:

        curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
        
      2. Make the downloaded file executable:

        chmod +x ./kubectl
        
      3. Move the command into your PATH:

        sudo mv ./kubectl /usr/local/bin/kubectl
        

      Windows:

      Visit the Kubernetes documentation for a link to the most recent Windows release.

      Configure your SSH Agent

      Your SSH key pair is stored in your home directory (or another location), but the k8s-alpha CLI's Terraform implementation cannot use your keys until they are made available to Terraform. To make them available, you'll first start the ssh-agent process. ssh-agent caches your private keys for other processes, including keys that are passphrase-protected.

      Linux: Run the following command; if you stored your private key in another location, update the path that’s passed to ssh-add accordingly:

      eval $(ssh-agent) && ssh-add ~/.ssh/id_rsa
      

      Note

      You will need to run all of your k8s-alpha CLI commands from the terminal that you start the ssh-agent process in. If you start a new terminal, you will need to run the commands in this step again before using the k8s-alpha CLI.
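      Before running any k8s-alpha CLI commands, you can confirm that the agent is holding your key (the fingerprints shown will differ per key):

      ```shell
      # List the keys currently cached by the agent. One line is printed per
      # key, or "The agent has no identities." if nothing has been added yet.
      ssh-add -l
      ```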

      macOS: macOS has an ssh-agent process that persists across all of your terminal sessions, and it can store your private key passphrases in the operating system’s Keychain Access service.

      1. Update your ~/.ssh/config SSH configuration file. This configuration will add keys to the persistent agent and store passphrases in the OS keychain:

        ~/.ssh/config
        
        Host *
          AddKeysToAgent yes
          UseKeychain yes
          IdentityFile ~/.ssh/id_rsa
      Note

        Although you should use kubectl whenever possible to interact with your cluster's nodes, the key pair cached by the ssh-agent process will also allow you to access individual nodes via SSH as the core user.

      2. Add your key to the ssh-agent process:

        ssh-add -K ~/.ssh/id_rsa
        

      Create a Cluster

      1. To create your first cluster, run:

        linode-cli k8s-alpha create example-cluster
        
      2. Your terminal will show output related to the Terraform plan for your cluster. The output will halt with the following messages and prompt:

        Plan: 5 to add, 0 to change, 0 to destroy.
        
        Do you want to perform these actions in workspace "example-cluster"?
          Terraform will perform the actions described above.
          Only 'yes' will be accepted to approve.
        
          Enter a value:
        

        Note

        Your Terraform configurations will be stored under ~/.k8s-alpha-linode/

      3. Enter yes at the Enter a value: prompt. The Terraform plan will be applied over the next few minutes.

        Note

        You may see an error like the following:

          
        Error creating a Linode Instance: [400] Account Limit reached. Please open a support ticket.
        
        

        If this appears, then you have run into a limit on the number of resources allowed on your Linode account. If this is the case, or if your nodes do not appear in the Linode Cloud Manager as expected, contact Linode Support. This limit also applies to Block Storage Volumes and NodeBalancers, which some of your cluster app deployments may try to create.

      4. When the operation finishes, you will see options like the following:

        Apply complete! Resources: 5 added, 0 changed, 0 destroyed.
        Switched to context "example-cluster-4-kacDTg9RmZK@example-cluster-4".
        Your cluster has been created and your kubectl context updated.
        
        Try the following command:
        kubectl get pods --all-namespaces
        
        Come hang out with us in #linode on the Kubernetes Slack! http://slack.k8s.io/
        
      5. If you visit the Linode Cloud Manager, you will see your newly created cluster nodes on the Linodes page. By default, your Linodes will be created under the region and Linode plan that you have set as the default for your Linode CLI. To set new defaults for your Linode CLI, run:

        linode-cli configure
        

        The k8s-alpha CLI will conform to your CLI defaults, with the following exceptions:

        • If you set a default plan size smaller than Linode 4GB, the k8s-alpha CLI will create your master node(s) on the Linode 4GB plan, which is the minimum recommended for master nodes. It will still create your worker nodes using your default plan.

        • The k8s-alpha CLI will always create nodes running CoreOS (instead of the default distribution that you set).

      6. The k8s-alpha CLI will also update your kubectl client’s configuration (the kubeconfig file) to allow immediate access to the cluster. Review the Manage your Clusters with kubectl section for further instructions.
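      You can verify that your client was switched to the new cluster's context; the name printed should match the one shown in the creation output (the context names in this guide are examples, and yours will differ):

      ```shell
      # Print the context kubectl is currently using. After cluster creation,
      # this should be the newly created cluster's context.
      kubectl config current-context
      ```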

      Cluster Creation Options

      The following optional arguments are available:

      linode-cli k8s-alpha create example-cluster-2 --node-type g6-standard-1 --nodes 6 --master-type g6-standard-4 --region us-east --ssh-public-key $HOME/.ssh/id_rsa.pub
      
      Argument                   Description
      --node-type TYPE           The Linode Type ID for cluster worker nodes (retrieve a list by running linode-cli linodes types).
      --nodes COUNT              The number of Linodes to deploy as worker nodes in the cluster (default: 3).
      --master-type TYPE         The Linode Type ID for cluster master nodes (retrieve a list by running linode-cli linodes types).
      --region REGION            The Linode Region ID in which to deploy the cluster (retrieve a list by running linode-cli regions list).
      --ssh-public-key KEYPATH   The path to your public key file, which is used to access nodes during initial provisioning only. The key pair must be added to an ssh-agent (default: $HOME/.ssh/id_rsa.pub).

      Delete a Cluster

      1. To delete a cluster, run the delete command with the name of your cluster:

        linode-cli k8s-alpha delete example-cluster
        
      2. Your terminal will show output from Terraform that describes the deletion operation. The output will halt with the following messages and prompt:

        Plan: 0 to add, 0 to change, 5 to destroy.
        
        Do you really want to destroy all resources in workspace "example-cluster"?
          Terraform will destroy all your managed infrastructure, as shown above.
          There is no undo. Only 'yes' will be accepted to confirm.
        
          Enter a value:
        
      3. Enter yes at the Enter a value: prompt. The nodes in your cluster will be deleted over the next few minutes.

      4. You should also log in to the Linode Cloud Manager and remove any Block Storage Volumes and NodeBalancers that were created by your cluster's app deployments, as these are not removed when the cluster is deleted.

      5. Deleting the cluster will not remove the kubectl client configuration that the CLI inserted into your kubeconfig file. Review the Remove a Cluster’s Context section if you’d like to remove this information.

      Manage your Clusters with kubectl

      The k8s-alpha CLI will automatically configure your kubectl client to connect to your cluster. Specifically, this connection information is stored in your kubeconfig file. The path for this file is normally ~/.kube/config.

      Use the kubectl client to interact with your cluster’s Kubernetes API. This will work in the same way as with any other cluster. For example, you can get all the pods in your cluster:

      kubectl get pods --all-namespaces
      

      Review the Kubernetes documentation for more information about how to use kubectl.
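      Beyond listing pods, a couple of other read-only commands give a quick picture of a new cluster (these are standard kubectl subcommands, not specific to k8s-alpha clusters):

      ```shell
      # List the nodes in the cluster, including their IPs and kubelet versions.
      kubectl get nodes -o wide

      # Show the addresses of the API server and other core cluster services.
      kubectl cluster-info
      ```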

      Switch between Cluster Contexts

      If you have more than one cluster set up, you can switch your kubectl client between them. To list all of your cluster contexts:

      kubectl config get-contexts
      

      An asterisk will appear before the current context:

      CURRENT   NAME                                                      CLUSTER                 AUTHINFO                            NAMESPACE
      *         example-cluster-kat7BqBBgU8@example-cluster               example-cluster         example-cluster-kat7BqBBgU8
                example-cluster-2-kacDTg9RmZK@example-cluster-2           example-cluster-2       example-cluster-2-kacDTg9RmZK
      

      To switch to another context, use the use-context subcommand and pass the value under the NAME column:

      kubectl config use-context example-cluster-2-kacDTg9RmZK@example-cluster-2
      

      All kubectl commands that you issue will now apply to the cluster you chose.

      Remove a Cluster’s Context

      When you delete a cluster with the k8s-alpha CLI, its connection information will persist in your local kubeconfig file, and it will still appear when you run kubectl config get-contexts. To remove this connection data, run the following commands:

      kubectl config delete-cluster example-cluster
      kubectl config delete-context example-cluster-kat7BqBBgU8@example-cluster
      kubectl config unset users.example-cluster-kat7BqBBgU8
      
      • For the delete-cluster subcommand, supply the value that appears under the CLUSTER column in the output from get-contexts.

      • For the delete-context subcommand, supply the value that appears under the NAME column in the output from get-contexts.

      • For the unset subcommand, supply users.<AUTHINFO>, where <AUTHINFO> is the value that appears under the AUTHINFO column in the output from get-contexts.
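      If you clean up contexts often, the three commands above can be wrapped in a small sketch. The function name is hypothetical, and its three arguments correspond to the CLUSTER, NAME, and AUTHINFO columns from get-contexts:

      ```shell
      # Hypothetical helper: remove every kubeconfig entry for a deleted cluster.
      # Usage: remove_cluster_context <CLUSTER> <NAME> <AUTHINFO>
      remove_cluster_context() {
        kubectl config delete-cluster "$1"
        kubectl config delete-context "$2"
        kubectl config unset "users.$3"
      }

      # Example invocation, using the context from earlier in this guide.
      remove_cluster_context example-cluster \
        example-cluster-kat7BqBBgU8@example-cluster \
        example-cluster-kat7BqBBgU8
      ```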

      Next Steps

      Now that you have a cluster up and running, you’re ready to start deploying apps to it. Review our other Kubernetes guides for help with deploying software and managing your cluster:

      More Information

      You may wish to consult the following resources for additional information on this topic. While these are provided in the hope that they will be useful, please note that we cannot vouch for the accuracy or timeliness of externally hosted materials.


      This guide is published under a CC BY-ND 4.0 license.


