
      Getting Started with Kubernetes: Use kubeadm to Deploy a Cluster on Linode


      Updated by Linode

      Contributed by Linode

      Linode offers several pathways for users to easily deploy a Kubernetes cluster. If you prefer the command line, you can create a Kubernetes cluster with one command using the Linode CLI's k8s-alpha plugin and Terraform. Or, if you prefer a full-featured GUI, Linode's Rancher integration enables you to deploy and manage Kubernetes clusters with a simple web interface. The Linode Kubernetes Engine, currently under development with an early access beta version on its way this summer, allows you to spin up a Kubernetes cluster with Linode handling the management and maintenance of your control plane. These are all great options for production-ready deployments.

      Kubeadm is a cloud provider agnostic tool that automates many of the tasks required to get a cluster up and running. Users of kubeadm can run a few simple commands on individual servers to turn them into a Kubernetes cluster consisting of a master node and worker nodes. This guide will walk you through installing kubeadm and using it to deploy a Kubernetes cluster on Linode. While the kubeadm approach requires more manual steps than other Kubernetes cluster creation pathways offered by Linode, it is covered here as a way to dive deeper into the various components that make up a Kubernetes cluster and the ways in which they interact with each other to provide a scalable and reliable container orchestration mechanism.

      Note

      This guide's example instructions will result in the creation of three billable Linodes. Information on how to tear down the Linodes is provided at the end of the guide. Interacting with the Linodes via the command line provides the most opportunity for learning; however, this guide is written so that users can also benefit by reading along.

      Before You Begin

      1. Deploy three Linodes running Ubuntu 18.04 with the following system requirements:

        • One Linode to use as the master node with 4GB RAM and 2 CPU cores.
        • Two Linodes to use as worker nodes, each with 1GB RAM and 1 CPU core.
      2. Follow the Getting Started and the Securing Your Server guides for instructions on setting up your Linodes. The steps in this guide assume the use of a limited user account with sudo privileges.

      Note

      When following the Getting Started guide, make sure that each Linode is using a different hostname. Not following this guideline will leave you unable to join some or all nodes to the cluster in a later step.
      3. Disable swap memory on your Linodes. Kubernetes requires that you disable swap memory on any cluster nodes to prevent the Kubernetes scheduler (kube-scheduler) from ever sending a pod to a node that has run out of CPU/memory or reached its designated CPU/memory limit. Note that the swapoff command below only applies until the next reboot; see the example after this list for one way to keep swap disabled persistently.

        sudo swapoff -a
        

        Verify that your swap has been disabled. You should expect to see a value of 0 returned.

        cat /proc/meminfo | grep 'SwapTotal'
        

        To learn more about managing compute resources for containers, see the official Kubernetes documentation.

      4. Read the Beginner's Guide to Kubernetes to familiarize yourself with the major components and concepts of Kubernetes. The current guide assumes a working knowledge of common Kubernetes concepts and terminology.
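      As noted in the swap step above, swapoff -a only disables swap for the current boot. A minimal sketch of one common way to keep swap disabled across reboots is to comment out any swap entries in /etc/fstab; the exact entries vary by system, so review the file after editing:

        sudo cp /etc/fstab /etc/fstab.bak
        sudo sed -i '/\sswap\s/ s/^/#/' /etc/fstab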

      Build a Kubernetes Cluster

      Kubernetes Cluster Architecture

      A Kubernetes cluster consists of a master node and worker nodes. The master node hosts the control plane, which is the combination of components that maintain the desired cluster state. This cluster state is defined by manifest files and the kubectl tool. While the control plane components can run on any cluster node, it is a best practice to isolate the control plane on its own node and to run any application containers on separate worker nodes. A cluster can have a single worker node or up to 5000. Each worker node must be able to maintain running containers in a pod and be able to communicate with the master node's control plane.

      Below is the Kubernetes tooling you will need to install in order to meet the minimum requirements for a functioning Kubernetes cluster as described above, along with the nodes each component runs on.

      • kubeadm (master and worker nodes): Provides a simple way to create a Kubernetes cluster by automating the tasks required to get a cluster up and running. New Kubernetes users with access to a cloud hosting provider, like Linode, can use kubeadm to build out a playground cluster. kubeadm is also used as a foundation for more mature Kubernetes deployment tooling.
      • Container runtime (master and worker nodes): A container runtime is responsible for running the containers that make up a cluster's pods. This guide uses Docker as the container runtime.
      • kubelet (master and worker nodes): kubelet ensures that all pod containers running on a node are healthy and meet the specifications for a pod's desired behavior.
      • kubectl (master and worker nodes): A command line tool used to manage a Kubernetes cluster.
      • Control plane (master node only): The set of services that run on the master node and allow it to control the cluster. kubeadm runs the control plane services as containers on the master node. The control plane is created when you initialize kubeadm later in this guide.

      Install the Container Runtime: Docker

      Docker is the software responsible for running the pod containers on each node. You can use other container runtime software with Kubernetes, such as Containerd and CRI-O. You will need to install Docker on all three Linodes.

      These steps install Docker Community Edition (CE) using the official Ubuntu repositories. To install on another distribution, see the official installation page.

      1. Remove any older installations of Docker that may be on your system:

        sudo apt remove docker docker-engine docker.io
        
      2. Make sure you have the necessary packages to allow the use of Docker’s repository:

        sudo apt install apt-transport-https ca-certificates curl software-properties-common
        
      3. Add Docker’s GPG key:

        curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
        
      4. Verify the fingerprint of the GPG key:

        sudo apt-key fingerprint 0EBFCD88
        

        You should see output similar to the following:

          
        pub   4096R/0EBFCD88 2017-02-22
                Key fingerprint = 9DC8 5822 9FC7 DD38 854A  E2D8 8D81 803C 0EBF CD88
        uid                  Docker Release (CE deb) <docker@docker.com>
        sub   4096R/F273FCD8 2017-02-22
        
        
      5. Add the stable Docker repository:

        sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
        
      6. Update your package index and install Docker CE:

        sudo apt update
        sudo apt install docker-ce
        
      7. Add your limited Linux user account to the docker group. The $USER variable will expand to your current username; you can also replace it with your username explicitly:

        sudo usermod -aG docker $USER
        

        Note

        After entering the usermod command, you will need to close your SSH session and open a new one for this change to take effect.

      8. Check that the installation was successful by running the built-in “Hello World” program:

        sudo docker run hello-world
        
      9. Set up the Docker daemon to use systemd as the cgroup driver, instead of the default cgroupfs. This is a recommended step so that the kubelet and Docker both use the same cgroup manager, which makes it easier for Kubernetes to know which resources are available on your cluster's nodes. A quick verification example follows this list.

        sudo bash -c 'cat > /etc/docker/daemon.json <<EOF
        {
          "exec-opts": ["native.cgroupdriver=systemd"],
          "log-driver": "json-file",
          "log-opts": {
            "max-size": "100m"
          },
          "storage-driver": "overlay2"
        }
        EOF'
        
      10. Create a systemd directory for Docker:

        sudo mkdir -p /etc/systemd/system/docker.service.d
        
      11. Restart Docker:

        sudo systemctl daemon-reload
        sudo systemctl restart docker
        
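      Optionally, you can confirm that Docker has picked up the systemd cgroup driver configured in step 9. The output of the following command should include a line reading Cgroup Driver: systemd:

        sudo docker info | grep -i 'cgroup driver'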

      Install kubeadm, kubelet, and kubectl

      Complete the steps outlined in this section on all three Linodes.

      1. Update the system and install the required dependencies for installation:

        sudo apt-get update && sudo apt-get install -y apt-transport-https curl
        
      2. Add the required GPG key to your apt-sources keyring to authenticate the Kubernetes related packages you will install:

        curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
        
      3. Add Kubernetes to the package manager’s list of sources:

        sudo bash -c "cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
        deb https://apt.kubernetes.io/ kubernetes-xenial main
        EOF"
        
      4. Update apt, install kubeadm, kubelet, and kubectl, and hold the installed packages at their current versions to prevent unintended upgrades (a verification example follows this list):

        sudo apt-get update
        sudo apt-get install -y kubelet kubeadm kubectl
        sudo apt-mark hold kubelet kubeadm kubectl
        
      5. Verify that kubeadm, kubelet, and kubectl have been installed by retrieving their version information. Each command should return version information for the corresponding package.

        kubeadm version
        kubelet --version
        kubectl version
        
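      As mentioned above, you can also verify that the packages are held at their current versions. The output of the following command should list kubeadm, kubectl, and kubelet:

        sudo apt-mark showhold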

      Set up the Kubernetes Control Plane

      After installing the Kubernetes related tooling on all your Linodes, you are ready to set up the Kubernetes control plane on the master node. The control plane is responsible for allocating resources to your cluster, maintaining the health of your cluster, and ensuring that it meets the minimum requirements you designate for the cluster.

      The primary components of the control plane are the kube-apiserver, kube-controller-manager, kube-scheduler, and etcd. kubeadm provides a way to easily initialize the Kubernetes master node with all the necessary control plane components. For more information on each control plane component, see the Beginner's Guide to Kubernetes.

      In addition to the baseline control plane components, there are several add-ons that can be installed on the master node to access additional cluster features. You will need to install a networking and network policy provider add-on that implements Kubernetes' network model on the cluster's pod network.

      This guide will use Calico as the pod network add-on. Calico is a secure, open source L3 networking and network policy provider for containers. There are several other network and network policy providers to choose from. To view a full list of providers, refer to the official Kubernetes documentation.

      Note

      kubeadm only supports Container Network Interface (CNI) based networks. CNI consists of a specification and libraries for writing plugins to configure network interfaces in Linux containers.

      1. Initialize kubeadm on the master node. This command runs checks against the node to ensure it contains all required Kubernetes dependencies; if the checks pass, it then installs the control plane components.

        When issuing this command, it is necessary to set the pod network range that Calico will use to allow your pods to communicate with each other. It is recommended to use the private IP address range 10.2.0.0/16.

        Note

        The pod network IP range should not overlap with the service IP range. The default service IP address range is 10.96.0.0/12. You can provide an alternative service IP address range using the --service-cidr=10.97.0.0/12 option when initializing kubeadm. Replace 10.97.0.0/12 with the desired service IP range.

        For a full list of available kubeadm initialization options, see the official Kubernetes documentation.

        sudo kubeadm init --pod-network-cidr=10.2.0.0/16
        

        You should see a similar output:

          
        Your Kubernetes control-plane has initialized successfully!
        
        To start using your cluster, you need to run the following as a regular user:
        
          mkdir -p $HOME/.kube
          sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
          sudo chown $(id -u):$(id -g) $HOME/.kube/config
        
        You should now deploy a pod network to the cluster.
        Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
          https://kubernetes.io/docs/concepts/cluster-administration/addons/
        
        Then you can join any number of worker nodes by running the following on each as root:
        
        kubeadm join 192.0.2.0:6443 --token udb8fn.nih6n1f1aijmbnx5 \
            --discovery-token-ca-cert-hash sha256:b7c01e83d63808a4a14d2813d28c127d3a1c4e1b6fc6ba605fe4d2789d654f26
              
        

        The kubeadm join command will be used in the Join a Worker Node to the Cluster section of this guide to bootstrap the worker nodes to the Kubernetes cluster. This command should be kept handy for later use. Below is a description of the required options you will need to pass in with the kubeadm join command:

        • The master node’s IP address and the Kubernetes API server’s port number. In the example output, this is 192.0.2.0:6443. The Kubernetes API server’s port number is 6443 by default on all Kubernetes installations.
        • A bootstrap token. The bootstrap token has a 24-hour TTL (time to live). A new bootstrap token can be generated if your current token expires.
        • A CA key hash. This is used to verify the authenticity of the data retrieved from the Kubernetes API server during the bootstrap process.
      2. Copy the admin.conf configuration file to your limited user account. This file allows you to communicate with your cluster via kubectl and provides superuser privileges over the cluster. It contains a description of the cluster, users, and contexts. Copying the admin.conf to your limited user account will provide you with administrative privileges over your cluster.

        mkdir -p $HOME/.kube
        sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
        sudo chown $(id -u):$(id -g) $HOME/.kube/config
        
      3. Install the necessary Calico manifests to your master node and apply them using kubectl. The first file, rbac-kdd.yaml, works with Kubernetes’ role-based access control (RBAC) to provide Calico components access to necessary parts of the Kubernetes API. The second file, calico.yaml, configures a self-hosted Calico installation that uses the Kubernetes API directly as the datastore (instead of etcd).

        kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/rbac-kdd.yaml
        kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml
        
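        It can take a minute or two for the Calico pods and the remaining control plane pods to report a Running status. One way to watch their progress is with kubectl's --watch flag (press Ctrl-C to stop watching):

        kubectl get pods --all-namespaces --watch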

      Inspect the Master Node with Kubectl

      After completing the previous section, your Kubernetes master node is ready with all the necessary components to manage a cluster. To gain a better understanding of all the parts that make up the master’s control plane, this section will walk you through inspecting your master node. If you have not yet reviewed the Beginner’s Guide to Kubernetes, it will be helpful to do so prior to continuing with this section as it relies on the understanding of basic Kubernetes concepts.

      1. View the current state of all nodes in your cluster. At this stage, the only node you should expect to see is the master node, since worker nodes have yet to be bootstrapped. A STATUS of Ready indicates that the master node contains all necessary components, including the pod network add-on, to start managing the cluster.

        kubectl get nodes
        

        Your output should resemble the following:

          
        NAME        STATUS     ROLES     AGE   VERSION
        kube-master   Ready     master      1h    v1.14.1
            
        
      2. Inspect the available namespaces in your cluster.

        kubectl get namespaces
        

        Your output should resemble the following:

          
        NAME              STATUS   AGE
        default           Active   23h
        kube-node-lease   Active   23h
        kube-public       Active   23h
        kube-system       Active   23h
            
        

        Below is an overview of each namespace installed by default on the master node by kubeadm:

        • default: The default namespace contains objects with no other assigned namespace. By default, a Kubernetes cluster will instantiate a default namespace when provisioning the cluster to hold the default set of Pods, Services, and Deployments used by the cluster.
        • kube-system: The namespace for objects created by the Kubernetes system. This includes all resources used by the master node.
        • kube-public: This namespace is created automatically and is readable by all users. It contains information, like certificate authority data (CA), that helps kubeadm join and authenticate worker nodes.
        • kube-node-lease: The kube-node-lease namespace contains lease objects that are used by kubelet to determine node health. kubelet creates and periodically renews a Lease on a node. The node lifecycle controller treats this lease as a health signal. kube-node-lease was released to beta in Kubernetes 1.14.
      3. View all resources available in the kube-system namespace. The kube-system namespace contains the widest range of resources, since it houses all control plane resources. Replace kube-system with another namespace to view its corresponding resources.

        kubectl get all -n kube-system
        
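        For a more detailed view of a single node, including its capacity, conditions, and the pods currently scheduled on it, you can also describe the node. This example assumes your master node is named kube-master, as in the earlier output:

        kubectl describe node kube-master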

      Join a Worker Node to the Cluster

      Now that your Kubernetes master node is set up, you can join worker nodes to your cluster. In order for a worker node to join a cluster, it must trust the cluster's control plane, and the control plane must trust the worker node. This trust is managed via a shared bootstrap token and a certificate authority (CA) key hash. kubeadm handles the exchange between the control plane and the worker node. At a high level, the worker node bootstrap process is as follows:

      1. kubeadm retrieves information about the cluster from the Kubernetes API server. The bootstrap token and CA key hash are used to ensure the information originates from a trusted source.

      2. kubelet can take over and begin the bootstrap process, since it has the necessary cluster information retrieved in the previous step. The bootstrap token is used to gain access to the Kubernetes API server and submit a certificate signing request (CSR), which is then signed by the control plane.

      3. The worker node’s kubelet is now able to connect to the Kubernetes API server using the node’s established identity.

      Before continuing, you will need to make sure that you know your Kubernetes API server’s IP address, that you have a bootstrap token, and a CA key hash. This information was provided when kubeadm was initialized on the master node in the Set up the Kubernetes Control Plane section of this guide. If you no longer have this information, you can regenerate the necessary information from the master node.


      Regenerate a Bootstrap Token

      These commands should be issued from your master node.

      1. Generate a new bootstrap token and display the kubeadm join command with the necessary options to join a worker node to the master node’s control plane:

        kubeadm token create --print-join-command
        
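      The command above prints a complete join command, including a newly generated token and the current CA key hash. If you only need to recompute the CA key hash, for example to verify it, one approach documented by Kubernetes is to derive it from the cluster's CA certificate on the master node:

        openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | \
        openssl dgst -sha256 -hex | sed 's/^.* //'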

      Follow the steps below on each node you would like to bootstrap to the cluster as a worker node.

      1. SSH into the Linode that will be used as a worker node in the Kubernetes cluster.

        ssh username@192.0.2.1
        
      2. Join the node to your cluster using kubeadm. Ensure you replace 192.0.2.0:6443 with the IP address for your master node along with its Kubernetes API server’s port number, udb8fn.nih6n1f1aijmbnx5 with your bootstrap token, and sha256:b7c01e83d63808a4a14d2813d28c127d3a1c4e1b6fc6ba605fe4d2789d654f26 with your CA key hash. The bootstrap process will take a few moments.

        sudo kubeadm join 192.0.2.0:6443 --token udb8fn.nih6n1f1aijmbnx5 \
        --discovery-token-ca-cert-hash sha256:b7c01e83d63808a4a14d2813d28c127d3a1c4e1b6fc6ba605fe4d2789d654f26
        

        When the bootstrap process has completed, you should see a similar output:

          
          This node has joined the cluster:
        * Certificate signing request was sent to apiserver and a response was received.
        * The Kubelet was informed of the new secure connection details.
        
        Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
              
        
      3. Repeat the steps outlined above on the second worker node to bootstrap it to the cluster.

      4. SSH into the master node and verify the worker nodes have joined the cluster:

         kubectl get nodes
        

        You should see a similar output.

          
        NAME          STATUS   ROLES    AGE     VERSION
        kube-master   Ready    master   1d22h   v1.14.1
        kube-node-1   Ready    <none>   1d22h   v1.14.1
        kube-node-2   Ready    <none>   1d22h   v1.14.1
              
        

      Next Steps

      Now that you have a Kubernetes cluster up and running, you can begin experimenting with the various ways to configure pods, group resources, and deploy services that are exposed to the public internet. To help you get started with this, move on to follow along with the Deploy a Static Site on Linode using Kubernetes guide.

      Tear Down Your Cluster

      If you are done experimenting with your Kubernetes Cluster, be sure to remove the Linodes you have running in order to avoid being further billed for them. See the Removing Services section of the Billing and Payments guide.


      Getting Started with Pulumi


      Updated by Linode

      Written by Linode

      What is Pulumi?

      Pulumi is a development tool that allows you to write computer programs that deploy cloud resources, a practice referred to as infrastructure as code (IaC). Pulumi integrates with multiple cloud platforms, and Pulumi programs can be authored in a number of common programming languages.

      With Pulumi’s Linode integration, you can manage your Linode resources as you would with our API or CLI, but in a language you may already be familiar with. This guide will present examples written in JavaScript, but Pulumi is also compatible with Go, Python, and TypeScript.

      Pulumi also comes with a CLI interface for running the cloud infrastructure programs that you write. Once you’ve written a program, you can create your cloud resources with a single command:

      pulumi up
      

      In this guide you will learn how to:

      • Install Pulumi and generate a Pulumi access token
      • Create a Linode using a Pulumi program
      • Create and configure a NodeBalancer with two backend Linodes

      Before You Begin

      1. If you haven’t yet, create a Linode API token.

      2. Create a free Pulumi account.

      3. Create a new Debian 9 Linode. Follow our Getting Started guide to deploy the Linode, and then follow the Securing Your Server guide. Be sure to create a limited Linux user with sudo privileges on your server. All commands in this guide are to be run from a sudo user.

      4. Install Pulumi on your Linode using their installation script:

        curl -fsSL https://get.pulumi.com | sh
        
      5. To start using the Pulumi CLI:

        • Restart your shell session, or

        • Add /home/username/.pulumi/bin to your $PATH variable in your current session. Replace username with the name of your limited Linux user:

          PATH=$PATH:/home/username/.pulumi/bin
          
      6. Install Node.js and npm:

        sudo apt-get install curl software-properties-common
        curl -sL https://deb.nodesource.com/setup_12.x | sudo bash -
        sudo apt-get install -y nodejs
        
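      Before moving on, you can optionally confirm that the Pulumi CLI is available in your shell by checking its version:

        pulumi version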

      Generate a Pulumi Access Token

      Once you have a Pulumi account, you will need to create an access token to use later.

      Why do I need a Pulumi access token?

      When Pulumi interprets the infrastructure programs that you write, it determines what cloud resources it needs to create in order to satisfy your program. Every time you run your program, Pulumi stores the state of these resources in a persistent backend. In subsequent updates to your infrastructure, Pulumi will compare your program with the recorded state so that it can determine which changes need to be made.

      By default, Pulumi securely stores this state information on a web backend hosted at https://app.pulumi.com. This service is free to start and offers paid tiers for teams and enterprises.

      It is possible to opt-out of using the default web backend and use a filesystem-based backend instead. Review Pulumi’s documentation for instructions.
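      For example, at the time of writing you can switch to a local filesystem backend with the Pulumi CLI's login command; treat this as a sketch and check Pulumi's documentation for the current options:

      pulumi login --local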

      1. Log into your Pulumi account. After you’ve logged in, click on the avatar graphic to the top right of the Pulumi dashboard, then click on the Settings option in the dropdown menu that appears:

        Location of Pulumi Settings option

      2. Select the Access Tokens item in the sidebar to the left of the page that appears:

        Location of Pulumi Access Token page

      3. Click on the New Access Token button towards the top right of the following page and follow the prompts to create your new token. Make sure you save this in a secure location, similar to your Linode API token.

      Create a Linode

      Set up your Pulumi Project

      Now that you have everything you need to begin using Pulumi, you can create a new Pulumi project.

      Note

      A Pulumi project is the folder structure which contains your Pulumi programs. Specifically, a project is any folder which contains a Pulumi.yaml metadata file.
      1. Pulumi requires an empty directory for each new project, so first you’ll need to create one and make it your working directory:

        cd ~/ && mkdir pulumi && cd pulumi
        
      2. Now that you’re inside of your new empty working directory, create a new project:

        pulumi new
        
      3. From here, you’ll see several prompts:

        • Enter your Pulumi access token if prompted. If you’ve already entered it at any point following the installation of Pulumi, you will not be prompted again and can skip this step.
        • Use your arrow keys to highlight the linode-javascript option.
        • Enter a project name of your choice, or leave blank to use the default option.

        • Enter a project description, or leave blank to use the default option.

        • Enter a stack name of your choice, or leave blank to use the default option.

          What’s a stack?

          Multiple instances of your Pulumi programs can be created. For example, you may want to have separate instances for the development, staging, and production environments of your service. Or, you may create multiple instances of your service if you're offering it to different business clients. In Pulumi, these instances are referred to as stacks (see the CLI sketch after this list).
        • Enter your Linode API token.

      4. Once the installation is successful, you will see a Your new project is ready to go! message. The pulumi new command scaffolds a collection of default configuration files in your project’s directory. The default configuration will give you everything you need to get started. Enter the ls command to ensure that the files are present:

        ls
        
          
        index.js      package.json	 Pulumi.pulumi.yaml
        node_modules  package-lock.json  Pulumi.yaml
        
        

        The contents of these files were defined according to our responses to each prompt after entering pulumi new. In particular:

        • index.js contains the JavaScript code that Pulumi will run.
        • package.json defines the dependencies we can use and the file path Pulumi will read our code from.
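        As mentioned in the stack note above, a project can hold multiple stacks. Below is a brief sketch of managing stacks from the Pulumi CLI, using hypothetical stack names:

        pulumi stack init staging    # create a new stack named "staging"
        pulumi stack ls              # list all stacks in the project
        pulumi stack select dev      # switch the active stack back to "dev"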

      Inspect the Default Configuration

      Let’s take a look at the contents of our index.js file:

      index.js
      "use strict";
      const pulumi = require("@pulumi/pulumi");
      const linode = require("@pulumi/linode");
      
      // Create a Linode resource (Linode Instance)
      const instance = new linode.Instance("my-instance", {
          type: "g6-nanode-1",
          region: "us-east",
          image: "linode/ubuntu18.04",
      });
      
      // Export the Instance label of the instance
      exports.instanceLabel = instance.label;
      

      The file requires two JavaScript modules unique to Pulumi: Pulumi’s SDK, and Pulumi’s Linode integration. Pulumi’s API Reference Documentation serves as a reference for the JavaScript you’ll see here. It also includes a library of several additional options that enable you to create configurations more specific to your use case.

      In this case, your file is only creating a single Nanode instance in the Newark data center running Ubuntu 18.04.

      Create and Destroy Resources

      • Use Pulumi’s preview command to test your code and make sure it’s successfully able to create resources under your account.

        pulumi preview
        

        The output of the command will list the operations Pulumi will perform once you deploy your program:

        Previewing update (dev):
        
            Type                      Name                   Plan
        +   pulumi:pulumi:Stack       my-pulumi-project-dev  create
        +   └─ linode:index:Instance  my-instance            create
        
        Resources:
            + 2 to create
        
      • Use Pulumi’s up command to deploy your code to your Linode account:

        pulumi up
        

        Note

        This will create a new billable resource on your account.

        From here, you will be prompted to confirm the resource creation. Use your arrow keys to choose the yes option, hit enter, and you will see your resources being created. Once the process is completed, the Linode Label of your new Linode will be displayed. If you check your account manually through the Cloud Manager, you can confirm that this Linode has been successfully created.

      • Since this Linode was only created as a test, you can safely delete it by entering Pulumi’s destroy command:

        pulumi destroy
        

        Follow the prompts, and you’ll be able to see the resources being removed, similar to how we could see them being created.

        Note

        Many Pulumi commands will be logged on your Pulumi account. You can see this under the Activity tab of your project’s stack in Pulumi’s Application Page.

      Create and Configure a NodeBalancer

      To better demonstrate the power of Pulumi code, we’ll create a new index.js file. This will define everything we need to create a functioning NodeBalancer which is pre-configured with two backend Linodes running NGINX.

      1. Replace the contents of your index.js file with the following:

        index.js
        const pulumi = require("@pulumi/pulumi");
        const linode = require("@pulumi/linode");
        
        // Create two new Nanodes using a StackScript to configure them internally.
        // The StackScript referenced will install and enable NGINX.
        
        // "linode1" (the first argument passed to the Linode instance constructor function) is the Pulumi-allocated Unique Resource Name (URN) for this resource
        const linode1 = new linode.Instance("linode1", {
                // "PulumiNode1" is the Linode's label that appears in the Cloud Manager. Linode labels must be unique on your Linode account
                label: "PulumiNode1",
                region: "us-east",
                image: "linode/debian9",
                privateIp: true,
                stackscriptData: {
                        hostname: "PulumiNode1",
                    },
                stackscriptId: 526246,
                type:"g6-nanode-1",
        });
        
        const linode2 = new linode.Instance("linode2", {
                label: "PulumiNode2",
                region: "us-east",
                image: "linode/debian9",
                privateIp: true,
                stackscriptData: {
                    hostname: "PulumiNode2",
                    },
                stackscriptId: 526246,
                type:"g6-nanode-1",
        });
        
        // Create and configure your NodeBalancer
        
        const nodeBalancer = new linode.NodeBalancer("nodeBalancer", {
                clientConnThrottle: 20,
                label: "PulumiNodeBalancer",
                region: "us-east",
        });
        
        const nodeBalancerConfig = new linode.NodeBalancerConfig("nodeBalancerConfig", {
                algorithm: "source",
                check: "http",
                checkAttempts: 3,
                checkTimeout: 30,
                checkInterval: 40,
                checkPath: "/",
                nodebalancerId: nodeBalancer.id,
                port: 8088,
                protocol: "http",
                stickiness: "http_cookie",
        });
        
        // Assign your Linodes to the NodeBalancer
        
        const balancerNode1 = new linode.NodeBalancerNode("balancerNode1", {
                address: pulumi.concat(linode1.privateIpAddress, ":80"),
                configId: nodeBalancerConfig.id,
                label: "PulumiBalancerNode1",
                nodebalancerId: nodeBalancer.id,
                weight: 50,
        });
        
        const balancerNode2 = new linode.NodeBalancerNode("balancerNode2", {
                address: pulumi.concat(linode2.privateIpAddress, ":80"),
                configId: nodeBalancerConfig.id,
                label: "PulumiBalancerNode2",
                nodebalancerId: nodeBalancer.id,
                weight: 50,
        });
        
        //Output your NodeBalancer's Public IPV4 address and the port we configured to access it
        exports.nodeBalancerIP = nodeBalancer.ipv4;
        exports.nodeBalancerPort = nodeBalancerConfig.port;
        

        Note

        In our index.js file we’ve created and configured two Linodes using an existing StackScript which installs NGINX. Pulumi’s Linode integration allows for the creation of entirely new StackScripts directly in code, which can help you to automate your deployments even further.

        If you’re interested in seeing how this StackScript works, you can view it here.

      2. Now that you’ve successfully prepared your JavaScript code, let’s bring up our configuration:

        pulumi up
        

        As before, select yes when prompted and wait for a few moments as your resources are created, configured, and brought online.

      3. Once the process is completed, you’ll see your NodeBalancer’s IP address and the port you configured earlier displayed as part of the output:

        Outputs:
        + nodeBalancerIP  : "192.0.2.3"
        + nodeBalancerPort: 8088
        

        Enter this IP address and port into your web browser (or request it with curl, as shown below), and you will see the Hello World-style page that the StackScript configured:

        curl http://192.0.2.3:8088/
        
          
        Hello from PulumiNode1
        
        

        Note

        If you do not see this page right away, you should wait a few additional moments. NodeBalancers can sometimes require a little extra time to fully apply a new configuration.

      4. Once you’re finished with your NodeBalancer, you can remove and delete everything you added by entering pulumi destroy as before.
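      Note that at any point before destroying the stack, you can re-print a stack's exported outputs with the Pulumi CLI. For example, using the output names exported in the index.js above:

        pulumi stack output nodeBalancerIP
        pulumi stack output nodeBalancerPort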

      Next Steps

      Pulumi is a powerful tool with a vast number of possible configurations that can be applied. From here you can:

      • Look at Pulumi’s examples for more ideas regarding the things you can do with Pulumi.

      • Try using Pulumi with different languages like Python or TypeScript.

      • Import Node.js tools like Express for even more elasticity with your code.

      • Use Pulumi for Serverless Computing


      Getting Started with Linode GPU Instances


      Updated by Linode

      Written by Linode

      This guide will help you get your Linode GPU Instance up and running on a number of popular distributions. To prepare your Linode, you will need to install NVIDIA’s proprietary drivers using NVIDIA’s CUDA Toolkit.

      When using distributions that are not fully supported by CUDA, like Debian 9, you can install the NVIDIA driver without the CUDA toolkit. To only install the NVIDIA driver, complete the Before You Begin section and then move on to the Manual Install section of this guide.

      For details on the CUDA Toolkit’s full feature set, see the official documentation.


      Why do NVIDIA’s drivers need to be installed?

      Linode has chosen not to bundle NVIDIA’s proprietary closed-source drivers with its standard Linux distribution images. While some operating systems are packaged with the open source Nouveau driver, the NVIDIA proprietary driver will provide optimal performance for your GPU-accelerated applications.

      Before You Begin

      1. Follow our Getting Started and Securing Your Server guides for instructions on setting up your Linodes.

      2. Make sure that your GPU is currently available on your deployed Linode:

        lspci -vnn | grep NVIDIA
        

        You should see a similar output confirming that your Linode is currently running an NVIDIA GPU. The example output was generated on Ubuntu 18.04. Your output may vary depending on your distribution.

          
        00:03.0 VGA compatible controller [0300]: NVIDIA Corporation TU102GL [Quadro RTX 6000/8000] [10de:1e30] (rev a1) (prog-if 00 [VGA controller])
            Subsystem: NVIDIA Corporation Quadro RTX 6000 [10de:12ba]
        
        

        Note

        Depending on your distribution, you may need to install lspci manually first. On current CentOS and Fedora systems, you can install this utility with the following command:

        sudo yum install pciutils
        
      3. Move on to the next section to install the dependencies that NVIDIA’s drivers rely on.

      Install NVIDIA Driver Dependencies

      Prior to installing the driver, you should install the required dependencies. Listed below are commands for installing these packages on many popular distributions.

      1. Find your Linode’s distribution from the list below and install the NVIDIA driver’s dependencies:

        Ubuntu 18.04

        sudo apt-get install build-essential
        

        Debian 9

        sudo apt-get install build-essential
        sudo apt-get install linux-headers-`uname -r`
        

        CentOS 7

        sudo yum install kernel-devel-$(uname -r) kernel-headers-$(uname -r)
        sudo yum install wget
        sudo yum -y install gcc
        

        OpenSUSE

        zypper install gcc
        zypper install kernel-source
        
      2. After installing the dependencies, reboot your Linode from the Cloud Manager. Rebooting will ensure that any newly installed kernel headers are available for use.

      NVIDIA Driver Installation

      After installing the required dependencies for your Linux distribution, you are ready to install the NVIDIA driver. If you are using Ubuntu 18.04, CentOS 7, or OpenSUSE, proceed to the Install with CUDA section. If you are using Debian 9, proceed to the Install Manually section.

      Install with CUDA

      In this section, you will install your GPU driver using NVIDIA’s CUDA Toolkit.
      For a full list of native Linux distribution support in CUDA, see the CUDA toolkit documentation.

      1. Visit the CUDA Downloads Page and navigate to the Select Target Platform section.

      2. Provide information about your target platform by following the prompts and selecting the appropriate options. Once complete, you will gain access to the correct download link for the CUDA Toolkit installer. Use the table below for guidance on how to respond to each prompt:

        Prompt             Selection
        Operating System   Linux
        Architecture       x86_64
        Distribution       Your Linode's distribution
        Version            Your distribution's version
        Installer Type     runfile (local)

        A completed set of selections will resemble the example:

        CUDA Downloads Page - Select Target Platform

      3. A Download Installer section will appear below the Select Target Platform section. The green Download button in this section will link to the installer file. Copy this link to your computer’s clipboard:

        Copy Download Link

      4. On your Linode, enter the wget command and paste in the download link you copied. This example shows the syntax for the command, but you should make sure to use the download link appropriate for your Linode:

        wget https://developer.nvidia.com/compute/cuda/10.1/Prod/local_installers/cuda_10.1.168_418.67_linux.run
        
      5. After wget completes, run your version of the installer script to begin the installation process:

        sudo sh cuda_*_linux.run
        

        Note

        The installer will take a few moments to run before generating any output.

      6. Read and accept the License Agreement.

      7. Choose to install the CUDA Toolkit in its entirety or partially. To use your GPU, you only need to install the driver. Optionally, you can choose to install the full toolkit to gain access to a set of tools that will empower you to create GPU-accelerated applications.

        To only install the driver, uncheck all options directly below the Driver option. This will result in your screen resembling the following:

        Cuda Installer

      8. Once you have checked your desired options, select Install to begin the installation. A full install will take several minutes to complete.

        Note

        Installation on CentOS and Fedora will fail following this step, because the installer requires a reboot to fully remove the default Nouveau driver. If you are running either of these operating systems, reboot the Linode, run the installer again, and your installation will be successful.

      9. When the installation has completed, run the nvidia-smi command to make sure that you’re currently using your NVIDIA GPU device with its associated driver:

        nvidia-smi
        

        You should see a similar output:

        +-----------------------------------------------------------------------------+
        | NVIDIA-SMI 418.67       Driver Version: 418.67       CUDA Version: 10.1     |
        |-------------------------------+----------------------+----------------------+
        | GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
        | Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
        |===============================+======================+======================|
        |   0  Quadro RTX 6000     Off  | 00000000:00:03.0 Off |                  Off |
        | 34%   57C    P0    72W / 260W |      0MiB / 24190MiB |      0%      Default |
        +-------------------------------+----------------------+----------------------+
        
        +-----------------------------------------------------------------------------+
        | Processes:                                                       GPU Memory |
        |  GPU       PID   Type   Process name                             Usage      |
        |=============================================================================|
        |  No running processes found                                                 |
        +-----------------------------------------------------------------------------+
        

        In the output, you can see that the driver is installed and functioning correctly, the version of CUDA attributed to it, and other useful statistics.
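        If you want more detail than the default summary, nvidia-smi can also print a full query of the GPU's state. A couple of optional examples:

        nvidia-smi -q             # full query of the GPU's state
        nvidia-smi -q -d MEMORY   # limit the query to memory information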

      Install Manually

      This section will walk you through the process of downloading and installing the latest NVIDIA driver on Debian 9. This process can also be completed on another distribution of your choice, if needed:

      1. Visit NVIDIA’s Driver Downloads Page.

      2. Make sure that the options from the drop-down menus reflect the following values:

        Prompt             Selection
        Product Type       Quadro
        Product Series     Quadro RTX Series
        Product            Quadro RTX 8000
        Operating System   Linux 64-bit
        Download Type      Linux Long Lived Driver
        Language           English (US)

        The form will look as follows when completed:

        NVIDIA Drivers Download Form

      3. Click the Search button, and a page will appear that shows information about the driver. Click the green Download button on this page. The file will not download to your computer; instead, you will be taken to another download confirmation page.

      4. Copy the link for the driver installer script from the green Download button on this page:

        Copy Download Link

      5. On your Linode, enter the wget command and paste in the download link you copied. This example shows the syntax for the command, but you should make sure to use the download link you copied from NVIDIA’s site:

        wget http://us.download.nvidia.com/XFree86/Linux-x86_64/430.26/NVIDIA-Linux-x86_64-430.26.run
        
      6. After wget completes, run your version of the installer script on your Linode. Follow the prompts as necessary:

        sudo bash NVIDIA-Linux-x86_64-*.run
        
      7. Select OK and Yes for all prompts as they appear.

      8. Once the installer has completed, use nvidia-smi to make sure that you’re currently using your NVIDIA GPU with its associated driver:

        nvidia-smi
        

        You should see a similar output:

        +-----------------------------------------------------------------------------+
        | NVIDIA-SMI 430.26       Driver Version: 430.26       CUDA Version: 10.2     |
        |-------------------------------+----------------------+----------------------+
        | GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
        | Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
        |===============================+======================+======================|
        |   0  Quadro RTX 6000     Off  | 00000000:00:03.0 Off |                  Off |
        | 34%   59C    P0     1W / 260W |      0MiB / 24220MiB |      6%      Default |
        +-------------------------------+----------------------+----------------------+
        
        +-----------------------------------------------------------------------------+
        | Processes:                                                       GPU Memory |
        |  GPU       PID   Type   Process name                             Usage      |
        |=============================================================================|
        |  No running processes found                                                 |
        +-----------------------------------------------------------------------------+
        
