
      How to use the Linode Ansible Module to Deploy Linodes


      Updated by Linode. Contributed by Linode.

      Ansible is a popular open-source tool that can be used to automate common IT tasks, like cloud provisioning and configuration management. With Ansible’s 2.8 release, you can deploy Linode instances using our latest API (v4). Ansible’s linode_v4 module adds the functionality needed to deploy and manage Linodes via the command line or in your Ansible Playbooks, while the dynamic inventory plugin for Linode helps you source your Ansible inventory directly from the Linode API (v4).

      In this guide you will learn how to:

      • Deploy and manage Linodes using Ansible and the linode_v4 module.
      • Create an Ansible inventory for your Linode infrastructure using the dynamic inventory plugin for Linode.

      Caution

      This guide’s example instructions will create a 1 GB Nanode billable resource on your Linode account. If you do not want to keep using the Nanode that you create, be sure to delete the resource when you have finished the guide.

      If you remove the resource afterward, you will only be billed for the hour(s) that the resources were present on your account.

      Before You Begin

      To follow this guide, you will need Ansible 2.8 or later installed on your local computer and a Linode API v4 access token with permission to read and write Linodes.

      Configure Ansible

      The Ansible configuration file is used to adjust Ansible’s default system settings. Ansible will search for a configuration file in the directories listed below, in the order specified, and apply the first configuration values it finds:

      • ANSIBLE_CONFIG environment variable pointing to a configuration file location. If passed, it will override the default Ansible configuration file.
      • ansible.cfg file in the current directory
      • ~/.ansible.cfg in the home directory
      • /etc/ansible/ansible.cfg
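
      For example, to point Ansible at a specific configuration file for your current shell session (the path below is just an illustration), export the environment variable and then confirm which file is in use; the config file line of the ansible --version output reports the active configuration:

        export ANSIBLE_CONFIG=~/development/ansible.cfg
        ansible --version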

      In this section, you will create an Ansible configuration file and add options to disable host key checking, specify a Vault password file, and enable the Linode inventory plugin. The Ansible configuration file will be located in a development directory that you create; however, it could exist in any of the locations listed above. See Ansible’s official documentation for a full list of available configuration settings.

      Caution

      When storing your Ansible configuration file, ensure that its corresponding directory does not have world-writable permissions. This could pose a security risk that allows malicious users to use Ansible to exploit your local system and remote infrastructure. At minimum, the directory should restrict access to particular users and groups. For example, you can create an ansible group, only add privileged users to the ansible group, and update the Ansible configuration file’s directory to have 764 permissions. See the Linux Users and Groups guide for more information on permissions.
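
      A minimal sketch of that lockdown, assuming an ansible group and a user named exampleuser (both names are illustrative):

        sudo groupadd ansible
        sudo usermod -aG ansible exampleuser
        sudo chgrp ansible ~/development
        sudo chmod 764 ~/development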

      1. In your home directory, create a directory to hold all of your Ansible-related files and move into the directory:

        mkdir development && cd development
        
      2. Create the Ansible configuration file, ansible.cfg, in the development directory and add the host_key_checking, vault_password_file, and enable_plugins options.

        ~/development/ansible.cfg
        
        [defaults]
        host_key_checking = False
        vault_password_file = ./vault-pass
        [inventory]
        enable_plugins = linode
              
        • host_key_checking = False will allow Ansible to SSH into hosts without having to accept the remote server’s host key. This will disable host key checking globally.
        • vault_password_file = ./vault-pass is used to specify a Vault password file to use whenever Ansible Vault requires a password. Ansible Vault offers several options for password management. To learn more about password management, read Ansible’s Providing Vault Passwords documentation.
        • enable_plugins = linode enables the Linode dynamic inventory plugin.

      Create a Linode Instance

      You can now begin creating Linode instances using Ansible. In this section, you will create an Ansible Playbook that can deploy Linodes.

      Create your Linode Playbook

      1. Ensure you are in the development directory that you created in the Configure Ansible section:

        cd ~/development
        
      2. Using your preferred text editor, create the Create Linode Playbook file and include the following values:

        ~/development/linode_create.yml
        
        - name: Create Linode
          hosts: localhost
          vars_files:
              - ./group_vars/example_group/vars
          tasks:
          - name: Create a new Linode.
            linode_v4:
              label: "{{ label }}{{ 100 |random }}"
              access_token: "{{ token }}"
              type: g6-nanode-1
              region: us-east
              image: linode/debian9
              root_pass: "{{ password }}"
              authorized_keys: "{{ ssh_keys }}"
              group: example_group
              tags: example_group
              state: present
            register: my_linode
            
        • The linode_create.yml Playbook contains the Create Linode play, which is executed on hosts: localhost. This means the Ansible Playbook runs on the local system and uses it as a vehicle to deploy remote Linode instances.
        • The vars_files key provides the location of a local file that contains variable values to populate in the play. The value of any variable defined in the vars file is substituted into the matching Jinja template variable used in the Playbook. Jinja template variables are any variables between curly brackets, like: {{ my_var }}.
        • The Create a new Linode task calls the linode_v4 module and provides all required module parameters as arguments, plus additional arguments to configure the Linode’s deployment. For details on each parameter, see the linode_v4 Module Parameters section.

          Note

          Usage of groups is deprecated, but still supported by Linode’s API v4. The Linode dynamic inventory plugin requires groups to generate an Ansible inventory, and groups will be used later in this guide.
        • The register keyword defines a variable name, my_linode, that will store the linode_v4 module’s return data. For instance, you could reference the my_linode variable later in your Playbook to complete other actions using data about your Linode. This keyword is not required to deploy a Linode instance, but it represents a common way to declare and use variables in Ansible Playbooks. The task in the snippet below uses Ansible’s debug module and the my_linode variable to print out a message with the Linode instance’s ID and IPv4 address during Playbook execution.

          
          ...
          - name: Print info about my Linode instance
            debug:
              msg: "ID is {{ my_linode.instance.id }} IP is {{ my_linode.instance.ipv4 }}"

      Create the Variables File

      In the previous section, you created the Create Linode Playbook to deploy Linode instances and made use of Jinja template variables. In this section, you will create the variables file to provide values to those template variables.

      1. Create the directory to store your Playbook’s variable files. The directory is structured to group your variable files by inventory group. This directory structure supports the use of file-level encryption that Ansible Vault can detect and parse. Although file-level encryption is not used in this guide’s example, the structure is followed as a best practice.

        mkdir -p ~/development/group_vars/example_group/
        
      2. Create the variables file and populate it with the example variables. You can replace the values with your own.

        ~/development/group_vars/example_group/vars
        
        ssh_keys: >
                ['ssh-rsa AAAAB3N..5bYqyRaQ== user@mycomputer', '~/.ssh/id_rsa.pub']
        label: simple-linode-
            
        • The ssh_keys example passes a list of two public SSH keys. The first provides the string value of the key, while the second provides a local public key file location.

          Configure your SSH Agent

          If your SSH Keys are passphrase-protected, you should add the keys to your SSH agent so that Ansible does not hang when running Playbooks on the remote Linode. The following instructions are for Linux systems:

          1. Run the following command; if you stored your private key in another location, update the path that’s passed to ssh-add accordingly:

            eval $(ssh-agent) && ssh-add ~/.ssh/id_rsa
            

            If you start a new terminal, you will need to run the commands in this step again before having access to the keys stored in your SSH agent.

        • label provides a label prefix that will be concatenated with a random number. This occurs when the Create Linode Playbook’s Jinja templating for the label argument is parsed (label: "{{ label }}{{ 100 |random }}").
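
        To preview what the random filter produces, you can evaluate the expression with an ad-hoc debug task; each run prints a different integer between 0 and 99:

          ansible localhost -m debug -a 'msg={{ 100 | random }}'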

      Encrypt Sensitive Variables with Ansible Vault

      Ansible Vault allows you to encrypt sensitive data, like passwords or tokens, to keep them from being exposed in your Ansible Playbooks or Roles. You will take advantage of this functionality to keep your Linode instance’s password and access_token encrypted within the variables file.

      Note

      Ansible Vault can also encrypt entire files containing sensitive values. View Ansible’s documentation on Vault for more information.
      1. Create your Ansible Vault password file and add your password to the file. Recall that the location of the password file was configured in the ansible.cfg file in the Configure Ansible section of this guide.

        ~/development/vault-pass
        
        My.ANS1BLEvault-c00lPassw0rd
            
      2. Encrypt the value of your Linode’s root user password using Ansible Vault. Replace My.c00lPassw0rd with your own strong password that conforms to the root_pass parameter’s constraints.

        ansible-vault encrypt_string 'My.c00lPassw0rd' --name 'password'
        

        You will see a similar output:

          
              password: !vault |
                  $ANSIBLE_VAULT;1.1;AES256
                  30376134633639613832373335313062366536313334316465303462656664333064373933393831
                  3432313261613532346134633761316363363535326333360a626431376265373133653535373238
                  38323166666665376366663964343830633462623537623065356364343831316439396462343935
                  6233646239363434380a383433643763373066633535366137346638613261353064353466303734
                  3833
        Encryption successful
      3. Copy the generated output and add it to your vars file.

      4. Encrypt the value of your access token. Replace the value of 86210...1e1c6bd with your own access token.

        ansible-vault encrypt_string '86210...1e1c6bd' --name 'token'
        
      5. Copy the generated output and append it to the bottom of your vars file.

        The final vars file should resemble the example below:

        ~/development/group_vars/example_group/vars
        
        ssh_keys: >
                ['ssh-rsa AAAAB3N..5bYqyRaQ== user@mycomputer', '~/.ssh/id_rsa.pub']
        label: simple-linode-
        password: !vault |
                  $ANSIBLE_VAULT;1.1;AES256
                  30376134633639613832373335313062366536313334316465303462656664333064373933393831
                  3432313261613532346134633761316363363535326333360a626431376265373133653535373238
                  38323166666665376366663964343830633462623537623065356364343831316439396462343935
                  6233646239363434380a383433643763373066633535366137346638613261353064353466303734
                  3833
        token: !vault |
                  $ANSIBLE_VAULT;1.1;AES256
                  65363565316233613963653465613661316134333164623962643834383632646439306566623061
                  3938393939373039373135663239633162336530373738300a316661373731623538306164363434
                  31656434356431353734666633656534343237333662613036653137396235353833313430626534
                  3330323437653835660a303865636365303532373864613632323930343265343665393432326231
                  61313635653463333630636631336539643430326662373137303166303739616262643338373834
                  34613532353031333731336339396233623533326130376431346462633832353432316163373833
                  35316333626530643736636332323161353139306533633961376432623161626132353933373661
                  36663135323664663130
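
        As a quick sanity check that Ansible can decrypt the vaulted values, you can print one of them with an ad-hoc debug task. Run it from the ~/development directory so that ansible.cfg, and therefore the vault password file, is found:

          ansible localhost -m debug -a 'var=token' -e '@group_vars/example_group/vars'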
            

      Run the Ansible Playbook

      You are now ready to run the Create Linode Playbook. When you run the Playbook, a 1 GB Nanode will be deployed in the Newark data center. Note: you should run Ansible commands from the directory where your ansible.cfg file is located.

      1. Run your playbook to create your Linode instances.

        ansible-playbook ~/development/linode_create.yml
        

        You will see a similar output:

        PLAY [Create Linode] *********************************************************************
        
        TASK [Gathering Facts] *******************************************************************
        ok: [localhost]
        
        TASK [Create a new Linode.] **************************************************************
        changed: [localhost]
        
        PLAY RECAP *******************************************************************************
        localhost                  : ok=3    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
        

      linode_v4 Module Parameters

      Each parameter below is listed with its data type and status (required, optional, or deprecated).

      access_token (string, required): Your Linode API v4 access token. The token should have permission to read and write Linodes. The token can also be specified by exposing the LINODE_ACCESS_TOKEN environment variable.

      authorized_keys (list): A list of SSH public keys or SSH public key file locations on your local system, for example, ['averylongstring','~/.ssh/id_rsa.pub']. The public key will be stored in the /root/.ssh/authorized_keys file on your Linode. Ansible will use the public key to SSH into your Linodes as the root user and execute your Playbooks.

      group (string, deprecated): The Linode instance’s group. Please note, group labelling is deprecated but still supported. The encouraged method for marking instances is to use tags. This parameter must be provided to use the Linode dynamic inventory plugin.

      image (string): The Image ID to deploy the Linode disk from. Official Linode Images start with linode/, while your private images start with private/. For example, use linode/ubuntu18.04 to deploy a Linode instance with the Ubuntu 18.04 image. This parameter is only required when creating Linode instances.

      To view a list of all available Linode images, issue the following command:

      curl https://api.linode.com/v4/images

      label (string, required): The Linode instance label. The label is used by the module as the main determiner of idempotence and must be a unique value.

      Linode labels have the following constraints:

      • Must start with an alpha character.
      • May only consist of alphanumeric characters, dashes (-), underscores (_) or periods (.).
      • Cannot have two dashes (--), underscores (__) or periods (..) in a row.

      region (string): The region where the Linode will be located. This parameter is only required when creating Linode instances.

      To view a list of all available regions, issue the following command:

      curl https://api.linode.com/v4/regions

      root_pass (string): The password for the root user. If not specified, one will be generated. The generated password will be available in the task success JSON.

      The root password must conform to the following constraints:

      • May only use alphanumerics, punctuation, spaces, and tabs.
      • Must contain at least two of the following character classes: upper-case letters, lower-case letters, digits, punctuation.

      state (string, required): The desired instance state. The accepted values are absent and present.

      tags (list): The user-defined labels attached to Linodes. Tags are used for grouping Linodes in a way that is relevant to the user.

      type (string): The Linode instance’s plan type. The plan type determines your Linode’s hardware resources and its pricing.

      To view a list of all available Linode types, including pricing and specifications for each type, issue the following command:

      curl https://api.linode.com/v4/linode/types
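
      If you have jq installed (an optional tool, not required by this guide), you can filter that list down to just the type IDs and labels:

      curl -s https://api.linode.com/v4/linode/types | jq -r '.data[] | "\(.id)  \(.label)"'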

      The Linode Dynamic Inventory Plugin

      Ansible uses inventories to manage the different hosts that make up your infrastructure. This allows you to execute tasks on specific parts of your infrastructure. By default, Ansible will look in /etc/ansible/hosts for an inventory; however, you can designate a different location for your inventory file and use multiple inventory files that represent your infrastructure. To support infrastructures that shift over time, Ansible offers the ability to track inventory from dynamic sources, like cloud providers. The Ansible dynamic inventory plugin for Linode can be used to source your inventory from Linode’s API v4. In this section, you will use the Linode plugin to source your Ansible deployed Linode inventory.

      Note

      The dynamic inventory plugin for Linode was enabled in the Ansible configuration file created in the Configure Ansible section of this guide.

      Configure the Plugin

      1. Configure the Ansible dynamic inventory plugin for Linode by creating a file named linode.yml.

        ~/development/linode.yml
        
        plugin: linode
        regions:
          - us-east
        groups:
          - example_group
        types:
          - g6-nanode-1

        • The configuration file will create an inventory for any Linodes on your account that are in the us-east region, part of the example_group group, and of type g6-nanode-1. Any Linodes that are not part of the example_group group, but that fulfill the us-east region and g6-nanode-1 type, will be displayed as ungrouped. All other Linodes will be excluded from the dynamic inventory. For more information on all supported parameters, see the Plugin Parameters section.

      Run the Inventory Plugin

      1. Export your Linode API v4 access token to the shell environment. LINODE_ACCESS_TOKEN must be used as the environment variable name. Replace mytoken with your own access token.

        export LINODE_ACCESS_TOKEN='mytoken'
        
      2. Run the Linode dynamic inventory plugin.

        ansible-inventory -i ~/development/linode.yml --graph
        

        You should see a similar output. The output may vary depending on the Linodes already deployed to your account and the parameter values you pass.

        @all:
        |--@example_group:
        |  |--simple-linode-29
        

        For a more detailed output including all Linode instance configurations, issue the following command:

        ansible-inventory -i ~/development/linode.yml --graph --vars
        
      3. Before you can communicate with your Linode instances using the dynamic inventory plugin, you will need to add your Linode’s IPv4 address and label to your /etc/hosts file.

        The Linode Dynamic Inventory Plugin assumes that the Linodes in your account have labels that correspond to hostnames that are in your resolver search path, /etc/hosts. This means you will have to create an entry in your /etc/hosts file to map the Linode’s IPv4 address to its hostname.

        Note

        A pull request currently exists to support using a public IP, private IP or hostname. This change will enable the inventory plugin to be used with infrastructure that does not have DNS hostnames or hostnames that match Linode labels.

        To add your deployed Linode instance to the /etc/hosts file:

        • Retrieve your Linode instance’s IPv4 address:

          ansible-inventory -i ~/development/linode.yml --graph --vars | grep -E 'ipv4|simple-linode'
          

          Your output will resemble the following:

          |  |--simple-linode-36
          |  |  |--{ipv4 = [u'192.0.2.0']}
          |  |  |--{label = simple-linode-36}
          
        • Open the /etc/hosts file and add your Linode’s IPv4 address and label:

          /etc/hosts
          
          127.0.0.1       localhost
          192.0.2.0 simple-linode-29
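
          Alternatively, you can append the entry from the command line; replace the example IP address and label with your own values:

            echo '192.0.2.0 simple-linode-29' | sudo tee -a /etc/hosts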
                    
      4. Verify that you can communicate with your grouped inventory by pinging the Linodes. The ping command will use the dynamic inventory plugin configuration file to target example_group. The -u root option will run the command as root on the Linode hosts.

        ansible -m ping example_group -i ~/development/linode.yml -u root
        

        You should see a similar output:

        simple-linode-29 | SUCCESS => {
            "ansible_facts": {
                "discovered_interpreter_python": "/usr/bin/python"
            },
            "changed": false,
            "ping": "pong"
        }
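
        The same inventory file can also drive full Playbooks. For example, assuming a Playbook named example_playbook.yml (a hypothetical name) whose play targets hosts: example_group, you could run:

          ansible-playbook -i ~/development/linode.yml -u root example_playbook.yml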
        

      Plugin Parameters

      Each parameter below is listed with its data type and status.

      access_token (string, required): Your Linode API v4 access token. The token should have permission to read and write Linodes. The token can also be specified by exposing the LINODE_ACCESS_TOKEN environment variable.

      plugin (string, required): The plugin name. The value must always be linode in order to use the dynamic inventory plugin for Linode.

      regions (list): The Linode regions with which to populate the inventory. For example, us-east is a possible value for this parameter.

      To view a list of all available regions, issue the following command:

      curl https://api.linode.com/v4/regions

      types (list): The Linode types with which to populate the inventory. For example, g6-nanode-1 is a possible value for this parameter.

      To view a list of all available Linode types, including pricing and specifications for each type, issue the following command:

      curl https://api.linode.com/v4/linode/types

      groups (list): The Linode groups with which to populate the inventory. Please note, group labelling is deprecated but still supported. The encouraged method for marking instances is to use tags. This parameter must be provided to use the Linode dynamic inventory plugin.

      Delete Your Resources

      1. To delete the Linode instance created in this guide, create a Delete Linode Playbook with the content shown in the following example. Replace the value of label with your Linode’s label:

        ~/development/linode_delete.yml
        
        - name: Delete Linode
          hosts: localhost
          vars_files:
            - ./group_vars/example_group/vars
          tasks:
          - name: Delete your Linode Instance.
            linode_v4:
              label: simple-linode-29
              state: absent
              
      2. Run the Delete Linode Playbook. Because the play does not set the access_token parameter, the linode_v4 module reads the LINODE_ACCESS_TOKEN environment variable that you exported in the Run the Inventory Plugin section:

        ansible-playbook ~/development/linode_delete.yml
        


      Deploy Persistent Volume Claims with the Linode Block Storage CSI Driver


      Updated by Linode. Written by Linode Community.

      What is the Linode Block Storage CSI Driver?

      The Container Storage Interface (CSI) defines a standard that storage providers can use to expose block and file storage systems to container orchestration systems. Linode’s Block Storage CSI driver follows this specification to allow container orchestration systems, like Kubernetes, to use Block Storage Volumes to persist data despite a Pod’s lifecycle. A Block Storage Volume can be attached to any Linode to provide additional storage.

      Before You Begin

      • This guide assumes you have a working Kubernetes cluster running on Linode. You can deploy a Kubernetes cluster on Linode in the following ways:

        1. Use Linode’s k8s-alpha CLI to deploy a Kubernetes cluster via the command line.

        2. Deploy a cluster using Terraform and the Linode Kubernetes Terraform installer.

        3. Use kubeadm to manually deploy a Kubernetes cluster on Linode. You can follow the Getting Started with Kubernetes: Use kubeadm to Deploy a Cluster on Linode guide to do this.

        Note

        • If using the k8s-alpha CLI or the Linode Kubernetes Terraform installer methods to deploy a cluster, you can skip the Installing the CSI Driver section of this guide, since it will be automatically installed when you deploy a cluster.

          Move on to the Attach a Pod to the Persistent Volume Claim section to learn how to consume a Block Storage volume as part of your deployment.

      • The Block Storage CSI supports Kubernetes version 1.13 or higher. To check the version of Kubernetes you are running, you can issue the following command:

        kubectl version
        

      Installing the CSI Driver

      Create a Kubernetes Secret

      A secret in Kubernetes is any token, password, or credential that you want Kubernetes to store for you. In the case of the Block Storage CSI, you’ll want to store an API token, and for convenience, the region you would like your Block Storage Volume to be placed in.

      Note

      Your Block Storage Volume must be in the same data center as your Kubernetes cluster.

      To create an API token:

      1. Log into the Linode Cloud Manager.

      2. Navigate to your account profile by clicking on your username at the top of the page and selecting My Profile. On mobile screen resolutions, this link is in the sidebar navigation.

      3. Click on the API Tokens tab.

      4. Click on Add a Personal Access Token. The Add Personal Access Token menu appears.

      5. Provide a label for the token. This is how you will reference your token within the Cloud Manager.

      6. Set an expiration date for the token with the Expiry dropdown.

      7. Set your permissions for the token. You will need Read/Write access for Volumes, and Read/Write access for Linodes.

      8. Click Submit.

      Your access token will appear on the screen. Copy this down somewhere safe, as once you click OK you will not be able to retrieve the token again, and will need to create a new one.

      Once you have your API token, it’s time to create your secret.

      1. Run the following command to enter your token into memory:

        read -s -p "Linode API Access Token: " LINODE_TOKEN
        

        Press enter, and then paste in your API token.

      2. Run the following command to enter your region into memory:

        read -p "Linode Region of Cluster: " LINODE_REGION
        

        You can retrieve a full list of regions by using the Linode CLI:

        linode-cli regions list
        

        For example, if you want to use the Newark, NJ, USA data center, you would use us-east as your region.

      3. Create the secret by piping in the following secret manifest to the kubectl create command. Issue the following here document:

        cat <<EOF | kubectl create -f -
        
      4. Now, paste in the following manifest and press enter:

        apiVersion: v1
        kind: Secret
        metadata:
          name: linode
          namespace: kube-system
        stringData:
          token: "$LINODE_TOKEN"
          region: "$LINODE_REGION"
        EOF
        

      You can check to see if the command was successful by running the get secrets command in the kube-system namespace and looking for linode in the NAME column of the output:

      kubectl -n kube-system get secrets
      

      You should see output similar to the following:

      NAME                                             TYPE                                  DATA   AGE
      ...
      job-controller-token-6zzkw                       kubernetes.io/service-account-token   3      43h
      kube-proxy-token-td7k8                           kubernetes.io/service-account-token   3      43h
      linode                                           Opaque                                2      42h
      ...
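
      To double-check the stored values, you can decode a field from the secret; for example, the region:

      kubectl -n kube-system get secret linode -o jsonpath='{.data.region}' | base64 --decode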
      

      You are now ready to install the Block Storage CSI driver.

      Apply CSI Driver to your Cluster

      To install the Block Storage CSI driver, use the apply command and specify the following URL:

      kubectl apply -f https://raw.githubusercontent.com/linode/linode-blockstorage-csi-driver/master/pkg/linode-bs/deploy/releases/linode-blockstorage-csi-driver-v0.0.3.yaml
      

      The above file concatenates a few files needed to run the Block Storage CSI driver, including the volume attachment, driver registration, and provisioning sidecars. To see these files individually, visit the project’s GitHub repository.

      Once you have the Block Storage CSI driver installed, you are ready to provision a Persistent Volume Claim.

      Create a Persistent Volume Claim

      Caution

      The instructions in this section will create a Block Storage volume billable resource on your Linode account. A single volume can range from 10 GiB to 10,000 GiB in size and costs $0.10/GiB per month or $0.00015/GiB per hour. If you do not want to keep using the Block Storage volume that you create, be sure to delete it when you have finished the guide.

      If you remove the resources afterward, you will only be billed for the hour(s) that the resources were present on your account. Consult the Billing and Payments guide for detailed information about how hourly billing works and for a table of plan pricing.

      A Persistent Volume Claim (PVC) consumes a Block Storage Volume. To create a PVC, create a manifest file with the following YAML:

      pvc.yaml
      
      apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        name: pvc-example
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 10Gi
        storageClassName: linode-block-storage

      This PVC represents a Block Storage Volume. Because Block Storage Volumes have a minimum size of 10 gigabytes, the storage has been set to 10Gi. If you choose a size smaller than 10 gigabytes, the PVC will default to 10 gigabytes.

      Currently the only mode supported by the Linode Block Storage CSI driver is ReadWriteOnce, meaning that it can only be connected to one Kubernetes node at a time.

      To create the PVC in Kubernetes, issue the create command and pass in the pvc.yaml file:

      kubectl create -f pvc.yaml
      

      After a few moments your Block Storage Volume will be provisioned and your Persistent Volume Claim will be ready to use.

      You can check the status of your PVC by issuing the following command:

      kubectl get pvc
      

      You should see output like the following:

      NAME          STATUS   VOLUME                 CAPACITY   ACCESS MODES   STORAGECLASS           AGE
      pvc-example   Bound    pvc-0e95b811652111e9   10Gi       RWO            linode-block-storage   2m
      

      Now that you have a PVC, you can attach it to a Pod.

      Attach a Pod to the Persistent Volume Claim

      Now you need to instruct a Pod to use the Persistent Volume Claim. For this example, you will create a Pod that is running an ownCloud container, which will use the PVC.

      To create a pod that will use the PVC:

      1. Create a manifest file for the Pod and give it the following YAML:

        owncloud-pod.yaml
        
        apiVersion: v1
        kind: Pod
        metadata:
          name: owncloud
          labels:
            app: owncloud
        spec:
          containers:
            - name: owncloud
              image: owncloud/server
              ports:
                - containerPort: 8080
              volumeMounts:
              - mountPath: "/mnt/data/files"
                name: pvc-example
          volumes:
            - name: pvc-example
              persistentVolumeClaim:
                claimName: pvc-example

        This Pod will run the owncloud/server Docker container image. Because ownCloud stores its files in the /mnt/data/files directory, this owncloud-pod.yaml manifest instructs the ownCloud container to create a mount point at that file path for your PVC.

        In the volumes section of the owncloud-pod.yaml, it is important to set the claimName to the exact name you’ve given your PersistentVolumeClaim in its manifest’s metadata. In this case, the name is pvc-example.

      2. Use the create command to create the ownCloud Pod:

        kubectl create -f owncloud-pod.yaml
        
      3. After a few moments your Pod should be up and running. To see the status of your Pod, issue the get pods command:

        kubectl get pods
        

        You should see output like the following:

        NAME       READY   STATUS    RESTARTS   AGE
        owncloud   1/1     Running   0          2m
        
      4. To list the contents of the /mnt/data/files directory within the container, which is the mount point for your PVC, issue the following command on your container:

        kubectl exec -it owncloud -- /bin/sh -c "ls /mnt/data/files"
        

        You should see output similar to the following:

        admin  avatars  files_external  index.html  owncloud.db  owncloud.log
        

        These files are created by ownCloud, and those files now live on your Block Storage Volume. The admin directory is the directory for the default user, and any files you upload to the admin account will appear in this folder.

      To complete the example, you should be able to access the ownCloud Pod via your browser. To accomplish this task, you will need to create a Service.

      1. Create a Service manifest file and copy in the following YAML:

        owncloud-service.yaml
        
        kind: Service
        apiVersion: v1
        metadata:
          name: owncloud
        spec:
          selector:
            app: owncloud
          ports:
          - protocol: TCP
            port: 80
            targetPort: 8080
          type: NodePort

        Note

        The service manifest file will use the NodePort method to get external traffic to the ownCloud service. NodePort opens a specific port on all cluster Nodes and any traffic that is sent to this port is forwarded to the service. Kubernetes will choose the port to open on the nodes if you do not provide one in your service manifest file. It is recommended to let Kubernetes handle the assignment. Kubernetes will choose a port in the default range, 30000-32767.

        Alternatively, you could use the LoadBalancer service type, instead of NodePort, which will create Linode NodeBalancers that will direct traffic to the ownCloud Pods. Linode’s Cloud Controller Manager (CCM) is responsible for provisioning the Linode NodeBalancers. For more details, see the Kubernetes Cloud Controller Manager for Linode repository.
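
        If you would rather pin the port than let Kubernetes choose it, you can add a nodePort field to the port definition in the manifest above; a sketch, where 30080 is an example value within the allowed range:

          ports:
          - protocol: TCP
            port: 80
            targetPort: 8080
            nodePort: 30080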

      2. Create the service in Kubernetes by using the create command and passing in the owncloud-service.yaml file you created in the previous step:

        kubectl create -f owncloud-service.yaml
        
      3. To retrieve the port that the ownCloud Pod is listening on, use the describe command on the newly created Service:

        kubectl describe service owncloud
        

        You should see output like the following:

        Name:                     owncloud
        Namespace:                default
        Labels:                   <none>
        Annotations:              <none>
        Selector:                 app=owncloud
        Type:                     NodePort
        IP:                       10.106.101.155
        Port:                     <unset>  80/TCP
        TargetPort:               8080/TCP
        NodePort:                 <unset>  30068/TCP
        Endpoints:                10.244.1.17:8080
        Session Affinity:         None
        External Traffic Policy:  Cluster
        Events:                   <none>
        

        Find the NodePort. In this example the port is 30068.
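
        You can also extract just the port with a jsonpath query, assuming the Service is named owncloud as above:

          kubectl get service owncloud -o jsonpath='{.spec.ports[0].nodePort}'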

      4. Now you need to find out which Node your Pod is running on. Use the describe command on the Pod to find the IP address of the Node:

        kubectl describe pod owncloud
        

        You should see output like the following:

        Name:               owncloud
        Namespace:          default
        Priority:           0
        PriorityClassName:  <none>
        Node:               kube-node/192.0.2.155
        Start Time:         Mon, 22 Apr 2019 17:07:20 +0000
        Labels:             app=owncloud
        Annotations:        <none>
        Status:             Running
        IP:                 10.244.1.17
        

        The IP address of the Node in this example is 192.0.2.155. Your ownCloud Pod in this example would be accessible from http://192.0.2.155:30068.
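
        The Node’s IP address is also available directly from the Pod’s status via a jsonpath query:

          kubectl get pod owncloud -o jsonpath='{.status.hostIP}'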

      5. Navigate to the URL of the Node, including the NodePort you looked up in a previous step. You will be presented with the ownCloud login page. You can log in with the username admin and the password admin.

      6. Upload a file. You will use this file to test the Persistent Volume Claim.

      7. The Persistent Volume Claim has been created and is using your Block Storage Volume. To prove this point, you can delete the ownCloud Pod and recreate it, and the Persistent Volume Claim will continue to house your data:

        kubectl delete pod owncloud
        
        kubectl create -f owncloud-pod.yaml
        

        Once the Pod has finished provisioning you can log back in to ownCloud and view the file you previously uploaded.

      You have successfully created a Block Storage Volume tied to a Persistent Volume Claim and have mounted it with a container in a Pod.

      Delete a Persistent Volume Claim

      To delete the Block Storage volume created in this guide:

      1. First, delete the ownCloud Pod:

        kubectl delete pods owncloud
        
      2. Then, delete the persistent volume claim:

        kubectl delete pvc pvc-example
        


      How to Deploy Kubernetes on Linode with the k8s-alpha CLI


      Updated by Linode. Written by Linode.

      Caution

      This guide’s example instructions will create several billable resources on your Linode account. If you do not want to keep using the example cluster that you create, be sure to delete it when you have finished the guide.

      If you remove the resources afterward, you will only be billed for the hour(s) that the resources were present on your account. Consult the Billing and Payments guide for detailed information about how hourly billing works and for a table of plan pricing.

      What is the k8s-alpha CLI?

      The Linode k8s-alpha CLI is a plugin for the Linode CLI that offers quick, single-command deployments of Kubernetes clusters on your Linode account. When you have it installed, creating a cluster can be as simple as:

      linode-cli k8s-alpha create example-cluster
      

      The clusters that it creates are pre-configured with useful Linode integrations, like our CCM, CSI, and ExternalDNS plugins. In addition, the Kubernetes metrics-server is pre-installed, so you can run kubectl top. Nodes in your clusters will also be labeled with the Linode Region and Linode Type, which can be used by Kubernetes controllers for the purpose of scheduling pods.
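
      For example, once your cluster is up, you can verify the metrics-server and inspect the node labels (the exact label keys may vary by release):

      kubectl top nodes
      kubectl get nodes --show-labels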

      What are Linode’s CCM, CSI, and ExternalDNS plugins?

      The CCM (Cloud Controller Manager), CSI (Container Storage Interface), and ExternalDNS plugins are Kubernetes addons published by Linode. You can use them to create NodeBalancers, Block Storage Volumes, and DNS records through your Kubernetes manifests.

      The k8s-alpha CLI will create two kinds of nodes on your account:

      • Master nodes will run the components of your Kubernetes control plane, and will also run etcd.

      • Worker nodes will run your workloads.

      These nodes will all exist as billable services on your account. You can specify how many master and worker nodes are created and also your nodes’ Linode plan and the data center they are located in.

      Alternatives for Creating Clusters

      Another easy way to create clusters is with Rancher. Rancher is a web application that provides a graphical interface for creating and managing clusters. Rancher also provides easy interfaces for deploying and scaling apps on your clusters, and it has a built-in catalog of curated apps to choose from.

      To get started with Rancher, review our How to Deploy Kubernetes on Linode with Rancher 2.2 guide. Rancher is capable of importing clusters that were created outside of it, so you can still use it even if you create your clusters through the k8s-alpha CLI or some other means.

      Beginners Resources

      If you haven’t used Kubernetes before, we recommend reading through our introductory guides on the subject.

      Before You Begin

      1. You will need to have a personal access token for Linode’s API. If you don’t have one already, follow the Get an Access Token section of our API guide and create a token with read/write permissions.

      2. If you do not already have a public-private SSH key pair, you will need to generate one. Follow the Generate a Key Pair section of our Public Key Authentication guide for instructions.

        Note

        If you’re unfamiliar with the concept of public-private key pairs, the introduction to our Public Key Authentication guide explains what they are.

      Install the k8s-alpha CLI

      The k8s-alpha CLI is bundled with the Linode CLI, and using it requires the installation and configuration of a few dependencies:

      • Terraform: The k8s-alpha CLI creates clusters by defining a resource plan in Terraform and then having Terraform create those resources. If you’re interested in how Terraform works, you can review our Beginner’s Guide to Terraform, but doing so is not required to use the k8s-alpha CLI.

      • kubectl: kubectl is the client software for Kubernetes, and it is used to interact with your Kubernetes cluster’s API.

      • SSH agent: Terraform will rely on public-key authentication to connect to the Linodes that it creates, and you will need to configure your SSH agent on your computer with the keys that Terraform should use.

      Install the Linode CLI

      Follow the Install the CLI section of our CLI guide to install the Linode CLI. If you already have the CLI, upgrade it to the latest version available:

      pip install --upgrade linode-cli
      

      Install Terraform

      Follow the instructions in the Install Terraform section of our Use Terraform to Provision Linode Environments guide.

      Install kubectl

      macOS:

      Install via Homebrew:

      brew install kubernetes-cli
      

      If you don’t have Homebrew installed, visit the Homebrew home page for instructions. Alternatively, you can manually install the binary; visit the Kubernetes documentation for instructions.

      Linux:

      1. Download the latest Kubernetes release:

        curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
        
      2. Make the downloaded file executable:

        chmod +x ./kubectl
        
      3. Move the command into your PATH:

        sudo mv ./kubectl /usr/local/bin/kubectl
        

      Note

      Windows:

      Visit the Kubernetes documentation for a link to the most recent Windows release.

      Configure your SSH Agent

      Your SSH key pair is stored in your home directory (or another location), but the k8s-alpha CLI’s Terraform implementation will not be able to reference your keys without first communicating your keys to Terraform. To communicate your keys to Terraform, you’ll first start the ssh-agent process. ssh-agent will cache your private keys for other processes, including keys that are passphrase-protected.

      Linux: Run the following command; if you stored your private key in another location, update the path that’s passed to ssh-add accordingly:

      eval $(ssh-agent) && ssh-add ~/.ssh/id_rsa
      

      Note

      You will need to run all of your k8s-alpha CLI commands from the terminal that you start the ssh-agent process in. If you start a new terminal, you will need to run the commands in this step again before using the k8s-alpha CLI.

      macOS: macOS has an ssh-agent process that persists across all of your terminal sessions, and it can store your private key passphrases in the operating system’s Keychain Access service.

      1. Update your ~/.ssh/config SSH configuration file. This configuration will add keys to the persistent agent and store passphrases in the OS keychain:

        ~/.ssh/config
        
        Host *
          AddKeysToAgent yes
          UseKeychain yes
          IdentityFile ~/.ssh/id_rsa

      Note

      Although kubectl should be used in all cases possible to interact with nodes in your cluster, the key pair cached in the ssh-agent process will enable you to access individual nodes via SSH as the core user.

      2. Add your key to the ssh-agent process:

        ssh-add -K ~/.ssh/id_rsa
        

      Create a Cluster

      1. To create your first cluster, run:

        linode-cli k8s-alpha create example-cluster
        
      2. Your terminal will show output related to the Terraform plan for your cluster. The output will halt with the following messages and prompt:

        Plan: 5 to add, 0 to change, 0 to destroy.
        
        Do you want to perform these actions in workspace "example-cluster"?
          Terraform will perform the actions described above.
          Only 'yes' will be accepted to approve.
        
          Enter a value:
        

        Note

        Your Terraform configurations will be stored under ~/.k8s-alpha-linode/

      3. Enter yes at the Enter a value: prompt. The Terraform plan will be applied over the next few minutes.

        Note

        You may see an error like the following:

          
        Error creating a Linode Instance: [400] Account Limit reached. Please open a support ticket.
        
        

        If this appears, then you have run into a limit on the number of resources allowed on your Linode account. If this is the case, or if your nodes do not appear in the Linode Cloud Manager as expected, contact Linode Support. This limit also applies to Block Storage Volumes and NodeBalancers, which some of your cluster app deployments may try to create.

      4. When the operation finishes, you will see output like the following:

        Apply complete! Resources: 5 added, 0 changed, 0 destroyed.
        Switched to context "example-cluster-4-kacDTg9RmZK@example-cluster-4".
        Your cluster has been created and your kubectl context updated.
        
        Try the following command:
        kubectl get pods --all-namespaces
        
        Come hang out with us in #linode on the Kubernetes Slack! http://slack.k8s.io/
        
      5. If you visit the Linode Cloud Manager, you will see your newly created cluster nodes on the Linodes page. By default, your Linodes will be created under the region and Linode plan that you have set as the default for your Linode CLI. To set new defaults for your Linode CLI, run:

        linode-cli configure
        

        The k8s-alpha CLI will conform to your CLI defaults, with the following exceptions:

        • If you set a default plan size smaller than Linode 4GB, the k8s-alpha CLI will create your master node(s) on the Linode 4GB plan, which is the minimum recommended for master nodes. It will still create your worker nodes using your default plan.

        • The k8s-alpha CLI will always create nodes running CoreOS (instead of the default distribution that you set).

      6. The k8s-alpha CLI will also update your kubectl client’s configuration (the kubeconfig file) to allow immediate access to the cluster. Review the Manage your Clusters with kubectl section for further instructions.

      Cluster Creation Options

      The following optional arguments are available:

      linode-cli k8s-alpha create example-cluster-2 --node-type g6-standard-1 --nodes 6 --master-type g6-standard-4 --region us-east --ssh-public-key $HOME/.ssh/id_rsa.pub
      
      --node-type TYPE: The Linode Type ID for cluster worker nodes (which you can retrieve by running linode-cli linodes types).

      --nodes COUNT: The number of Linodes to deploy as Nodes in the cluster (default 3).

      --master-type TYPE: The Linode Type ID for cluster master nodes (which you can retrieve by running linode-cli linodes types).

      --region REGION: The Linode Region ID in which to deploy the cluster (which you can retrieve by running linode-cli regions list).

      --ssh-public-key KEYPATH: The path to the public key file that will be used to access Nodes during initial provisioning only. The key pair must be added to an ssh-agent (default $HOME/.ssh/id_rsa.pub).

      Delete a Cluster

      1. To delete a cluster, run the delete command with the name of your cluster:

        linode-cli k8s-alpha delete example-cluster
        
      2. Your terminal will show output from Terraform that describes the deletion operation. The output will halt with the following messages and prompt:

        Plan: 0 to add, 0 to change, 5 to destroy.
        
        Do you really want to destroy all resources in workspace "example-cluster"?
          Terraform will destroy all your managed infrastructure, as shown above.
          There is no undo. Only 'yes' will be accepted to confirm.
        
          Enter a value:
        
      3. Enter yes at the Enter a value: prompt. The nodes in your cluster will be deleted over the next few minutes.

      4. You should also log in to the Linode Cloud Manager and confirm that any Volumes and NodeBalancers created by your cluster app deployments have been deleted.

      5. Deleting the cluster will not remove the kubectl client configuration that the CLI inserted into your kubeconfig file. Review the Remove a Cluster’s Context section if you’d like to remove this information.

      Manage your Clusters with kubectl

      The k8s-alpha CLI will automatically configure your kubectl client to connect to your cluster. Specifically, this connection information is stored in your kubeconfig file. The path for this file is normally ~/.kube/config.

      Use the kubectl client to interact with your cluster’s Kubernetes API. This will work in the same way as with any other cluster. For example, you can get all the pods in your cluster:

      kubectl get pods --all-namespaces
      

      Review the Kubernetes documentation for more information about how to use kubectl.

      Switch between Cluster Contexts

      If you have more than one cluster set up, you can switch your kubectl client between them. To list all of your cluster contexts:

      kubectl config get-contexts
      

      An asterisk will appear before the current context:

      CURRENT   NAME                                                      CLUSTER                 AUTHINFO                            NAMESPACE
      *         example-cluster-kat7BqBBgU8@example-cluster               example-cluster         example-cluster-kat7BqBBgU8
                example-cluster-2-kacDTg9RmZK@example-cluster-2           example-cluster-2       example-cluster-2-kacDTg9RmZK
      

      To switch to another context, use the use-context subcommand and pass the value under the NAME column:

      kubectl config use-context example-cluster-2-kacDTg9RmZK@example-cluster-2
      

      All kubectl commands that you issue will now apply to the cluster you chose.

      Remove a Cluster’s Context

      When you delete a cluster with the k8s-alpha CLI, its connection information will persist in your local kubeconfig file, and it will still appear when you run kubectl config get-contexts. To remove this connection data, run the following commands:

      kubectl config delete-cluster example-cluster
      kubectl config delete-context example-cluster-kat7BqBBgU8@example-cluster
      kubectl config unset users.example-cluster-kat7BqBBgU8
      
      • For the delete-cluster subcommand, supply the value that appears under the CLUSTER column in the output from get-contexts.

      • For the delete-context subcommand, supply the value that appears under the NAME column in the output from get-contexts.

      • For the unset subcommand, supply users.<AUTHINFO>, where <AUTHINFO> is the value that appears under the AUTHINFO column in the output from get-contexts.

      Next Steps

      Now that you have a cluster up and running, you’re ready to start deploying apps to it. Review our other Kubernetes guides for help with deploying software and managing your cluster.
