
      A Beginner's Guide to LXD: Setting Up an Apache Webserver In a Container


      Updated by Linode Contributed by Simos Xenitellis

      Access an Apache Web Server Inside a LXD Container

      What is LXD?

      LXD (pronounced “Lex-Dee”) is a system container manager built on top of LXC (Linux Containers) that is currently supported by Canonical. The goal of LXD is to provide an experience similar to a virtual machine but through containerization rather than hardware virtualization. Compared to Docker for delivering applications, LXD offers nearly full operating-system functionality with additional features such as snapshots, live migrations, and storage management.

      The main benefits of LXD are its support for high-density containers and the performance it delivers compared to virtual machines. A computer with 2GB RAM can adequately support half a dozen containers. In addition, LXD officially supports the container images of major Linux distributions, so you can choose which Linux distribution and version to run in each container.

      This guide covers how to install and set up LXD 3 on a Linode and how to set up an Apache web server in a container.

      Note

      For simplicity, the term container is used throughout this guide to describe the LXD system containers.

      Before You Begin

      1. Complete the Getting Started guide. Select a Linode with at least 2GB of RAM, such as the Linode 2GB. Specify the Ubuntu 19.04 distribution. You may specify a different Linux distribution, as long as there is support for snap packages (snapd); see the More Information section for more details.

      2. This guide will use sudo wherever necessary. Follow the Securing Your Server guide to create a limited (non-root) user account, harden SSH access, and remove unnecessary network services.

      3. Update your system:

        sudo apt update && sudo apt upgrade
        

      Configure the Snap Package Support

      LXD is available as a Debian package in the long-term support (LTS) versions of Ubuntu, such as Ubuntu 18.04 LTS. For other versions of Ubuntu and other distributions, LXD is available as a snap package. Snap packages are universal packages: a single package file works on any supported Linux distribution. See the More Information section for more details on what a snap package is, which Linux distributions are supported, and how to set it up.
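      On Ubuntu 19.04, snap support is provided by the snapd package, which is usually preinstalled. If it is missing, it can typically be installed from the distribution's own repositories; the command below applies to Ubuntu and Debian, while other distributions package snapd under the same name (see the More Information section):

        sudo apt install snapd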

      1. Verify that snap support is installed correctly. The following command lists the installed snap packages; on a fresh system, it reports that none are installed yet.

        snap list
        
          
        No snaps are installed yet. Try 'snap install hello-world'.
        
        
      2. View the details of the LXD snap package lxd. The output below shows that, currently, the latest version of LXD is 3.12 in the default stable channel. This channel is updated often with new features. There are also other channels such as the 3.0/stable channel which has the LTS LXD version (supported along with Ubuntu 18.04, until 2023) and the 2.0/stable channel (supported along with Ubuntu 16.04, until 2021). We will be using the latest version of LXD from the default stable channel.

        snap info lxd
        
          
        name:      lxd
        summary:   System container manager and API
        publisher: Canonical✓
        contact:   https://github.com/lxc/lxd/issues
        license:   Apache-2.0
        description: |
          **LXD is a system container manager**
        
          With LXD you can run hundreds of containers of a variety of Linux
          distributions, apply resource limits, pass in directories, USB devices
          or GPUs and setup any network and storage you want.
        
          LXD containers are lightweight, secure by default and a great
          alternative to running Linux virtual machines.
        
        
          **Run any Linux distribution you want**
        
          Pre-made images are available for Ubuntu, Alpine Linux, ArchLinux,
          CentOS, Debian, Fedora, Gentoo, OpenSUSE and more.
        
          A full list of available images can be [found
          here](https://images.linuxcontainers.org)
        
          Can't find the distribution you want? It's easy to make your own images
          too, either using our `distrobuilder` tool or by assembling your own image
          tarball by hand.
        
        
          **Containers at scale**
        
          LXD is network aware and all interactions go through a simple REST API,
          making it possible to remotely interact with containers on remote
          systems, copying and moving them as you wish.
        
          Want to go big? LXD also has built-in clustering support,
          letting you turn dozens of servers into one big LXD server.
        
        
          **Configuration options**
        
          Supported options for the LXD snap (`snap set lxd KEY=VALUE`):
           - criu.enable: Enable experimental live-migration support [default=false]
           - daemon.debug: Increases logging to debug level [default=false]
           - daemon.group: Group of users that can interact with LXD [default=lxd]
           - ceph.builtin: Use snap-specific ceph configuration [default=false]
           - openvswitch.builtin: Run a snap-specific OVS daemon [default=false]
        
          [Documentation](https://lxd.readthedocs.io)
        snap-id: J60k4JY0HppjwOjW8dZdYc8obXKxujRu
        channels:
          stable:        3.12        2019-04-16 (10601) 56MB -
          candidate:     3.12        2019-04-26 (10655) 56MB -
          beta:          ↑
          edge:          git-570aaa1 2019-04-27 (10674) 56MB -
          3.0/stable:    3.0.3       2018-11-26  (9663) 53MB -
          3.0/candidate: 3.0.3       2019-01-19  (9942) 53MB -
          3.0/beta:      ↑
          3.0/edge:      git-eaa62ce 2019-02-19 (10212) 53MB -
          2.0/stable:    2.0.11      2018-07-30  (8023) 28MB -
          2.0/candidate: 2.0.11      2018-07-27  (8023) 28MB -
          2.0/beta:      ↑
          2.0/edge:      git-c7c4cc8 2018-10-19  (9257) 26MB -
        
        
      3. Install the lxd snap package:

        sudo snap install lxd
        
          
        lxd 3.12 from Canonical✓ installed
        
        

      You can verify that the snap package has been installed by running snap list again. The core snap package is a prerequisite for any system with snap package support. When you install your first snap package, core is installed and shared among all other snap packages that will get installed in the future.

          snap list
      
      
        
      Name  Version  Rev    Tracking  Publisher   Notes
      core  16-2.38  6673   stable    canonical✓  core
      lxd   3.12     10601  stable    canonical✓  -
      
      

      Initialize LXD

      1. Add your non-root Unix user to the lxd group:

        sudo usermod -a -G lxd username
        

        Note

        By adding the non-root Unix user account to the lxd group, you can run lxc commands without prepending sudo. Otherwise, you would need to prepend sudo to each lxc command.

      2. Start a new SSH session for the previous change to take effect. For example, log out and log in again.

      3. Verify the available free disk space:

        df -h /
        
          
        Filesystem      Size  Used Avail Use% Mounted on
        /dev/sda         49G  2.0G   45G   5% /
        
        

        In this case there is 45GB of free disk space. LXD requires at least 15GB of space for the storage needs of containers. We will allocate 15GB of space for LXD, leaving 30GB of free space for the needs of the server.

      4. Run lxd init to initialize LXD:

        sudo lxd init
        

        You will be prompted several times during the initialization process. Choose the defaults for all options.

          
        Would you like to use LXD clustering? (yes/no) [default=no]:
        Do you want to configure a new storage pool? (yes/no) [default=yes]:
        Name of the new storage pool [default=default]:
        Name of the storage backend to use (btrfs, ceph, dir, lvm, zfs) [default=zfs]:
        Create a new ZFS pool? (yes/no) [default=yes]:
        Would you like to use an existing block device? (yes/no) [default=no]:
        Size in GB of the new loop device (1GB minimum) [default=15GB]:
        Would you like to connect to a MAAS server? (yes/no) [default=no]:
        Would you like to create a new local network bridge? (yes/no) [default=yes]:
        What should the new bridge be called? [default=lxdbr0]:
        What IPv4 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]:
        What IPv6 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]:
        Would you like LXD to be available over the network? (yes/no) [default=no]:
        Would you like stale cached images to be updated automatically? (yes/no) [default=yes]
        Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]:
        
        
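        If you would like to review what lxd init created, the following commands list the new storage pool and the lxdbr0 network bridge:

          lxc storage list
          lxc network list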

      Apache Web Server with LXD

      This section will create a container, install the Apache web server, and add the appropriate LXD proxy device in order to expose port 80.

      1. Launch a new container:

        lxc launch ubuntu:18.04 web
        
      2. Update the package list in the container.

        lxc exec web -- apt update
        
      3. Install Apache in the LXD container.

        lxc exec web -- apt install apache2
        
      4. Get a shell in the LXD container.

        lxc exec web -- sudo --user ubuntu --login
        
      5. Edit Apache's default web page to indicate that it is running inside an LXD container.

        sudo nano /var/www/html/index.html
        

        Change the line It works! (line number 224) to It works inside a LXD container!. Then, save and exit.

      6. Exit back to the host. We have made all the necessary changes to the container.

        exit
        
      7. Add an LXD proxy device that redirects connections from the internet to port 80 (HTTP) on the server through to port 80 of this container.

        sudo lxc config device add web myport80 proxy listen=tcp:0.0.0.0:80 connect=tcp:127.0.0.1:80
        

        Note

        In recent versions of LXD, you need to specify an IP address (such as 127.0.0.1) instead of a hostname (such as localhost). If your container already has a proxy device that uses hostnames, you can edit the container configuration to replace them with IP addresses by running lxc config edit web.
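
        As a quick check from the host, you can confirm that the proxy device was added and that Apache answers on port 80; the grep simply looks for the heading you edited earlier:

          lxc config device show web
          curl http://127.0.0.1/ | grep "It works"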

      8. From your local computer, navigate to your Linode’s public IP address in a web browser. You should see the default Apache page:

        Web page of Apache server running in a container

      Common LXD Commands

      • List all containers:

        lxc list
        
          
        To start your first container, try: lxc launch ubuntu:18.04
        
        +------+-------+------+------+------+-----------+
        | NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
        +------+-------+------+------+------+-----------+
        
        
      • List all available repositories of container images:

        lxc remote list
        
          
        +-----------------+------------------------------------------+---------------+-------------+--------+--------+
        |      NAME       |                   URL                    |   PROTOCOL    |  AUTH TYPE  | PUBLIC | STATIC |
        +-----------------+------------------------------------------+---------------+-------------+--------+--------+
        | images          | https://images.linuxcontainers.org       | simplestreams | none        | YES    | NO     |
        +-----------------+------------------------------------------+---------------+-------------+--------+--------+
        | local (default) | unix://                                  | lxd           | file access | NO     | YES    |
        +-----------------+------------------------------------------+---------------+-------------+--------+--------+
        | ubuntu          | https://cloud-images.ubuntu.com/releases | simplestreams | none        | YES    | YES    |
        +-----------------+------------------------------------------+---------------+-------------+--------+--------+
        | ubuntu-daily    | https://cloud-images.ubuntu.com/daily    | simplestreams | none        | YES    | YES    |
        +-----------------+------------------------------------------+---------------+-------------+--------+--------+
        
        

        The repository ubuntu has container images of Ubuntu versions. The images repository has container images of a large number of different Linux distributions. The ubuntu-daily repository has daily container images, intended for testing purposes. The local repository is the LXD server that we have just installed. It is not public and can be used to store your own container images.

      • List all available container images from a repository:

        lxc image list ubuntu:
        
          
        +------------------+--------------+--------+-----------------------------------------------+---------+----------+-------------------------------+
        |      ALIAS       | FINGERPRINT  | PUBLIC |                  DESCRIPTION                  |  ARCH   |   SIZE   |          UPLOAD DATE          |
        +------------------+--------------+--------+-----------------------------------------------+---------+----------+-------------------------------+
        | b (11 more)      | 5b72cf46f628 | yes    | ubuntu 18.04 LTS amd64 (release) (20190424)   | x86_64  | 180.37MB | Apr 24, 2019 at 12:00am (UTC) |
        +------------------+--------------+--------+-----------------------------------------------+---------+----------+-------------------------------+
        | c (5 more)       | 4716703f04fc | yes    | ubuntu 18.10 amd64 (release) (20190402)       | x86_64  | 313.29MB | Apr 2, 2019 at 12:00am (UTC)  |
        +------------------+--------------+--------+-----------------------------------------------+---------+----------+-------------------------------+
        | d (5 more)       | faef94acf5f9 | yes    | ubuntu 19.04 amd64 (release) (20190417)       | x86_64  | 322.56MB | Apr 17, 2019 at 12:00am (UTC) |
        +------------------+--------------+--------+-----------------------------------------------+---------+----------+-------------------------------+
        .....................................................................
        
        

        Note

        The first two columns for the alias and fingerprint provide an identifier that can be used to specify the container image when launching it.

        The output snippet shows the container images Ubuntu versions 18.04 LTS, 18.10, and 19.04. When creating a container we can just specify the short alias. For example, ubuntu:b means that the repository is ubuntu and the container image has the short alias b (for bionic, the codename of Ubuntu 18.04 LTS).

      • Get more information about a container image:

        lxc image info ubuntu:b
        
          
        Fingerprint: 5b72cf46f628b3d60f5d99af48633539b2916993c80fc5a2323d7d841f66afbe
        Size: 180.37MB
        Architecture: x86_64
        Public: yes
        Timestamps:
            Created: 2019/04/24 00:00 UTC
            Uploaded: 2019/04/24 00:00 UTC
            Expires: 2023/04/26 00:00 UTC
            Last used: never
        Properties:
            release: bionic
            version: 18.04
            architecture: amd64
            label: release
            serial: 20190424
            description: ubuntu 18.04 LTS amd64 (release) (20190424)
            os: ubuntu
        Aliases:
            - 18.04
            - 18.04/amd64
            - b
            - b/amd64
            - bionic
            - bionic/amd64
            - default
            - default/amd64
            - lts
            - lts/amd64
            - ubuntu
            - amd64
        Cached: no
        Auto update: disabled
        
        

        The output shows the details of the container image including all the available aliases. For Ubuntu 18.04 LTS, we can specify either b (for bionic, the codename of Ubuntu 18.04 LTS) or any other alias.

      • Launch a new container with the name mycontainer:

        lxc launch ubuntu:18.04 mycontainer
        
          
        Creating mycontainer
        Starting mycontainer
        
        
      • Check the list of containers to make sure the new container is running:

        lxc list
        
          
        +-------------+---------+-----------------------+---------------------------+------------+-----------+
        |    NAME     |  STATE  |         IPV4          |          IPV6             |    TYPE    | SNAPSHOTS |
        +-------------+---------+-----------------------+---------------------------+------------+-----------+
        | mycontainer | RUNNING | 10.142.148.244 (eth0) | fde5:5d27:...:1371 (eth0) | PERSISTENT | 0         |
        +-------------+---------+-----------------------+---------------------------+------------+-----------+
        
        
      • Execute basic commands in mycontainer:

        lxc exec mycontainer -- apt update
        lxc exec mycontainer -- apt upgrade
        

        Note

        The characters -- instruct the lxc command not to parse any more command-line parameters.
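
        For example, in the following command the -la flags belong to ls inside the container; without the -- separator, lxc would try to interpret them as its own options:

          lxc exec mycontainer -- ls -la /var/log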

      • Open a shell session within mycontainer:

        lxc exec mycontainer -- sudo --login --user ubuntu
        
          
        To run a command as administrator (user "root"), use "sudo <command>".
        See "man sudo_root" for details.
        
        ubuntu@mycontainer:~$
        
        

        Note

        The Ubuntu container images have by default a non-root account with username ubuntu. This account can use sudo and does not require a password to perform administrative tasks.

        The sudo command provides a login to the existing account ubuntu.

      • View the container logs:

        lxc info mycontainer --show-log
        
      • Stop the container:

        lxc stop mycontainer
        
      • Remove the container:

        lxc delete mycontainer
        

        Note

        A container needs to be stopped before it can be deleted.
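
        If you prefer a single step, recent LXD releases also accept a --force flag that stops a running container and deletes it in one command:

          lxc delete mycontainer --force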

      Troubleshooting

      Error “unix.socket: connect: connection refused”

      When you run any lxc command, you get the following error:

          lxc list
      
      
        
      Error: Get http://unix.socket/1.0: dial unix /var/snap/lxd/common/lxd/unix.socket: connect: connection refused
      
      

      This happens when the LXD service is not currently running. By default, the LXD service starts running as soon as it has been configured successfully. See Initialize LXD to configure LXD.
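
      If the service has stopped for some reason, you can check its status and start it again through snap; these are general snap service commands rather than anything specific to this guide:

          snap services lxd
          sudo snap start lxd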

      Error “unix.socket: connect: permission denied”

      When you run any lxc command, you get the following error:

          lxc list
      
      
        
      Error: Get http://unix.socket/1.0: dial unix /var/snap/lxd/common/lxd/unix.socket: connect: permission denied
      
      

      This happens when your limited user account is not a member of the lxd group, or you have not logged out and back in so that the new lxd group membership takes effect.

      If your user account is ubuntu, the following command shows whether you are a member of the lxd group:

          groups ubuntu
      
      
        
      ubuntu : ubuntu sudo lxd
      
      

      In this example, we are members of the lxd group and we just need to log out and log in again. If you are not a member of the lxd group, see Initialize LXD on how to make your limited account a member of the lxd group.
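
      If your account is missing from the lxd group, the fix is the same usermod command used earlier, followed by logging out and back in. Replace username with your limited user account:

          sudo usermod -a -G lxd username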

      Next Steps

      If you plan to host a single website, then a single proxy device pointing to the website container will suffice. If you plan to host multiple websites, you may set up virtual hosts inside that same container. If, instead, you would like each website in its own container, then you will need to set up a reverse proxy in a container. In that case, the proxy device would point to the reverse proxy container, which would then direct connections to the individual website containers.

      More Information

      You may wish to consult the following resources for additional information on this topic. While these are provided in the hope that they will be useful, please note that we cannot vouch for the accuracy or timeliness of externally hosted materials.


      This guide is published under a CC BY-ND 4.0 license.




      How to use the Linode Ansible Module to Deploy Linodes


      Updated by Linode Contributed by Linode

      Ansible is a popular open-source tool that can be used to automate common IT tasks, like cloud provisioning and configuration management. With Ansible’s 2.8 release, you can deploy Linode instances using our latest API (v4). Ansible’s linode_v4 module adds the functionality needed to deploy and manage Linodes via the command line or in your Ansible Playbooks, while the dynamic inventory plugin for Linode helps you source your Ansible inventory directly from the Linode API (v4).

      In this guide you will learn how to:

      • Deploy and manage Linodes using Ansible and the linode_v4 module.
      • Create an Ansible inventory for your Linode infrastructure using the dynamic inventory plugin for Linode.

      Caution

      This guide’s example instructions will create a 1 GB Nanode billable resource on your Linode account. If you do not want to keep using the Nanode that you create, be sure to delete the resource when you have finished the guide.

      If you remove the resource afterward, you will only be billed for the hour(s) that the resources were present on your account.

      Before You Begin


      Configure Ansible

      The Ansible configuration file is used to adjust Ansible’s default system settings. Ansible will search for a configuration file in the directories listed below, in the order specified, and apply the first configuration values it finds:

      • ANSIBLE_CONFIG environment variable pointing to a configuration file location. If passed, it will override the default Ansible configuration file.
      • ansible.cfg file in the current directory
      • ~/.ansible.cfg in the home directory
      • /etc/ansible/ansible.cfg

      In this section, you will create an Ansible configuration file and add options to disable host key checking and to whitelist the Linode inventory plugin. The Ansible configuration file will be located in a development directory that you create; however, it could exist in any of the locations listed above. See Ansible’s official documentation for a full list of available configuration settings.

      Caution

      When storing your Ansible configuration file, ensure that its corresponding directory does not have world-writable permissions. This could pose a security risk that allows malicious users to use Ansible to exploit your local system and remote infrastructure. At minimum, the directory should restrict access to particular users and groups. For example, you can create an ansible group, only add privileged users to the ansible group, and update the Ansible configuration file’s directory to have 764 permissions. See the Linux Users and Groups guide for more information on permissions.
      1. In your home directory, create a directory to hold all of your Ansible related files and move into the directory:

        mkdir development && cd development
        
      2. Create the Ansible configuration file, ansible.cfg in the development directory and add the host_key_checking and enable_plugins options.

        ~/development/ansible.cfg
        
        [defaults]
        host_key_checking = False
        VAULT_PASSWORD_FILE = ./vault-pass
        [inventory]
        enable_plugins = linode
              
        • host_key_checking = False will allow Ansible to SSH into hosts without having to accept the remote server’s host key. This will disable host key checking globally.
        • VAULT_PASSWORD_FILE = ./vault-pass is used to specify a Vault password file to use whenever Ansible Vault requires a password. Ansible Vault offers several options for password management. To learn more about password management, read Ansible’s Providing Vault Passwords documentation.
        • enable_plugins = linode enables the Linode dynamic inventory plugin.

      Create a Linode Instance

      You can now begin creating Linode instances using Ansible. In this section, you will create an Ansible Playbook that can deploy Linodes.

      Create your Linode Playbook

      1. Ensure you are in the development directory that you created in the Configure Ansible section:

        cd ~/development
        
      2. Using your preferred text editor, create the Create Linode Playbook file and include the following values:

        ~/development/linode_create.yml
        
        - name: Create Linode
          hosts: localhost
          vars_files:
              - ./group_vars/example_group/vars
          tasks:
          - name: Create a new Linode.
            linode_v4:
              label: "{{ label }}{{ 100 |random }}"
              access_token: "{{ token }}"
              type: g6-nanode-1
              region: us-east
              image: linode/debian9
              root_pass: "{{ password }}"
              authorized_keys: "{{ ssh_keys }}"
              group: example_group
              tags: example_group
              state: present
            register: my_linode
            
        • The linode_create.yml Playbook contains the Create Linode play, which will be executed on hosts: localhost. This means the Ansible playbook will execute on the local system and use it as a vehicle to deploy the remote Linode instances.
        • The vars_files key provides the location of a local file that contains variable values to populate in the play. The value of any variables defined in the vars file will substitute any Jinja template variables used in the Playbook. Jinja template variables are any variables between curly brackets, like: {{ my_var }}.
        • The Create a new Linode task calls the linode_v4 module and provides all required module parameters as arguments, plus additional arguments to configure the Linode’s deployment. For details on each parameter, see the linode_v4 Module Parameters section.

          Note

          Usage of groups is deprecated, but still supported by Linode’s API v4. The Linode dynamic inventory module requires groups to generate an Ansible inventory and will be used later in this guide.
        • The register keyword defines a variable name, my_linode, that will store the linode_v4 module return data. For instance, you could reference the my_linode variable later in your Playbook to complete other actions using data about your Linode. This keyword is not required to deploy a Linode instance, but represents a common way to declare and use variables in Ansible Playbooks. The task in the snippet below will use Ansible’s debug module and the my_linode variable to print out a message with the Linode instance’s ID and IPv4 address during Playbook execution.

          
          ...
          - name: Print info about my Linode instance
            debug:
              msg: "ID is {{ my_linode.instance.id }} IP is {{ my_linode.instance.ipv4 }}"
                  

      Create the Variables File

      In the previous section, you created the Create Linode Playbook to deploy Linode instances and made use of Jinja template variables. In this section, you will create the variables file to provide values to those template variables.

      1. Create the directory to store your Playbook’s variable files. The directory is structured to group your variable files by inventory group. This directory structure supports the use of file level encryption that Ansible Vault can detect and parse. Although it is not relevant to this guide’s example, it will be used as a best practice.

        mkdir -p ~/development/group_vars/example_group/
        
      2. Create the variables file and populate it with the example variables. You can replace the values with your own.

        ~/development/group_vars/example_group/vars
        
        ssh_keys: >
                ['ssh-rsa AAAAB3N..5bYqyRaQ== user@mycomputer', '~/.ssh/id_rsa.pub']
        label: simple-linode-
            
        • The ssh_keys example passes a list of two public SSH keys. The first provides the string value of the key, while the second provides a local public key file location.

          Configure your SSH Agent

          If your SSH Keys are passphrase-protected, you should add the keys to your SSH agent so that Ansible does not hang when running Playbooks on the remote Linode. The following instructions are for Linux systems:

          1. Run the following command; if you stored your private key in another location, update the path that’s passed to ssh-add accordingly:

            eval $(ssh-agent) && ssh-add ~/.ssh/id_rsa
            

            If you start a new terminal, you will need to run the commands in this step again before having access to the keys stored in your SSH agent.

        • label provides a label prefix that will be concatenated with a random number. This occurs when the Create Linode Playbook’s Jinja templating for the label argument is parsed (label: "{{ label }}{{ 100 |random }}").

      Encrypt Sensitive Variables with Ansible Vault

      Ansible Vault allows you to encrypt sensitive data, like passwords or tokens, to keep them from being exposed in your Ansible Playbooks or Roles. You will take advantage of this functionality to keep your Linode instance’s password and access_token encrypted within the variables file.

      Note

      Ansible Vault can also encrypt entire files containing sensitive values. View Ansible’s documentation on Vault for more information.
      1. Create your Ansible Vault password file and add your password to the file. Remember the location of the password file was configured in the ansible.cfg file in the Configure Ansible section of this guide.

        ~/development/vault-pass
        
        My.ANS1BLEvault-c00lPassw0rd
            
      2. Encrypt the value of your Linode’s root user password using Ansible Vault. Replace My.c00lPassw0rd with your own strong password that conforms to the root_pass parameter’s constraints.

        ansible-vault encrypt_string 'My.c00lPassw0rd' --name 'password'
        

        You will see a similar output:

          
              password: !vault |
                  $ANSIBLE_VAULT;1.1;AES256
                  30376134633639613832373335313062366536313334316465303462656664333064373933393831
                  3432313261613532346134633761316363363535326333360a626431376265373133653535373238
                  38323166666665376366663964343830633462623537623065356364343831316439396462343935
                  6233646239363434380a383433643763373066633535366137346638613261353064353466303734
                  3833
        Encryption successful
      3. Copy the generated output and add it to your vars file.

      4. Encrypt the value of your access token. Replace the value of 86210...1e1c6bd with your own access token.

        ansible-vault encrypt_string '86210...1e1c6bd' --name 'token'
        
      5. Copy the generated output and append it to the bottom of your vars file.

        The final vars file should resemble the example below:

        ~/development/group_vars/example_group/vars
        
        ssh_keys: >
                ['ssh-rsa AAAAB3N..5bYqyRaQ== user@mycomputer', '~/.ssh/id_rsa.pub']
        label: simple-linode-
        password: !vault |
                  $ANSIBLE_VAULT;1.1;AES256
                  30376134633639613832373335313062366536313334316465303462656664333064373933393831
                  3432313261613532346134633761316363363535326333360a626431376265373133653535373238
                  38323166666665376366663964343830633462623537623065356364343831316439396462343935
                  6233646239363434380a383433643763373066633535366137346638613261353064353466303734
                  3833
        token: !vault |
                  $ANSIBLE_VAULT;1.1;AES256
                  65363565316233613963653465613661316134333164623962643834383632646439306566623061
                  3938393939373039373135663239633162336530373738300a316661373731623538306164363434
                  31656434356431353734666633656534343237333662613036653137396235353833313430626534
                  3330323437653835660a303865636365303532373864613632323930343265343665393432326231
                  61313635653463333630636631336539643430326662373137303166303739616262643338373834
                  34613532353031333731336339396233623533326130376431346462633832353432316163373833
                  35316333626530643736636332323161353139306533633961376432623161626132353933373661
                  36663135323664663130
            

      Run the Ansible Playbook

      You are now ready to run the Create Linode Playbook. When you run the Playbook, a 1 GB Nanode will be deployed in the Newark data center. Note: you should run Ansible commands from the directory where your ansible.cfg file is located.

      1. Run your playbook to create your Linode instances.

        ansible-playbook ~/development/linode_create.yml
        

        You will see a similar output:

        PLAY [Create Linode] *********************************************************************
        
        TASK [Gathering Facts] *******************************************************************
        ok: [localhost]
        
        TASK [Create a new Linode.] **************************************************************
        changed: [localhost]
        
        PLAY RECAP *******************************************************************************
        localhost                  : ok=3    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
        

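      If you would like to confirm the new instance outside of Ansible, you can also query the Linode API v4 directly. This assumes your access token is exported in the shell as LINODE_ACCESS_TOKEN, the same variable used later for the dynamic inventory plugin:

        curl -H "Authorization: Bearer $LINODE_ACCESS_TOKEN" https://api.linode.com/v4/linode/instances
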
      linode_v4 Module Parameters

      Each parameter below is listed with its data type and status, followed by its usage.

      • access_token (string, required): Your Linode API v4 access token. The token should have permission to read and write Linodes. The token can also be specified by exposing the LINODE_ACCESS_TOKEN environment variable.

      • authorized_keys (list): A list of SSH public keys or SSH public key file locations on your local system, for example, ['averylongstring','~/.ssh/id_rsa.pub']. The public key will be stored in the /root/.ssh/authorized_keys file on your Linode. Ansible will use the public key to SSH into your Linodes as the root user and execute your Playbooks.

      • group (string, deprecated): The Linode instance's group. Please note, group labelling is deprecated but still supported. The encouraged method for marking instances is to use tags. This parameter must be provided to use the Linode dynamic inventory module.

      • image (string): The Image ID to deploy the Linode disk from. Official Linode Images start with linode/, while your private images start with private/. For example, use linode/ubuntu18.04 to deploy a Linode instance with the Ubuntu 18.04 image. This is a required parameter only when creating Linode instances. To view a list of all available Linode images, issue the following command: curl https://api.linode.com/v4/images.

      • label (string, required): The Linode instance label. The label is used by the module as the main determiner for idempotence and must be a unique value.

        Linode labels have the following constraints:

        • Must start with an alpha character.
        • May only consist of alphanumeric characters, dashes (-), underscores (_) or periods (.).
        • Cannot have two dashes (--), underscores (__) or periods (..) in a row.

      • region (string): The region where the Linode will be located. This is a required parameter only when creating Linode instances. To view a list of all available regions, issue the following command: curl https://api.linode.com/v4/regions.

      • root_pass (string): The password for the root user. If not specified, one will be generated. This generated password will be available in the task success JSON.

        The root password must conform to the following constraints:

        • May only use alphanumerics, punctuation, spaces, and tabs.
        • Must contain at least two of the following character classes: upper-case letters, lower-case letters, digits, punctuation.

      • state (string, required): The desired instance state. The accepted values are absent and present.

      • tags (list): The user-defined labels attached to Linodes. Tags are used for grouping Linodes in a way that is relevant to the user.

      • type (string): The Linode instance's plan type. The plan type determines your Linode's hardware resources and its pricing. To view a list of all available Linode types including pricing and specifications for each type, issue the following command: curl https://api.linode.com/v4/linode/types.

      The Linode Dynamic Inventory Plugin

      Ansible uses inventories to manage different hosts that make up your infrastructure. This allows you to execute tasks on specific parts of your infrastructure. By default, Ansible will look in /etc/ansible/hosts for an inventory, however, you can designate a different location for your inventory file and use multiple inventory files that represent your infrastructure. To support infrastructures that shift over time, Ansible offers the ability to track inventory from dynamic sources, like cloud providers. The Ansible dynamic inventory plugin for Linode can be used to source your inventory from Linode’s API v4. In this section, you will use the Linode plugin to source your Ansible deployed Linode inventory.

      Note

      The dynamic inventory plugin for Linode was enabled in the Ansible configuration file created in the Configure Ansible section of this guide.

      Configure the Plugin

      1. Configure the Ansible dynamic inventory plugin for Linode by creating a file named linode.yml.

        ~/development/linode.yml
        
        plugin: linode
        regions:
          - us-east
        groups:
          - example_group
        types:
          - g6-nanode-1
        • The configuration file will create an inventory for any Linodes on your account that are in the us-east region, part of the example_group group, and of type g6-nanode-1. Any Linodes that are not part of the example_group group, but that fulfill the us-east region and g6-nanode-1 type, will be displayed as ungrouped. All other Linodes will be excluded from the dynamic inventory. For more information on all supported parameters, see the Plugin Parameters section.

      Run the Inventory Plugin

      1. Export your Linode API v4 access token to the shell environment. LINODE_ACCESS_TOKEN must be used as the environment variable name. Replace mytoken with your own access token.

        export LINODE_ACCESS_TOKEN='mytoken'
        
      2. Run the Linode dynamic inventory plugin.

        ansible-inventory -i ~/development/linode.yml --graph
        

        You should see a similar output. The output may vary depending on the Linodes already deployed to your account and the parameter values you pass.

        @all:
        |--@example_group:
        |  |--simple-linode-29
        

        For a more detailed output including all Linode instance configurations, issue the following command:

        ansible-inventory -i ~/development/linode.yml --graph --vars
        
      3. Before you can communicate with your Linode instances using the dynamic inventory plugin, you will need to add your Linode’s IPv4 address and label to your /etc/hosts file.

        The Linode Dynamic Inventory Plugin assumes that the Linodes in your account have labels that correspond to hostnames that are in your resolver search path, /etc/hosts. This means you will have to create an entry in your /etc/hosts file to map the Linode’s IPv4 address to its hostname.

        Note

        A pull request currently exists to support using a public IP, private IP or hostname. This change will enable the inventory plugin to be used with infrastructure that does not have DNS hostnames or hostnames that match Linode labels.

        To add your deployed Linode instance to the /etc/hosts file:

        • Retrieve your Linode instance’s IPv4 address:

          ansible-inventory -i ~/development/linode.yml --graph --vars | grep -E 'ipv4|simple-linode'
          

          Your output will resemble the following:

          |  |--simple-linode-36
          |  |  |--{ipv4 = [u'192.0.2.0']}
          |  |  |--{label = simple-linode-36}
          
        • Open the /etc/hosts file and add your Linode’s IPv4 address and label:

          /etc/hosts
          
          127.0.0.1       localhost
          192.0.2.0 simple-linode-29
                    
      4. Verify that you can communicate with your grouped inventory by pinging the Linodes. The ping command will use the dynamic inventory plugin configuration file to target example_group. The -u root option will run the command as root on the Linode hosts.

        ansible -m ping example_group -i ~/development/linode.yml -u root
        

        You should see a similar output:

        simple-linode-29 | SUCCESS => {
            "ansible_facts": {
                "discovered_interpreter_python": "/usr/bin/python"
            },
            "changed": false,
            "ping": "pong"
        }
        

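        The same dynamic inventory works for any ad-hoc Ansible command, not just ping. For example, the following sketch gathers the uptime of every Linode in example_group:

          ansible example_group -i ~/development/linode.yml -u root -m command -a 'uptime'
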
      Plugin Parameters

      Each parameter below is listed with its data type and status, followed by its usage.

      • access_token (string, required): Your Linode API v4 access token. The token should have permission to read and write Linodes. The token can also be specified by exposing the LINODE_ACCESS_TOKEN environment variable.

      • plugin (string, required): The plugin name. The value must always be linode in order to use the dynamic inventory plugin for Linode.

      • regions (list): The Linode regions with which to populate the inventory. For example, us-east is a possible value for this parameter. To view a list of all available regions, issue the following command: curl https://api.linode.com/v4/regions.

      • types (list): The Linode types with which to populate the inventory. For example, g6-nanode-1 is a possible value for this parameter. To view a list of all available Linode types including pricing and specifications for each type, issue the following command: curl https://api.linode.com/v4/linode/types.

      • groups (list): The Linode groups with which to populate the inventory. Please note, group labelling is deprecated but still supported. The encouraged method for marking instances is to use tags. This parameter must be provided to use the Linode dynamic inventory module.

      Delete Your Resources

      1. To delete the Linode instance created in this guide, create a Delete Linode Playbook with the following content in the example. Replace the value of label with your Linode’s label:

        ~/development/linode_delete.yml
        
        - name: Delete Linode
          hosts: localhost
          vars_files:
            - ./group_vars/example_group/vars
          tasks:
          - name: Delete your Linode Instance.
            linode_v4:
              label: simple-linode-29
              state: absent
              access_token: "{{ token }}"
              
      2. Run the Delete Linode Playbook:

        ansible-playbook ~/development/linode_delete.yml
        

      More Information

      You may wish to consult the following resources for additional information on this topic. While these are provided in the hope that they will be useful, please note that we cannot vouch for the accuracy or timeliness of externally hosted materials.


      This guide is published under a CC BY-ND 4.0 license.




      How To Scale a Node.js Application with MongoDB on Kubernetes Using Helm


      Introduction

      Kubernetes is a system for running modern, containerized applications at scale. With it, developers can deploy and manage applications across clusters of machines. And though it can be used to improve efficiency and reliability in single-instance application setups, Kubernetes is designed to run multiple instances of an application across groups of machines.

      When creating multi-service deployments with Kubernetes, many developers opt to use the Helm package manager. Helm streamlines the process of creating multiple Kubernetes resources by offering charts and templates that coordinate how these objects interact. It also offers pre-packaged charts for popular open-source projects.

      In this tutorial, you will deploy a Node.js application with a MongoDB database onto a Kubernetes cluster using Helm charts. You will use the official Helm MongoDB replica set chart to create a StatefulSet object consisting of three Pods, a Headless Service, and three PersistentVolumeClaims. You will also create a chart to deploy a multi-replica Node.js application using a custom application image. The setup you will build in this tutorial will mirror the functionality of the code described in Containerizing a Node.js Application with Docker Compose and will be a good starting point to build a resilient Node.js application with a MongoDB data store that can scale with your needs.

      Prerequisites

      To complete this tutorial, you will need:

      Step 1 — Cloning and Packaging the Application

      To use our application with Kubernetes, we will need to package it so that the kubelet agent can pull the image. Before packaging the application, however, we will need to modify the MongoDB connection URI in the application code to ensure that our application can connect to the members of the replica set that we will create with the Helm mongodb-replicaset chart.

      Our first step will be to clone the node-mongo-docker-dev repository from the DigitalOcean Community GitHub account. This repository includes the code from the setup described in Containerizing a Node.js Application for Development With Docker Compose, which uses a demo Node.js application with a MongoDB database to demonstrate how to set up a development environment with Docker Compose. You can find more information about the application itself in the series From Containers to Kubernetes with Node.js.

      Clone the repository into a directory called node_project:

      • git clone https://github.com/do-community/node-mongo-docker-dev.git node_project

      Navigate to the node_project directory:

      • cd node_project

      The node_project directory contains files and directories for a shark information application that works with user input. It has been modernized to work with containers: sensitive and specific configuration information has been removed from the application code and refactored to be injected at runtime, and the application's state has been offloaded to a MongoDB database.

      For more information about designing modern, containerized applications, please see Architecting Applications for Kubernetes and Modernizing Applications for Kubernetes.

      When we deploy the Helm mongodb-replicaset chart, it will create:

      • A StatefulSet object with three Pods — the members of the MongoDB replica set. Each Pod will have an associated PersistentVolumeClaim and will maintain a fixed identity in the event of rescheduling.
      • A MongoDB replica set made up of the Pods in the StatefulSet. The set will include one primary and two secondaries. Data will be replicated from the primary to the secondaries, ensuring that our application data remains highly available.

      For our application to interact with the database replicas, the MongoDB connection URI in our code will need to include both the hostnames of the replica set members as well as the name of the replica set itself. We therefore need to include these values in the URI.

      The file in our cloned repository that specifies database connection information is called db.js. Open that file now using nano or your favorite editor:

      • nano db.js

      Currently, the file includes constants that are referenced in the database connection URI at runtime. The values for these constants are injected using Node's process.env property, which returns an object with information about your user environment at runtime. Setting values dynamically in our application code allows us to decouple the code from the underlying infrastructure, which is necessary in a dynamic, stateless environment. For more information about refactoring application code in this way, see Step 2 of Containerizing a Node.js Application for Development With Docker Compose and the relevant discussion in The 12-Factor App.

      The constants for the connection URI and the URI string itself currently look like this:

      ~/node_project/db.js

      ...
      const {
        MONGO_USERNAME,
        MONGO_PASSWORD,
        MONGO_HOSTNAME,
        MONGO_PORT,
        MONGO_DB
      } = process.env;
      
      ...
      
      const url = `mongodb://${MONGO_USERNAME}:${MONGO_PASSWORD}@${MONGO_HOSTNAME}:${MONGO_PORT}/${MONGO_DB}?authSource=admin`;
      ...
      

      In keeping with a 12FA approach, we do not want to hard code the hostnames of our replica instances or our replica set name into this URI string. The existing MONGO_HOSTNAME constant can be expanded to include multiple hostnames — the members of our replica set — so we will leave that in place. We will need to add a replica set constant to the options section of the URI string, however.

      Add MONGO_REPLICASET to both the URI constant object and the connection string:

      ~/node_project/db.js

      ...
      const {
        MONGO_USERNAME,
        MONGO_PASSWORD,
        MONGO_HOSTNAME,
        MONGO_PORT,
        MONGO_DB,
        MONGO_REPLICASET
      } = process.env;
      
      ...
      const url = `mongodb://${MONGO_USERNAME}:${MONGO_PASSWORD}@${MONGO_HOSTNAME}:${MONGO_PORT}/${MONGO_DB}?replicaSet=${MONGO_REPLICASET}&authSource=admin`;
      ...
      

      Using the replicaSet option in the options section of the URI allows us to pass in the name of the replica set, which, along with the hostnames defined in the MONGO_HOSTNAME constant, will allow us to connect to the set members.
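
      For illustration only, the following shell snippet shows hypothetical values for these constants and the URI they would produce. The real hostnames, credentials, and replica set name are injected at deploy time by the charts you create later in this tutorial, not by exporting variables manually:

      # Hypothetical example values, not the ones used later in this tutorial
      export MONGO_USERNAME=sammy
      export MONGO_PASSWORD=your_password
      export MONGO_HOSTNAME=mongo-0.mongo,mongo-1.mongo,mongo-2.mongo
      export MONGO_PORT=27017
      export MONGO_DB=sharkinfo
      export MONGO_REPLICASET=db

      # Resulting connection string:
      # mongodb://sammy:your_password@mongo-0.mongo,mongo-1.mongo,mongo-2.mongo:27017/sharkinfo?replicaSet=db&authSource=admin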

      Save and close the file when you are finished editing.

      With your database connection information modified to work with replica sets, you can now package your application, build the image with the docker build command, and push it to Docker Hub.

      Build the image with docker build and the -t flag, which allows you to tag the image with a memorable name. In this case, tag the image with your Docker Hub username and name it node-replicas or a name of your own choosing:

      • docker build -t your_dockerhub_username/node-replicas .

      The . in the command specifies that the build context is the current directory.

      It will take a minute or two to build the image. Once it is complete, check your images:

      • docker images

      You will see the following output:

      Output

      REPOSITORY                              TAG         IMAGE ID       CREATED         SIZE
      your_dockerhub_username/node-replicas   latest      56a69b4bc882   7 seconds ago   90.1MB
      node                                    10-alpine   aa57b0242b33   6 days ago      71MB

      Next, log in to the Docker Hub account you created in the prerequisites:

      • docker login -u your_dockerhub_username

      When prompted, enter your Docker Hub account password. Logging in this way will create a ~/.docker/config.json file in your non-root user's home directory with your Docker Hub credentials.

      Push the application image to Docker Hub with the docker push command. Remember to replace your_dockerhub_username with your own Docker Hub username:

      • docker push your_dockerhub_username/node-replicas

      You now have an application image that you can pull to run your replicated application with Kubernetes. The next step will be to configure specific parameters to use with the MongoDB Helm chart.

      Step 2 — Creating Secrets for the MongoDB Replica Set

      The stable/mongodb-replicaset chart provides different options when it comes to using Secrets, and we will create two to use with our chart deployment:

      • A Secret for our replica set keyfile that will function as a shared password between replica set members, allowing them to authenticate other members.
      • A Secret for our MongoDB admin user, who will be created as a root user on the admin database. This role will allow you to create subsequent users with limited permissions when deploying your application to production.

      With these Secrets in place, we will be able to set our preferred parameter values in a dedicated values file and create the StatefulSet object and MongoDB replica set with the Helm chart.

      First, let's create the keyfile. We will use the openssl command with the rand option to generate a 756 byte random string for the keyfile:

      • openssl rand -base64 756 > key.txt

      The output generated by the command will be base64 encoded, ensuring uniform data transmission, and redirected to a file called key.txt, following the guidelines stated in the mongodb-replicaset chart authentication documentation. The key itself must be between 6 and 1024 characters long, consisting only of characters in the base64 set.

      You can now create a Secret called keyfilesecret using this file with kubectl create:

      • kubectl create secret generic keyfilesecret --from-file=key.txt

      This will create a Secret object in the default namespace, since we have not created a specific namespace for our setup.

      You will see the following output indicating that your Secret has been created:

      Output

      secret/keyfilesecret created

      Remove key.txt:

      • rm key.txt

      Alternatively, if you would like to save the file, be sure to restrict its permissions and add it to your .gitignore file to keep it out of version control.

      Next, create the Secret for your MongoDB admin user. The first step will be to convert your desired username and password to base64.

      Convert your database username:

      • echo -n 'your_database_username' | base64

      Note down the value you see in the output.

      Next, convert your password:

      • echo -n 'your_database_password' | base64

      Take note of the value in the output here as well.
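
      If you would like to double-check an encoded value before adding it to the Secret, you can decode it again. Replace the placeholder with the output you noted:

      • echo 'your_encoded_value' | base64 --decode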

      Open a file for the Secret:

      • nano secret.yaml

      Note: Kubernetes objects are typically defined using YAML, which strictly forbids tabs and requires two spaces for indentation. If you would like to check the formatting of any of your YAML files, you can use a linter or test the validity of your syntax using kubectl create with the --dry-run and --validate flags:

      • kubectl create -f your_yaml_file.yaml --dry-run --validate=true

      In general, it is a good idea to validate your syntax before creating resources with kubectl.

      Add the following code to the file to create a Secret that will define a user and password with the encoded values you just created. Be sure to replace the dummy values here with your own encoded username and password:

      ~/node_project/secret.yaml

      apiVersion: v1
      kind: Secret
      metadata:
        name: mongo-secret
      data:
        user: your_encoded_username
        password: your_encoded_password
      

      Here, we're using the key names that the mongodb-replicaset chart expects: user and password. We have named the Secret object mongo-secret, but you are free to name it anything you would like.

      Save and close the file when you are finished editing.

      Create the Secret object with the following command:

      • kubectl create -f secret.yaml

      You will see the following output:

      Output

      secret/mongo-secret created

      Again, you can either remove secret.yaml or restrict its permissions and add it to your .gitignore file.
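      Before moving on, you can optionally confirm that both Secrets exist. Listing them shows their names and the number of data keys, while describing a Secret shows its key names and sizes without revealing the values:

      • kubectl get secrets
      • kubectl describe secret mongo-secret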

      With your Secret objects created, you can move on to specifying the parameter values you will use with the mongodb-replicaset chart and creating the MongoDB deployment.

      Step 3 — Configuring the MongoDB Helm Chart and Creating a Deployment

      Helm comes with an actively maintained repository called stable that contains the chart we will be using: mongodb-replicaset. To use this chart with the Secrets we've just created, we will create a file with configuration parameter values called mongodb-values.yaml and then install the chart using this file.

      Our mongodb-values.yaml file will largely mirror the default values.yaml file in the mongodb-replicaset chart repository. We will, however, make the following changes to our file:

      • We will set the auth parameter to true to ensure that our database instances start with authorization enabled. This means that all clients will be required to authenticate for access to database resources and operations.
      • We will add information about the Secrets we created in the previous Step so that the chart can use these values to create the replica set keyfile and admin user.
      • We will decrease the size of the PersistentVolumes associated with each Pod in the StatefulSet to use the minimum viable DigitalOcean Block Storage unit, 1GB, though you are free to modify this to meet your storage requirements.

      Before writing the mongodb-values.yaml file, however, you should first check that you have a StorageClass created and configured to provision storage resources. Each of the Pods in your database StatefulSet will have a sticky identity and an associated PersistentVolumeClaim, which will dynamically provision a PersistentVolume for the Pod. If a Pod is rescheduled, the PersistentVolume will be mounted to whichever node the Pod is scheduled on (though each Volume must be manually deleted if its associated Pod or StatefulSet is permanently deleted).

      Because we are working with DigitalOcean Kubernetes, our default StorageClass provisioner is set to dobs.csi.digitalocean.com (DigitalOcean Block Storage), which we can check by typing:

      • kubectl get storageclass

      If you are working with a DigitalOcean cluster, you will see the following output:

      Output

      NAME                         PROVISIONER                 AGE
      do-block-storage (default)   dobs.csi.digitalocean.com   21m
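      If you would like more detail about this class before relying on dynamic provisioning, such as its reclaim policy, you can optionally describe it:

      • kubectl describe storageclass do-block-storage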

      If you are not working with a DigitalOcean cluster, you will need to create a StorageClass and configure a provisioner of your choice. For details about how to do this, please see the official documentation.

      Now that you have ensured that you have a StorageClass configured, open mongodb-values.yaml for editing:

      • nano mongodb-values.yaml

      You will set values in this file that will do the following:

      • Enable authorization.
      • Reference your keyfilesecret and mongo-secret objects.
      • Specify 1Gi for your PersistentVolumes.
      • Set your replica set name to db.
      • Specify 3 replicas for the set.
      • Pin the mongo image to the latest version at the time of writing: 4.1.9.

      Paste the following code into the file:

      ~/node_project/mongodb-values.yaml

      replicas: 3
      port: 27017
      replicaSetName: db
      podDisruptionBudget: {}
      auth:
        enabled: true
        existingKeySecret: keyfilesecret
        existingAdminSecret: mongo-secret
      imagePullSecrets: []
      installImage:
        repository: unguiculus/mongodb-install
        tag: 0.7
        pullPolicy: Always
      copyConfigImage:
        repository: busybox
        tag: 1.29.3
        pullPolicy: Always
      image:
        repository: mongo
        tag: 4.1.9
        pullPolicy: Always
      extraVars: {}
      metrics:
        enabled: false
        image:
          repository: ssalaues/mongodb-exporter
          tag: 0.6.1
          pullPolicy: IfNotPresent
        port: 9216
        path: /metrics
        socketTimeout: 3s
        syncTimeout: 1m
        prometheusServiceDiscovery: true
        resources: {}
      podAnnotations: {}
      securityContext:
        enabled: true
        runAsUser: 999
        fsGroup: 999
        runAsNonRoot: true
      init:
        resources: {}
        timeout: 900
      resources: {}
      nodeSelector: {}
      affinity: {}
      tolerations: []
      extraLabels: {}
      persistentVolume:
        enabled: true
        #storageClass: "-"
        accessModes:
          - ReadWriteOnce
        size: 1Gi
        annotations: {}
      serviceAnnotations: {}
      terminationGracePeriodSeconds: 30
      tls:
        enabled: false
      configmap: {}
      readinessProbe:
        initialDelaySeconds: 5
        timeoutSeconds: 1
        failureThreshold: 3
        periodSeconds: 10
        successThreshold: 1
      livenessProbe:
        initialDelaySeconds: 30
        timeoutSeconds: 5
        failureThreshold: 3
        periodSeconds: 10
        successThreshold: 1
      

      The persistentVolume.storageClass parameter is commented out here: uncommenting it and setting its value to "-" would disable dynamic provisioning. Because we are leaving this value undefined, the chart will use the default StorageClass provisioner, which in our case is dobs.csi.digitalocean.com.

      Also note the accessMode associated with the persistentVolume key: ReadWriteOnce means that the provisioned volume can be mounted as read-write by a single node only. Please see the documentation for more information about different access modes.

      To learn more about the other parameters included in the file, see the configuration table included with the repo.

      Save and close the file when you are finished editing.

      Before deploying the mongodb-replicaset chart, you will want to update the stable repo with the helm repo update command:

      • helm repo update

      This will get the latest chart information from the stable repository.

      Finally, install the chart with the following command:

      • helm install --name mongo -f mongodb-values.yaml stable/mongodb-replicaset

      Note: Before installing a chart, you can run helm install with the --dry-run and --debug options to check the generated manifests for your release:

      • helm install --name your_release_name -f your_values_file.yaml --dry-run --debug your_chart

      Note that we are naming the Helm release mongo. This name will refer to this particular deployment of the chart with the configuration options we've specified. We've pointed to these options by including the -f flag and our mongodb-values.yaml file.

      Also note that because we did not include the --namespace flag with helm install, our chart objects will be created in the default namespace.

      Once you have created the release, you will see output about its status, along with information about the created objects and instructions for interacting with them:

      Output

      NAME:   mongo
      LAST DEPLOYED: Tue Apr 16 21:51:05 2019
      NAMESPACE: default
      STATUS: DEPLOYED

      RESOURCES:
      ==> v1/ConfigMap
      NAME                              DATA  AGE
      mongo-mongodb-replicaset-init     1     1s
      mongo-mongodb-replicaset-mongodb  1     1s
      mongo-mongodb-replicaset-tests    1     0s
      ...
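      You can return to this summary at any time. With the version of Helm used here (Helm 2, which provides the --name flag), the release and its status can be re-displayed with:

      • helm ls
      • helm status mongo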

      You can now check on the creation of your Pods with the following command:

      • kubectl get pods

      You will see output like the following as the Pods are being created:

      Output

      NAME                         READY   STATUS     RESTARTS   AGE
      mongo-mongodb-replicaset-0   1/1     Running    0          67s
      mongo-mongodb-replicaset-1   0/1     Init:0/3   0          8s

      The READY and STATUS outputs here indicate that the Pods in our StatefulSet are not fully ready: the Init Containers associated with the Pod's containers are still running. Because StatefulSet members are created in sequential order, each Pod in the StatefulSet must be Running and Ready before the next Pod will be created.

      Once the Pods have been created and all of their associated containers are running, you will see this output:

      Output

      NAME                         READY   STATUS    RESTARTS   AGE
      mongo-mongodb-replicaset-0   1/1     Running   0          2m33s
      mongo-mongodb-replicaset-1   1/1     Running   0          94s
      mongo-mongodb-replicaset-2   1/1     Running   0          36s

      The Running STATUS indicates that your Pods are bound to nodes and that the containers associated with those Pods are running. READY indicates how many of a Pod's containers are ready out of the total. For more information, please consult the documentation on Pod lifecycles.

      Note:
      If you see unexpected phases in the STATUS column, remember that you can troubleshoot your Pods with the following commands:

      • kubectl describe pods your_pod
      • kubectl logs your_pod
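      Because each Pod in the StatefulSet also creates its own PersistentVolumeClaim, you can optionally verify that the 1Gi volumes requested in mongodb-values.yaml were dynamically provisioned:

      • kubectl get pvc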

      Each of the Pods in your StatefulSet has a name that combines the name of the StatefulSet with the ordinal index of the Pod. Because we created three replicas, our StatefulSet members are numbered 0-2, and each has a stable DNS entry composed of the following elements: $(statefulset-name)-$(ordinal).$(service name).$(namespace).svc.cluster.local.

      In our case, the StatefulSet and the Headless Service created by the mongodb-replicaset chart have the same names, which you can confirm by listing your StatefulSets:

      • kubectl get statefulset

      Output

      NAME                       READY   AGE
      mongo-mongodb-replicaset   3/3     4m2s

      And your Services:

      • kubectl get svc

      Output

      NAME                              TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)     AGE
      kubernetes                        ClusterIP   10.245.0.1   <none>        443/TCP     42m
      mongo-mongodb-replicaset          ClusterIP   None         <none>        27017/TCP   4m35s
      mongo-mongodb-replicaset-client   ClusterIP   None         <none>        27017/TCP   4m35s

      This means that the first member of our StatefulSet will have the following DNS entry:

      mongo-mongodb-replicaset-0.mongo-mongodb-replicaset.default.svc.cluster.local
      

      Because we need our application to connect to each MongoDB instance, it's essential that we have this information so that we can communicate directly with the Pods, rather than with the Service. When we create our custom application Helm chart, we will pass the DNS entries for each Pod to our application using environment variables.
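      If you would like to verify that these per-Pod DNS entries resolve inside the cluster, one option is to run a short-lived Pod and query a name directly; the Pod name dns-test and the busybox image tag used here are arbitrary choices for this check:

      • kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -- nslookup mongo-mongodb-replicaset-0.mongo-mongodb-replicaset.default.svc.cluster.local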

      With your database instances up and running, you are ready to create the chart for your Node application.

      Step 4 — Creating a Custom Application Chart and Configuring Parameters

      We will create a custom Helm chart for our Node application and modify the default files in the standard chart directory so that our application can work with the replica set we have just created. We will also create files to define ConfigMap and Secret objects for our application.

      First, create a new chart directory called nodeapp with the following command:

      • helm create nodeapp

      This will create a directory called nodeapp in your ~/node_project folder with the following resources:

      • A Chart.yaml file with basic information about your chart.
      • A values.yaml file that allows you to set specific parameter values, as you did with your MongoDB deployment.
      • A .helmignore file with file and directory patterns that will be ignored when packaging charts.
      • A templates/ directory with the template files that will generate Kubernetes manifests.
      • A templates/tests/ directory for test files.
      • A charts/ directory for any charts that this chart depends on.

      The first file we will modify out of these default files is values.yaml. Open that file now:

      • nano nodeapp/values.yaml

      The values that we will set here include:

      • The number of replicas.
      • The application image we want to use. In our case, this will be the node-replicas image we created in Step 1.
      • The ServiceType. In this case, we will specify LoadBalancer to create a point of access to our application for testing purposes. Because we are working with a DigitalOcean Kubernetes cluster, this will create a DigitalOcean Load Balancer when we deploy our chart. In production, you can configure your chart to use Ingress Resources and Ingress Controllers to route traffic to your Services.
      • The targetPort to specify the port on the Pod where our application will be exposed.

      We will not enter environment variables into this file. Instead, we will create templates for ConfigMap and Secret objects and add these values to our application Deployment manifest, located at ~/node_project/nodeapp/templates/deployment.yaml.

      Configure the following values in the values.yaml file:

      ~/node_project/nodeapp/values.yaml

      # Default values for nodeapp.
      # This is a YAML-formatted file.
      # Declare variables to be passed into your templates.
      
      replicaCount: 3
      
      image:
        repository: your_dockerhub_username/node-replicas
        tag: latest
        pullPolicy: IfNotPresent
      
      nameOverride: ""
      fullnameOverride: ""
      
      service:
        type: LoadBalancer
        port: 80
        targetPort: 8080
      ...
      

      Save and close the file when you are finished editing.

      Next, open a secret.yaml file in the nodeapp/templates directory:

      • nano nodeapp/templates/secret.yaml

      In this file, add values for your MONGO_USERNAME and MONGO_PASSWORD application constants. These are the constants that your application will expect to have access to at runtime, as specified in db.js, your database connection file. As you add the values for these constants, remember to use the base64-encoded values that you used earlier in Step 2 when creating your mongo-secret object. If you need to recreate those values, you can return to Step 2 and run the relevant commands again.

      Add the following code to the file:

      ~/node_project/nodeapp/templates/secret.yaml

      apiVersion: v1
      kind: Secret
      metadata:
        name: {{ .Release.Name }}-auth
      data:
        MONGO_USERNAME: your_encoded_username
        MONGO_PASSWORD: your_encoded_password
      

      The name of this Secret object will depend on the name of your Helm release, which you will specify when you deploy the application chart.

      Save and close the file when you are finished.

      Next, open a file to create a ConfigMap for your application:

      • nano nodeapp/templates/configmap.yaml

      In this file, we will define the remaining variables that our application expects: MONGO_HOSTNAME, MONGO_PORT, MONGO_DB, and MONGO_REPLICASET. Our MONGO_HOSTNAME variable will include the DNS entry for each instance in our replica set, since this is what the MongoDB connection URI requires.

      According to the Kubernetes documentation, when an application implements liveness and readiness checks, SRV records should be used when connecting to the Pods. As discussed in Step 3, our Pod SRV records follow this pattern: $(statefulset-name)-$(ordinal).$(service name).$(namespace).svc.cluster.local. Since our MongoDB StatefulSet implements liveness and readiness checks, we should use these stable identifiers when defining the values of the MONGO_HOSTNAME variable.

      Add the following code to the file to define the MONGO_HOSTNAME, MONGO_PORT, MONGO_DB, and MONGO_REPLICASET variables. You are free to use another name for your MONGO_DB database, but your MONGO_HOSTNAME and MONGO_REPLICASET values must be written as they appear here:

      ~/node_project/nodeapp/templates/configmap.yaml

      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: {{ .Release.Name }}-config
      data:
        MONGO_HOSTNAME: "mongo-mongodb-replicaset-0.mongo-mongodb-replicaset.default.svc.cluster.local,mongo-mongodb-replicaset-1.mongo-mongodb-replicaset.default.svc.cluster.local,mongo-mongodb-replicaset-2.mongo-mongodb-replicaset.default.svc.cluster.local"  
        MONGO_PORT: "27017"
        MONGO_DB: "sharkinfo"
        MONGO_REPLICASET: "db"
      

      Because we have already created the StatefulSet object and replica set, the hostnames that are listed here must be listed in your file exactly as they appear in this example. If you destroy these objects and rename your MongoDB Helm release, then you will need to revise the values included in this ConfigMap. The same applies for MONGO_REPLICASET, since we specified the replica set name with our MongoDB release.

      Also note that the values listed here are quoted. Quoting ensures that values such as the port number are treated as strings, which is what ConfigMap data fields and environment variables expect.

      Save and close the file when you are finished editing.

      With your chart parameter values defined and your Secret and ConfigMap manifests created, you can edit the application Deployment template to use your environment variables.

      Step 5 — Integrating Environment Variables into Your Helm Deployment

      With the files for our application Secret and ConfigMap in place, we will need to make sure that our application Deployment can use these values. We will also customize the liveness and readiness probes that are already defined in the Deployment manifest.

      Open the application Deployment template for editing:

      • nano nodeapp/templates/deployment.yaml

      Though this is a YAML file, Helm templates use a different syntax from standard Kubernetes YAML files in order to generate manifests. For more information about templates, see the Helm documentation.

      In the file, first add an env key to your application container specifications, below the imagePullPolicy key and above ports:

      ~/node_project/nodeapp/templates/deployment.yaml

      apiVersion: apps/v1
      kind: Deployment
      metadata:
      ...
        spec:
          containers:
            - name: {{ .Chart.Name }}
              image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
              imagePullPolicy: {{ .Values.image.pullPolicy }}
              env:
              ports:
      

      Next, add the following keys to the list of env variables:

      ~/node_project/nodeapp/templates/deployment.yaml

      apiVersion: apps/v1
      kind: Deployment
      metadata:
      ...
        spec:
          containers:
            - name: {{ .Chart.Name }}
              image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
              imagePullPolicy: {{ .Values.image.pullPolicy }}
              env:
              - name: MONGO_USERNAME
                valueFrom:
                  secretKeyRef:
                    key: MONGO_USERNAME
                    name: {{ .Release.Name }}-auth
              - name: MONGO_PASSWORD
                valueFrom:
                  secretKeyRef:
                    key: MONGO_PASSWORD
                    name: {{ .Release.Name }}-auth
              - name: MONGO_HOSTNAME
                valueFrom:
                  configMapKeyRef:
                    key: MONGO_HOSTNAME
                    name: {{ .Release.Name }}-config
              - name: MONGO_PORT
                valueFrom:
                  configMapKeyRef:
                    key: MONGO_PORT
                    name: {{ .Release.Name }}-config
              - name: MONGO_DB
                valueFrom:
                  configMapKeyRef:
                    key: MONGO_DB
                    name: {{ .Release.Name }}-config      
              - name: MONGO_REPLICASET
                valueFrom:
                  configMapKeyRef:
                    key: MONGO_REPLICASET
                    name: {{ .Release.Name }}-config        
      

      Each variable includes a reference to its value, defined either by a secretKeyRef key, in the case of Secret values, or configMapKeyRef for ConfigMap values. These keys point to the Secret and ConfigMap files we created in the previous Step.

      Next, under the ports key, modify the containerPort definition to specify the port on the container where our application will be exposed:

      ~/node_project/nodeapp/templates/deployment.yaml

      apiVersion: apps/v1
      kind: Deployment
      metadata:
      ...
        spec:
          containers:
          ...
            env:
          ...
            ports:
              - name: http
                containerPort: 8080
                protocol: TCP
            ...
      

      Next, let's modify the liveness and readiness checks that are included in this Deployment manifest by default. These checks ensure that our application Pods are running and ready to serve traffic:

      • Readiness probes assess whether or not a Pod is ready to serve traffic; until the checks succeed, the Pod is removed from Service load balancing and receives no requests.
      • Liveness probes check basic application behavior to determine whether or not the application in the container is running and behaving as expected. If a liveness probe fails, Kubernetes will restart the container.

      For more about both, see the relevant discussion in Architecting Applications for Kubernetes.

      In our case, we will build on the httpGet request that Helm has provided by default and test whether or not our application is accepting requests on the /sharks endpoint. The kubelet service will perform the probe by sending a GET request to the Node server running in the application Pod's container and listening on port 8080. If the status code for the response is greater than or equal to 200 and less than 400, the kubelet will conclude that the container is healthy. Otherwise, in the case of a 4xx or 5xx status, the kubelet will either stop traffic to the container, in the case of the readiness probe, or restart the container, in the case of the liveness probe.

      Add the following modification to the stated path for the liveness and readiness probes:

      ~/node_project/nodeapp/templates/deployment.yaml

      apiVersion: apps/v1
      kind: Deployment
      metadata:
      ...
        spec:
          containers:
          ...
            env:
          ...
            ports:
              - name: http
                containerPort: 8080
                protocol: TCP
            livenessProbe:
              httpGet:
                path: /sharks
                port: http
            readinessProbe:
              httpGet:
                path: /sharks
                port: http
      

      Save and close the file when you are finished editing.
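      Before installing, you may find it helpful to lint the chart to catch template or YAML mistakes early; this check is optional:

      • helm lint ./nodeapp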

      You are now ready to create your application release with Helm. Run the following helm install command, which includes the name of the release and the location of the chart directory:

      • helm install --name nodejs ./nodeapp

      Remember that you can run helm install with the --dry-run and --debug options first, as discussed in Step 3, to check the generated manifests for your release.

      Again, because we are not including the --namespace flag with helm install, our chart objects will be created in the default namespace.

      You will see the following output indicating that your release has been created:

      Output

      NAME:   nodejs
      LAST DEPLOYED: Wed Apr 17 18:10:29 2019
      NAMESPACE: default
      STATUS: DEPLOYED

      RESOURCES:
      ==> v1/ConfigMap
      NAME           DATA  AGE
      nodejs-config  4     1s

      ==> v1/Deployment
      NAME            READY  UP-TO-DATE  AVAILABLE  AGE
      nodejs-nodeapp  0/3    3           0          1s
      ...

      Again, the output will indicate the status of the release, along with information about the created objects and how you can interact with them.

      Check the status of your Pods:

      • kubectl get pods

      Output

      NAME                              READY   STATUS    RESTARTS   AGE
      mongo-mongodb-replicaset-0        1/1     Running   0          57m
      mongo-mongodb-replicaset-1        1/1     Running   0          56m
      mongo-mongodb-replicaset-2        1/1     Running   0          55m
      nodejs-nodeapp-577df49dcc-b5fq5   1/1     Running   0          117s
      nodejs-nodeapp-577df49dcc-bkk66   1/1     Running   0          117s
      nodejs-nodeapp-577df49dcc-lpmt2   1/1     Running   0          117s
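      If you would like to confirm that the Secret and ConfigMap values were injected into the application containers, you can print the MONGO-prefixed environment variables from one of the application Pods, substituting one of your own Pod names:

      • kubectl exec your_nodejs_pod -- env | grep MONGO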

      Once your Pods are up and running, check your Services:

      • kubectl get svc

      Output

      NAME                              TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
      kubernetes                        ClusterIP      10.245.0.1     <none>        443/TCP        96m
      mongo-mongodb-replicaset          ClusterIP      None           <none>        27017/TCP      58m
      mongo-mongodb-replicaset-client   ClusterIP      None           <none>        27017/TCP      58m
      nodejs-nodeapp                    LoadBalancer   10.245.33.46   your_lb_ip    80:31518/TCP   3m22s

      The EXTERNAL-IP associated with the nodejs-nodeapp Service is the IP address where you can access the application from outside of the cluster. If you see a <pending> status in the EXTERNAL-IP column, this means that your load balancer is still being created.
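      Rather than re-running the previous command, you can optionally watch the Service until the external IP appears:

      • kubectl get svc nodejs-nodeapp -w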

      Once you see an IP in that column, navigate to it in your browser: http://your_lb_ip.

      You should see the following landing page:

      Application Landing Page

      Now that your replicated application is working, let's add some test data to ensure that replication is working between members of the replica set.

      Step 6 — Testing MongoDB Replication

      With our application running and accessible through an external IP address, we can add some test data and ensure that it is being replicated between the members of our MongoDB replica set.

      First, make sure you have navigated your browser to the application landing page:

      Application Landing Page

      Click on the Get Shark Info button. You will see a page with an entry form where you can enter a shark name and a description of that shark's general character:

      Shark Info Form

      In the form, add an initial shark of your choosing. To demonstrate, we will add Megalodon Shark to the Shark Name field, and Ancient to the Shark Character field:

      Filled Shark Form

      Click on the Submit button. You will see a page with this shark information displayed back to you:

      Shark Output

      Now head back to the shark information form by clicking on Sharks in the top navigation bar:

      Shark Info Form

      Enter a new shark of your choosing. We'll go with Whale Shark and Large:

      Enter New Shark

      Once you click Submit, you will see that the new shark has been added to the shark collection in your database:

      Complete Shark Collection

      Let's check that the data we've entered has been replicated between the primary and secondary members of our replica set.

      Get a list of your Pods:

      • kubectl get pods

      Output

      NAME                              READY   STATUS    RESTARTS   AGE
      mongo-mongodb-replicaset-0        1/1     Running   0          74m
      mongo-mongodb-replicaset-1        1/1     Running   0          73m
      mongo-mongodb-replicaset-2        1/1     Running   0          72m
      nodejs-nodeapp-577df49dcc-b5fq5   1/1     Running   0          5m4s
      nodejs-nodeapp-577df49dcc-bkk66   1/1     Running   0          5m4s
      nodejs-nodeapp-577df49dcc-lpmt2   1/1     Running   0          5m4s

      To access the mongo shell on your Pods, you can use the kubectl exec command and the username you used to create your mongo-secret in Step 2. Access the mongo shell on the first Pod in the StatefulSet with the following command:

      • kubectl exec -it mongo-mongodb-replicaset-0 -- mongo -u your_database_username -p --authenticationDatabase admin

      When prompted, enter the password associated with this username:

      Output

      MongoDB shell version v4.1.9
      Enter password:

      You will be dropped into an administrative shell:

      Output

      MongoDB server version: 4.1.9
      Welcome to the MongoDB shell.
      ...
      db:PRIMARY>

      Though the prompt itself includes this information, you can manually check to see which replica set member is the primary with the rs.isMaster() method:

      • rs.isMaster()

      You will see output like the following, indicating the hostname of the primary:

      Output

      db:PRIMARY> rs.isMaster()
      {
              "hosts" : [
                      "mongo-mongodb-replicaset-0.mongo-mongodb-replicaset.default.svc.cluster.local:27017",
                      "mongo-mongodb-replicaset-1.mongo-mongodb-replicaset.default.svc.cluster.local:27017",
                      "mongo-mongodb-replicaset-2.mongo-mongodb-replicaset.default.svc.cluster.local:27017"
              ],
              ...
              "primary" : "mongo-mongodb-replicaset-0.mongo-mongodb-replicaset.default.svc.cluster.local:27017",
              ...

      Next, switch to your sharkinfo database:

      • use sharkinfo

      Output

      switched to db sharkinfo

      List the collections in the database:

      • show collections

      Output

      sharks

      Output the documents in the collection:

      • db.sharks.find()

      You will see the following output:

      Output

      { "_id" : ObjectId("5cb7702c9111a5451c6dc8bb"), "name" : "Megalodon Shark", "character" : "Ancient", "__v" : 0 } { "_id" : ObjectId("5cb77054fcdbf563f3b47365"), "name" : "Whale Shark", "character" : "Large", "__v" : 0 }

      Exit the MongoDB Shell:

      • exit

      Now that we have checked the data on our primary, let's check that it's being replicated to a secondary. kubectl exec into mongo-mongodb-replicaset-1 with the following command:

      • kubectl exec -it mongo-mongodb-replicaset-1 -- mongo -u your_database_username -p --authenticationDatabase admin

      Once in the administrative shell, we will need to use the db.setSlaveOk() method to permit read operations from the secondary instance, which we will do after switching databases.

      Switch to the sharkinfo database:

      • use sharkinfo

      Output

      switched to db sharkinfo

      Permit the read operation of the documents in the sharks collection:

      • db.setSlaveOk()

      Output the documents in the collection:

      • db.sharks.find()

      You should now see the same information that you saw when running this method on your primary instance:

      Output

      db:SECONDARY> db.sharks.find()
      { "_id" : ObjectId("5cb7702c9111a5451c6dc8bb"), "name" : "Megalodon Shark", "character" : "Ancient", "__v" : 0 }
      { "_id" : ObjectId("5cb77054fcdbf563f3b47365"), "name" : "Whale Shark", "character" : "Large", "__v" : 0 }

      This output confirms that your application data is being replicated between the members of your replica set.

      Conclusion

      You have now deployed a replicated, highly-available shark information application on a Kubernetes cluster using Helm charts. This demo application and the workflow outlined in this tutorial can act as a starting point as you build custom charts for your application and take advantage of Helm's stable repository and other chart repositories.

      As you move toward production, consider building on this setup, for example by routing external traffic with the Ingress Resources and Controllers mentioned in Step 4 rather than a LoadBalancer Service.

      To learn more about Helm, see An Introduction to Helm, the Package Manager for Kubernetes, How To Install Software on Kubernetes Clusters with the Helm Package Manager, and the Helm documentation.


