
      A Beginner's Guide to LXD: Setting Up an Apache Webserver In a Container


      Updated by Linode. Contributed by Simos Xenitellis.

      Access an Apache Web Server Inside a LXD Container

      What is LXD?

      LXD (pronounced “Lex-Dee”) is a system container manager built on top of LXC (Linux Containers) that is currently supported by Canonical. The goal of LXD is to provide an experience similar to a virtual machine but through containerization rather than hardware virtualization. Compared to Docker for delivering applications, LXD offers nearly full operating-system functionality with additional features such as snapshots, live migrations, and storage management.

      The main benefits of LXD are its support for high-density containers and the performance it delivers compared to virtual machines. A computer with 2GB RAM can adequately support half a dozen containers. In addition, LXD officially supports the container images of major Linux distributions. We can choose the Linux distribution and version to run in the container.

      This guide covers how to install and set up LXD 3 on a Linode and how to set up an Apache web server in a container.

      Note

      For simplicity, the term container is used throughout this guide to describe the LXD system containers.

      Before You Begin

      1. Complete the Getting Started guide. Select a Linode with at least 2GB of RAM, such as the Linode 2GB. Specify the Ubuntu 19.04 distribution. You may specify a different Linux distribution, as long as there is support for snap packages (snapd); see the More Information section for more details.

      2. This guide will use sudo wherever necessary. Follow the Securing Your Server guide to create a limited (non-root) user account, harden SSH access, and remove unnecessary network services.

      3. Update your system:

        sudo apt update && sudo apt upgrade
        

      Configure the Snap Package Support

      LXD is available as a Debian package in the long-term support (LTS) versions of Ubuntu, such as Ubuntu 18.04 LTS. For other versions of Ubuntu and other distributions, LXD is available as a snap package. Snap packages are universal packages because a single package file works on any supported Linux distribution. See the More Information section for more details on what a snap package is, which Linux distributions are supported, and how to set it up.

      1. Verify that snap support is installed correctly. The following command shows whether any snap packages are currently installed:

        snap list
        
          
        No snaps are installed yet. Try 'snap install hello-world'.
        
        
      2. View the details of the LXD snap package lxd. The output below shows that, currently, the latest version of LXD is 3.12 in the default stable channel. This channel is updated often with new features. There are also other channels such as the 3.0/stable channel which has the LTS LXD version (supported along with Ubuntu 18.04, until 2023) and the 2.0/stable channel (supported along with Ubuntu 16.04, until 2021). We will be using the latest version of LXD from the default stable channel.

        snap info lxd
        
          
        name:      lxd
        summary:   System container manager and API
        publisher: Canonical✓
        contact:   https://github.com/lxc/lxd/issues
        license:   Apache-2.0
        description: |
          **LXD is a system container manager**
        
          With LXD you can run hundreds of containers of a variety of Linux
          distributions, apply resource limits, pass in directories, USB devices
          or GPUs and setup any network and storage you want.
        
          LXD containers are lightweight, secure by default and a great
          alternative to running Linux virtual machines.
        
        
          **Run any Linux distribution you want**
        
          Pre-made images are available for Ubuntu, Alpine Linux, ArchLinux,
          CentOS, Debian, Fedora, Gentoo, OpenSUSE and more.
        
          A full list of available images can be [found
          here](https://images.linuxcontainers.org)
        
          Can't find the distribution you want? It's easy to make your own images
          too, either using our `distrobuilder` tool or by assembling your own image
          tarball by hand.
        
        
          **Containers at scale**
        
          LXD is network aware and all interactions go through a simple REST API,
          making it possible to remotely interact with containers on remote
          systems, copying and moving them as you wish.
        
          Want to go big? LXD also has built-in clustering support,
          letting you turn dozens of servers into one big LXD server.
        
        
          **Configuration options**
        
          Supported options for the LXD snap (`snap set lxd KEY=VALUE`):
           - criu.enable: Enable experimental live-migration support [default=false]
           - daemon.debug: Increases logging to debug level [default=false]
           - daemon.group: Group of users that can interact with LXD [default=lxd]
           - ceph.builtin: Use snap-specific ceph configuration [default=false]
           - openvswitch.builtin: Run a snap-specific OVS daemon [default=false]
        
          [Documentation](https://lxd.readthedocs.io)
        snap-id: J60k4JY0HppjwOjW8dZdYc8obXKxujRu
        channels:
          stable:        3.12        2019-04-16 (10601) 56MB -
          candidate:     3.12        2019-04-26 (10655) 56MB -
          beta:          ↑
          edge:          git-570aaa1 2019-04-27 (10674) 56MB -
          3.0/stable:    3.0.3       2018-11-26  (9663) 53MB -
          3.0/candidate: 3.0.3       2019-01-19  (9942) 53MB -
          3.0/beta:      ↑
          3.0/edge:      git-eaa62ce 2019-02-19 (10212) 53MB -
          2.0/stable:    2.0.11      2018-07-30  (8023) 28MB -
          2.0/candidate: 2.0.11      2018-07-27  (8023) 28MB -
          2.0/beta:      ↑
          2.0/edge:      git-c7c4cc8 2018-10-19  (9257) 26MB -
        
        
      3. Install the lxd snap package by running the following command:

        sudo snap install lxd
        
          
        lxd 3.12 from Canonical✓ installed
        
        

      You can verify that the snap package has been installed by running snap list again. The core snap package is a prerequisite for any system with snap package support. When you install your first snap package, core is installed and shared among all other snap packages that will get installed in the future.

          snap list
      
      
        
      Name  Version  Rev    Tracking  Publisher   Notes
      core  16-2.38  6673   stable    canonical✓  core
      lxd   3.12     10601  stable    canonical✓  -
      
      

      Initialize LXD

      1. Add your non-root Unix user to the lxd group:

        sudo usermod -a -G lxd username
        

        Note

        By adding the non-root Unix user account to the lxd group, you can run lxc commands without prepending sudo. Otherwise, you would need to prepend sudo to each lxc command.

      2. Start a new SSH session for the previous change to take effect. For example, log out and log in again.

      3. Verify the available free disk space:

        df -h /
        
          
        Filesystem      Size  Used Avail Use% Mounted on
        /dev/sda         49G  2.0G   45G   5% /
        
        

        In this case there is 45GB of free disk space. LXD requires at least 15GB of space for the storage needs of containers. We will allocate 15GB of space for LXD, leaving 30GB of free space for the needs of the server.

      4. Run lxd init to initialize LXD:

        sudo lxd init
        

        You will be prompted several times during the initialization process. Choose the defaults for all options.

          
        Would you like to use LXD clustering? (yes/no) [default=no]:
        Do you want to configure a new storage pool? (yes/no) [default=yes]:
        Name of the new storage pool [default=default]:
        Name of the storage backend to use (btrfs, ceph, dir, lvm, zfs) [default=zfs]:
        Create a new ZFS pool? (yes/no) [default=yes]:
        Would you like to use an existing block device? (yes/no) [default=no]:
        Size in GB of the new loop device (1GB minimum) [default=15GB]:
        Would you like to connect to a MAAS server? (yes/no) [default=no]:
        Would you like to create a new local network bridge? (yes/no) [default=yes]:
        What should the new bridge be called? [default=lxdbr0]:
        What IPv4 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]:
        What IPv6 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]:
        Would you like LXD to be available over the network? (yes/no) [default=no]:
        Would you like stale cached images to be updated automatically? (yes/no) [default=yes]
        Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]:
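
        If you would like to confirm what lxd init created, you can list the storage pools and network bridges with the standard LXD commands below; they should show the default pool and the lxdbr0 bridge chosen above:

          lxc storage list
          lxc network list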
        
        

      Apache Web Server with LXD

      This section shows how to create a container, install the Apache web server, and add an LXD proxy device to expose port 80 (HTTP) of the container to the internet.

      1. Launch a new container:

        lxc launch ubuntu:18.04 web
        
      2. Update the package list in the container.

        lxc exec web -- apt update
        
      3. Install Apache in the LXD container.

        lxc exec web -- apt install apache2
        
      4. Get a shell in the LXD container.

        lxc exec web -- sudo --user ubuntu --login
        
      5. Edit Apache's default web page to indicate that it is running inside a LXD container.

        sudo nano /var/www/html/index.html
        

        Change the line It works! (line number 224) to It works inside a LXD container!. Then, save and exit.

      6. Exit back to the host. We have made all the necessary changes to the container.

        exit
        
      7. Add a LXD proxy device that redirects connections from the internet to port 80 (HTTP) on the server to port 80 of this container.

        sudo lxc config device add web myport80 proxy listen=tcp:0.0.0.0:80 connect=tcp:127.0.0.1:80
        

        Note

        In recent versions of LXD, you need to specify an IP address (such as 127.0.0.1) instead of a hostname (such as localhost). If your container already has a proxy device that uses hostnames, you can edit the container configuration to replace the hostnames with IP addresses by running lxc config edit web.

      8. From your local computer, navigate to your Linode’s public IP address in a web browser. You should see the default Apache page (for a quick command-line check of the proxy device and the web server, see the commands after this list):

        Web page of Apache server running in a container
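
        As a quick command-line check of steps 7 and 8, you can inspect the proxy device with the standard lxc config device show command and request the page from the host itself; the curl test assumes curl is installed on the Linode:

          lxc config device show web
          curl http://127.0.0.1/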

      Common LXD Commands

      • List all containers:

        lxc list
        
          
        To start your first container, try: lxc launch ubuntu:18.04
        
        +------+-------+------+------+------+-----------+
        | NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
        +------+-------+------+------+------+-----------+
        
        
      • List all available repositories of container images:

        lxc remote list
        
          
        +-----------------+------------------------------------------+---------------+-------------+--------+--------+
        |      NAME       |                   URL                    |   PROTOCOL    |  AUTH TYPE  | PUBLIC | STATIC |
        +-----------------+------------------------------------------+---------------+-------------+--------+--------+
        | images          | https://images.linuxcontainers.org       | simplestreams | none        | YES    | NO     |
        +-----------------+------------------------------------------+---------------+-------------+--------+--------+
        | local (default) | unix://                                  | lxd           | file access | NO     | YES    |
        +-----------------+------------------------------------------+---------------+-------------+--------+--------+
        | ubuntu          | https://cloud-images.ubuntu.com/releases | simplestreams | none        | YES    | YES    |
        +-----------------+------------------------------------------+---------------+-------------+--------+--------+
        | ubuntu-daily    | https://cloud-images.ubuntu.com/daily    | simplestreams | none        | YES    | YES    |
        +-----------------+------------------------------------------+---------------+-------------+--------+--------+
        
        

        The repository ubuntu has container images of Ubuntu versions. The images repository has container images of a large number of different Linux distributions. The ubuntu-daily repository has daily container images that are intended for testing purposes. The local repository is the LXD server that we have just installed. It is not public and can be used to store your own container images.

      • List all available container images from a repository:

        lxc image list ubuntu:
        
          
        +------------------+--------------+--------+-----------------------------------------------+---------+----------+-------------------------------+
        |      ALIAS       | FINGERPRINT  | PUBLIC |                  DESCRIPTION                  |  ARCH   |   SIZE   |          UPLOAD DATE          |
        +------------------+--------------+--------+-----------------------------------------------+---------+----------+-------------------------------+
        | b (11 more)      | 5b72cf46f628 | yes    | ubuntu 18.04 LTS amd64 (release) (20190424)   | x86_64  | 180.37MB | Apr 24, 2019 at 12:00am (UTC) |
        +------------------+--------------+--------+-----------------------------------------------+---------+----------+-------------------------------+
        | c (5 more)       | 4716703f04fc | yes    | ubuntu 18.10 amd64 (release) (20190402)       | x86_64  | 313.29MB | Apr 2, 2019 at 12:00am (UTC)  |
        +------------------+--------------+--------+-----------------------------------------------+---------+----------+-------------------------------+
        | d (5 more)       | faef94acf5f9 | yes    | ubuntu 19.04 amd64 (release) (20190417)       | x86_64  | 322.56MB | Apr 17, 2019 at 12:00am (UTC) |
        +------------------+--------------+--------+-----------------------------------------------+---------+----------+-------------------------------+
        .....................................................................
        
        

        Note

        The first two columns for the alias and fingerprint provide an identifier that can be used to specify the container image when launching it.

        The output snippet shows the container images for Ubuntu versions 18.04 LTS, 18.10, and 19.04. When creating a container, we can just specify the short alias. For example, ubuntu:b means that the repository is ubuntu and the container image has the short alias b (for bionic, the codename of Ubuntu 18.04 LTS).

      • Get more information about a container image:

        lxc image info ubuntu:b
        
          
        Fingerprint: 5b72cf46f628b3d60f5d99af48633539b2916993c80fc5a2323d7d841f66afbe
        Size: 180.37MB
        Architecture: x86_64
        Public: yes
        Timestamps:
            Created: 2019/04/24 00:00 UTC
            Uploaded: 2019/04/24 00:00 UTC
            Expires: 2023/04/26 00:00 UTC
            Last used: never
        Properties:
            release: bionic
            version: 18.04
            architecture: amd64
            label: release
            serial: 20190424
            description: ubuntu 18.04 LTS amd64 (release) (20190424)
            os: ubuntu
        Aliases:
            - 18.04
            - 18.04/amd64
            - b
            - b/amd64
            - bionic
            - bionic/amd64
            - default
            - default/amd64
            - lts
            - lts/amd64
            - ubuntu
            - amd64
        Cached: no
        Auto update: disabled
        
        

        The output shows the details of the container image including all the available aliases. For Ubuntu 18.04 LTS, we can specify either b (for bionic, the codename of Ubuntu 18.04 LTS) or any other alias.

      • Launch a new container with the name mycontainer:

        lxc launch ubuntu:18.04 mycontainer
        
          
        Creating mycontainer
        Starting mycontainer
        
        
      • Check the list of containers to make sure the new container is running:

        lxc list
        
          
        +-------------+---------+-----------------------+---------------------------+------------+-----------+
        |    NAME     |  STATE  |         IPV4          |          IPV6             |    TYPE    | SNAPSHOTS |
        +-------------+---------+-----------------------+---------------------------+------------+-----------+
        | mycontainer | RUNNING | 10.142.148.244 (eth0) | fde5:5d27:...:1371 (eth0) | PERSISTENT | 0         |
        +-------------+---------+-----------------------+---------------------------+------------+-----------+
        
        
      • Execute basic commands in mycontainer:

        lxc exec mycontainer -- apt update
        lxc exec mycontainer -- apt upgrade
        

        Note

        The characters -- instruct the lxc command not to parse any more command-line parameters.

      • Open a shell session within mycontainer:

        lxc exec mycontainer -- sudo --login --user ubuntu
        
          
        To run a command as administrator (user "root"), use "sudo ".
        See "man sudo_root" for details.
        
        ubuntu@mycontainer:~$
        
        

        Note

        The Ubuntu container images include, by default, a non-root account with the username ubuntu. This account can use sudo and does not require a password to perform administrative tasks.

        The sudo command above opens a login shell for this existing ubuntu account.

      • View the container logs:

        lxc info mycontainer --show-log
        
      • Stop the container:

        lxc stop mycontainer
        
      • Remove the container:

        lxc delete mycontainer
        

        Note

        A container needs to be stopped before it can be deleted.
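
        If you would rather remove a running container in a single step, lxc delete also accepts a --force flag that stops the container first. Use it with care, since the container and its data are removed immediately:

        lxc delete --force mycontainer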

      Troubleshooting

      Error “unix.socket: connect: connection refused”

      When you run any lxc command, you get the following error:

          lxc list
      
      
        
      Error: Get http://unix.socket/1.0: dial unix /var/snap/lxd/common/lxd/unix.socket: connect: connection refused
      
      

      This happens when the LXD service is not currently running. By default, the LXD service starts automatically as soon as it is configured successfully. See Initialize LXD to configure LXD.

      Error “unix.socket: connect: permission denied”

      When you run any lxc command, you get the following error:

          lxc list
      
      
        
      Error: Get http://unix.socket/1.0: dial unix /var/snap/lxd/common/lxd/unix.socket: connect: permission denied
      
      

      This happens when your limited user account is not a member of the lxd group, or when you have not logged out and back in again for the new lxd group membership to take effect.

      If your user account is ubuntu, the following command shows whether you are a member of the lxd group:

          groups ubuntu
      
      
        
      ubuntu : ubuntu sudo lxd
      
      

      In this example, we are members of the lxd group and we just need to log out and log in again. If you are not a member of the lxd group, see Initialize LXD on how to make your limited account a member of the lxd group.
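
      If you have just added yourself to the lxd group and prefer not to log out, starting a shell with the new group applied is usually enough; newgrp is a standard utility, though whether the rest of your environment carries over depends on your shell setup:

          newgrp lxd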

      Next Steps

      If you plan to host a single website, then a single proxy device pointing to the website container will suffice. If you plan to host multiple websites, you can configure virtual hosts inside that same container. If instead you would like each website to run in its own container, you will need to set up a reverse proxy in a separate container. In that case, the proxy device would point to the reverse proxy container, which would then direct the connections to the individual website containers.

      More Information

      You may wish to consult the following resources for additional information on this topic. While these are provided in the hope that they will be useful, please note that we cannot vouch for the accuracy or timeliness of externally hosted materials.


      This guide is published under a CC BY-ND 4.0 license.




      How to Use Ansible: A Reference Guide


      Ansible Cheat Sheet

      Introduction

      Ansible is a modern configuration management tool that facilitates the task of setting up and maintaining remote servers.

      This cheat sheet-style guide provides a quick reference to commands and practices commonly used when working with Ansible. For an overview of Ansible and how to install and configure it, please check our guide on how to install and configure Ansible on Ubuntu 18.04.

      How to Use This Guide:

      • This guide is in cheat sheet format with self-contained command-line snippets.
      • Jump to any section that is relevant to the task you are trying to complete.
      • When you see highlighted text in this guide’s commands, keep in mind that this text should refer to hosts, usernames and IP addresses from your own inventory.

      Ansible Glossary

      The following Ansible-specific terms are largely used throughout this guide:

      • Control Machine / Node: a system where Ansible is installed and configured to connect and execute commands on nodes.
      • Node: a server controlled by Ansible.
      • Inventory File: a file that contains information about the servers Ansible controls, typically located at /etc/ansible/hosts.
      • Playbook: a file containing a series of tasks to be executed on a remote server.
      • Role: a collection of playbooks and other files that are relevant to a goal such as installing a web server.
      • Play: a full Ansible run. A play can have several playbooks and roles, included from a single playbook that acts as entry point.

      If you’d like to practice the commands used in this guide with a working Ansible playbook, you can use this playbook from our guide on Automating Initial Server Setup with Ansible on Ubuntu 18.04. You’ll need at least one remote server to use as node.

      Testing Connectivity to Nodes

      To test that Ansible is able to connect and run commands and playbooks on your nodes, you can use the following command:

      • ansible all -m ping

      The ping module will test if you have valid credentials for connecting to the nodes defined in your inventory file, in addition to testing if Ansible is able to run Python scripts on the remote server. A pong reply back means Ansible is ready to run commands and playbooks on that node.

      Connecting as a Different User

      By default, Ansible tries to connect to the nodes as your current system user, using its corresponding SSH keypair. To connect as a different user, append the -u flag and the name of the intended user to the command:

      • ansible all -m ping -u sammy

      The same is valid for ansible-playbook:

      • ansible-playbook myplaybook.yml -u sammy

      Using a Custom SSH Key

      If you're using a custom SSH key to connect to the remote servers, you can provide it at execution time with the --private-key option:

      • ansible all -m ping --private-key=~/.ssh/custom_id

      This option is also valid for ansible-playbook:

      • ansible-playbook myplaybook.yml --private-key=~/.ssh/custom_id

      Using Password-Based Authentication

      If you need to use password-based authentication in order to connect to the nodes, you need to append the option --ask-pass to your Ansible command.

      This will make Ansible prompt you for the password of the user on the remote server that you're attempting to connect as:

      • ansible all -m ping --ask-pass

      This option is also valid for ansible-playbook:

      • ansible-playbook myplaybook.yml --ask-pass

      Providing the sudo Password

      If the remote user needs to provide a password in order to run sudo commands, you can include the option --ask-become-pass in your Ansible command. This will prompt you for the remote user's sudo password:

      • ansible all -m ping --ask-become-pass

      This option is also valid for ansible-playbook:

      • ansible-playbook myplaybook.yml --ask-become-pass

      Using a Custom Inventory File

      The default inventory file is typically located at /etc/ansible/hosts, but you can also use the -i option to point to custom inventory files when running Ansible commands and playbooks. This is useful for setting up per-project inventories that can be included in version control systems such as Git:

      • ansible all -m ping -i my_custom_inventory

      The same option is valid for ansible-playbook:

      • ansible-playbook myplaybook.yml -i my_custom_inventory
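
      For reference, a custom inventory can be a simple text file that lists your hosts, optionally organized into groups. The snippet below is a hypothetical sketch of what my_custom_inventory might contain; the [webservers] group name and the 203.0.113.x addresses are placeholders for your own nodes:

      [webservers]
      203.0.113.10
      203.0.113.11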

      Using a Dynamic Inventory File

      Ansible supports inventory scripts for building dynamic inventory files. This is useful if your inventory fluctuates, with servers being created and destroyed often.

      You can find a number of open source inventory scripts on the official Ansible GitHub repository. After downloading the desired script to your Ansible control machine and setting up any required information — such as API credentials — you can use the executable as custom inventory with any Ansible command that supports this option.

      The following command uses Ansible's DigitalOcean inventory script with a ping command to check connectivity to all current active servers:

      • ansible all -m ping -i digital_ocean.py

      For more details on how to use dynamic inventory files, please refer to the official Ansible documentation.

      Running ad-hoc Commands

      To execute any command on a node, use the -a option followed by the command you want to run, in quotes.

      This will execute uname -a on all the nodes in your inventory:

      • ansible all -a "uname -a"

      It is also possible to run Ansible modules with the option -m. The following command would install the package vim on server1 from your inventory:

      • ansible server1 -m apt -a "name=vim"
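
      Installing a package usually requires root privileges on the node, so in practice you may need to combine the module call with privilege escalation. The --become flag (together with --ask-become-pass, covered earlier, if the remote user needs a sudo password) is the standard way to do this:

      • ansible server1 -m apt -a "name=vim" --become --ask-become-pass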

      Before making changes to your nodes, you can conduct a dry run to predict how the servers would be affected by your command. This can be done by including the --check option:

      • ansible server1 -m apt -a "name=vim" --check

      Running Playbooks

      To run a playbook and execute all the tasks defined within it, use the ansible-playbook command:

      • ansible-playbook myplaybook.yml

      To overwrite the default hosts option in the playbook and limit execution to a certain group or host, include the option -l in your command:

      • ansible-playbook -l server1 myplaybook.yml

      Getting Information about a Play

      The option --list-tasks is used to list all tasks that would be executed by a play without making any changes to the remote servers:

      • ansible-playbook myplaybook.yml --list-tasks

      Similarly, it is possible to list all hosts that would be affected by a play, without running any tasks on the remote servers:

      • ansible-playbook myplaybook.yml --list-hosts

      You can use tags to limit the execution of a play. To list all tags available in a play, use the option --list-tags:

      • ansible-playbook myplaybook.yml --list-tags

      Controlling Playbook Execution

      You can use the option --start-at-task to define a new entry point for your playbook. Ansible will then skip anything that comes before the specified task, executing the remainder of the play from that point on. This option requires a valid task name as argument:

      • ansible-playbook myplaybook.yml --start-at-task="Set Up Nginx"

      To only execute tasks associated with specific tags, you can use the option --tags. For instance, if you'd like to only execute tasks tagged as nginx or mysql, you can use:

      • ansible-playbook myplaybook.yml --tags=mysql,nginx

      If you want to skip all tasks that are under specific tags, use --skip-tags. The following command would execute myplaybook.yml, skipping all tasks tagged as mysql:

      • ansible-playbook myplaybook.yml --skip-tags=mysql

      Using Ansible Vault to Store Sensitive Data

      If your Ansible playbooks deal with sensitive data like passwords, API keys, and credentials, it is important to keep that data safe by using an encryption mechanism. Ansible provides ansible-vault to encrypt files and variables.

      Even though it is possible to encrypt any Ansible data file as well as binary files, it is more common to use ansible-vault to encrypt variable files containing sensitive data. After encrypting a file with this tool, you'll only be able to execute, edit or view its contents by providing the relevant password defined when you first encrypted the file.

      Creating a New Encrypted File

      You can create a new encrypted Ansible file with:

      • ansible-vault create credentials.yml

      This command will perform the following actions:

      • First, it will prompt you to enter a new password. You'll need to provide this password whenever you access the file contents, whether it's for editing, viewing, or just running playbooks or commands using those values.
      • Next, it will open your default command-line editor so you can populate the file with the desired contents.
      • Finally, when you're done editing, ansible-vault will save the file as encrypted data.

      Encrypting an Existing Ansible File

      To encrypt an existing Ansible file, you can use the following syntax:

      • ansible-vault encrypt credentials.yml

      This will prompt you for a password that you'll need to enter whenever you access the file credentials.yml.

      Viewing the Contents of an Encrypted File

      If you want to view the contents of a file that was previously encrypted with ansible-vault and you don't need to change its contents, you can use:

      • ansible-vault view credentials.yml

      This will prompt you to provide the password you selected when you first encrypted the file with ansible-vault.

      Editing an Encrypted File

      To edit the contents of a file that was previously encrypted with Ansible Vault, run:

      • ansible-vault edit credentials.yml

      This will prompt you to provide the password you chose when first encrypting the file credentials.yml with ansible-vault. After password validation, your default command-line editor will open with the unencrypted contents of the file, allowing you to make your changes. When finished, you can save and close the file as you would normally, and the updated contents will be saved as encrypted data.

      Decrypting Encrypted Files

      If you wish to permanently revert a file that was previously encrypted with ansible-vault to its unencrypted version, you can do so with this syntax:

      • ansible-vault decrypt credentials.yml

      This will prompt you to provide the same password used when first encrypting the file credentials.yml with ansible-vault. After password validation, the file contents will be saved to the disk as unencrypted data.

      Using Multiple Vault Passwords

      Ansible supports multiple vault passwords grouped by different vault IDs. This is useful if you want to have dedicated vault passwords for different environments, such as development, testing, and production environments.

      To create a new encrypted file using a custom vault ID, include the --vault-id option along with a label and the location where ansible-vault can find the password for that vault. The label can be any identifier, and the location can either be prompt, meaning that the command should prompt you to enter a password, or a valid path to a password file.

      • ansible-vault create --vault-id dev@prompt credentials_dev.yml

      This will create a new vault ID named dev that uses prompt as password source. By combining this method with group variable files, you'll be able to have separate ansible vaults for each application environment:

      • ansible-vault create --vault-id prod@prompt credentials_prod.yml

      We used dev and prod as vault IDs to demonstrate how you can create separate vaults per environment, but you can create as many vaults as you want, and you can use any identifier of your choice as vault ID.

      Now to view, edit, or decrypt these files, you'll need to provide the same vault ID and password source along with the ansible-vault command:

      • ansible-vault edit credentials_dev.yml --vault-id dev@prompt

      Using a Password File

      If you need to automate the process of provisioning servers with Ansible using a third-party tool, you'll need a way to provide the vault password without being prompted for it. You can do that by using a password file with ansible-vault.

      A password file can be a plain text file or an executable script. If the file is an executable script, the output produced by this script will be used as the vault password. Otherwise, the raw contents of the file will be used as vault password.
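
      As a minimal sketch of the executable variant, the script below just prints a hard-coded value to standard output, which is what ansible-vault reads as the vault password; in a real setup you would replace the echo with a lookup against a secrets manager and make the file executable with chmod +x:

      #!/bin/sh
      # Hypothetical vault password script: whatever this prints on stdout
      # becomes the vault password. Replace the echo with a call to your
      # secret store in real use.
      echo 'example-vault-password'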

      To use a password file with ansible-vault, you need to provide the path to a password file when running any of the vault commands:

      • ansible-vault create --vault-id dev@path/to/passfile credentials_dev.yml

      Ansible doesn't make a distinction between content that was encrypted using prompt or a password file as password source, as long as the input password is the same. In practical terms, this means it is OK to encrypt a file using prompt and then later use a password file to store the same password used with the prompt method. The opposite is also true: you can encrypt content using a password file and later use the prompt method, providing the same password when prompted by Ansible.

      For extended flexibility and security, instead of having your vault password stored in a plain text file, you can use a Python script to obtain the password from other sources. The official Ansible repository contains a few examples of vault scripts that you can use for reference when creating a custom script that suits the particular needs of your project.

      Running a Playbook with Data Encrypted via Ansible Vault

      Whenever you run a playbook that uses data previously encrypted via ansible-vault, you'll need to provide the vault password to your playbook command.

      If you used default options and the prompt password source when encrypting the data used in this playbook, you can use the option --ask-vault-pass to make Ansible prompt you for the password:

      • ansible-playbook myplaybook.yml --ask-vault-pass

      If you used a password file instead of prompting for the password, you should use the option --vault-password-file instead:

      • ansible-playbook myplaybook.yml --vault-password-file my_vault_password.py

      If you're using data encrypted under a vault ID, you'll need to provide the same vault ID and password source you used when first encrypting the data:

      • ansible-playbook myplaybook.yml --vault-id dev@prompt

      If using a password file with your vault ID, you should provide the label followed by the full path to the password file as password source:

      • ansible-playbook myplaybook.yml --vault-id dev@vault_password.py

      If your play uses multiple vaults, you should provide a --vault-id parameter for each of them, in no particular order:

      • ansible-playbook myplaybook.yml --vault-id dev@vault_password.py --vault-id test@prompt --vault-id ci@prompt

      Debugging

      If you run into errors while executing Ansible commands and playbooks, it's a good idea to increase output verbosity in order to get more information about the problem. You can do that by including the -v option in the command:

      • ansible-playbook myplaybook.yml -v

      If you need more detail, you can use -vvv to further increase the verbosity of the output. If you're unable to connect to the remote nodes via Ansible, use -vvvv to get connection debugging information:

      • ansible-playbook myplaybook.yml -vvvv

      Conclusion

      This guide covers some of the most common Ansible commands you may use when provisioning servers, such as how to execute remote commands on your nodes and how to run playbooks using a variety of custom settings.

      There are other command variations and flags that you may find useful for your Ansible workflow. To get an overview of all available options, you can use the help command:

      • ansible --help
      • ansible-playbook --help

      If you want a more comprehensive view of Ansible and all its available commands and features, please refer to the official Ansible documentation.




      A Beginner's Guide to Kubernetes


      Updated by Linode. Contributed by Linode.

      Kubernetes, often referred to as k8s, is an open source container orchestration system that helps deploy and manage containerized applications. Developed by Google starting in 2014 and written in the Go language, Kubernetes is quickly becoming the standard way to architect horizontally-scalable applications. This guide will explain the major parts and concepts of Kubernetes.

      Containers

      Kubernetes is a container orchestration tool and, therefore, needs a container runtime installed to work. In practice, the default container runtime for Kubernetes is Docker, though other runtimes like rkt and LXD will also work. With the advent of the Container Runtime Interface (CRI), which hopes to standardize the way Kubernetes interacts with containers, other options like containerd, cri-o, and Frakti have also become available. This guide assumes you have a working knowledge of containers and the examples will all use Docker as the container runtime.

      Kubernetes API

      Kubernetes is built around a robust RESTful API. Every action taken in Kubernetes, be it inter-component communication or user command, interacts in some fashion with the Kubernetes API. The goal of the API is to help facilitate the desired state of the Kubernetes cluster. If you want X instances of your application running and have Y currently active, the API will take the required steps to get to X, whether this means creating or destroying resources. To create this desired state, you create objects, which are normally represented by YAML files called manifests, and apply them through the command line with the kubectl tool.

      kubectl

      kubectl is a command line tool used to interact with the Kubernetes cluster. It offers a host of features, including the ability to create, stop, and delete resources, describe active resources, and auto scale resources. For more information on the types of commands and resources you can use with kubectl, consult the Kubernetes kubectl documentation.
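
      For example, assuming kubectl is already installed and configured to talk to a running cluster, the following standard commands are a quick way to confirm connectivity and list the cluster's worker servers:

      kubectl cluster-info
      kubectl get nodes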

      Kubernetes Master, Nodes, and Control Plane

      At the highest level of Kubernetes, there exist two kinds of servers, a Master and a Node. These servers can be Linodes, VMs, or physical servers. Together, these servers form a cluster.

      Nodes

      Kubernetes Nodes are worker servers that run your application. The number of Nodes is determined by the user, who also creates them. In addition to running your application, each Node runs two processes:

      • kubelet receives descriptions of the desired state of a Pod from the API server, and ensures the Pod is healthy, and running on the Node.
      • kube-proxy is a networking proxy that proxies the UDP, TCP, and SCTP networking of each Node, and provides load balancing. This is only used to connect to Services.

      Kubernetes Master

      The Kubernetes Master is normally a separate server responsible for maintaining the desired state of the cluster. It does this by telling the Nodes how many instances of your application to run, and where. The Kubernetes Master runs three processes:

      • kube-apiserver is the front end for the Kubernetes API server.
      • kube-controller-manager is a daemon that manages the Kubernetes control loop. For more on Controllers, see the Controllers section.
      • kube-scheduler is a function that looks for newly created Pods that have no Nodes, and assigns them a Node based on a host of requirements. For more information on kube-scheduler, consult the Kubernetes kube-scheduler documentation.

      Additionally, the Kubernetes Master runs the database etcd. Etcd is a highly available key-value store that provides the backend database for Kubernetes.

      Together, kube-apiserver, kube-controller-manager, kube-scheduler, and etcd form what is known as the control plane. The control plane is responsible for making decisions about the cluster, and pushing it toward the desired state.

      Kubernetes Objects

      In Kubernetes, there are a number of objects that are abstractions of your Kubernetes system’s desired state. These objects represent your application, its networking, and disk resources – all of which together form your application.

      Pods

      In Kubernetes, all containers exist within Pods. Pods are the smallest unit of the Kubernetes architecture, and can be viewed as a kind of wrapper for your container. Each Pod is given its own IP address with which it can interact with other Pods within the cluster.

      Usually, a Pod contains only one container, but a Pod can contain multiple containers if those containers need to share resources. If there is more than one container in a Pod, these containers can communicate with one another via localhost.

      Pods in Kubernetes are “mortal,” which means that they are created and destroyed depending on the needs of the application. For instance, you might have a web app backend that sees a spike in CPU usage. This might cause the cluster to scale up the number of backend Pods from two to ten, in which case eight new Pods would be created. Once the traffic subsides, the Pods might scale back to two, in which case eight Pods would be destroyed.

      It is important to note that Pods are destroyed without respect to which Pod was created first. And, while each Pod has its own IP address, this IP address will only be available for the life-cycle of the Pod.

      Below is an example of a Pod manifest:

      my-apache-pod.yaml
      
      apiVersion: v1
      kind: Pod
      metadata:
       name: apache-pod
       labels:
         app: web
      spec:
        containers:
        - name: apache-container
          image: httpd

      Each manifest has four main parts:

      • The version of the API in use
      • The kind of resource you’d like to define
      • Metadata about the resource
      • Though not required by all objects, a spec which describes the desired behavior of the resource is necessary for most objects and controllers.

      In the case of this example, the API in use is v1, and the kind is a Pod. The metadata field is used for applying a name, labels, and annotations. Names are used to differentiate resources, while labels are used to group like resources. Labels will come into play more when defining Services and Deployments. Annotations are for attaching arbitrary data to the resource.

      The spec is where the desired state of the resource is defined. In this case, a Pod with a single Apache container is desired, so the containers field is supplied with a name, ‘apache-container’, and an image, the latest version of Apache. The image is pulled from Docker Hub, as that is the default container registry for Kubernetes.

      For more information on the type of fields you can supply in a Pod manifest, refer to the Kubernetes Pod API documentation.

      Now that you have the manifest, you can create the Pod using the create command:

      kubectl create -f my-apache-pod.yaml
      

      To view a list of your pods, use the get pods command:

      kubectl get pods
      

      You should see output like the following:

      NAME         READY   STATUS    RESTARTS   AGE
      apache-pod   1/1     Running   0          16s
      

      To quickly view which Node the Pod exists on, issue the get pods command with the -o=wide flag:

      kubectl get pods -o=wide
      

      To retrieve information about the Pod, issue the describe command:

      kubectl describe pod apache-pod
      

      You should see output like the following:

      ...
      Events:
      Type    Reason     Age    From                       Message
      ----    ------     ----   ----                       -------
      Normal  Scheduled  2m38s  default-scheduler          Successfully assigned default/apache-pod to mycluster-node-1
      Normal  Pulling    2m36s  kubelet, mycluster-node-1  pulling image "httpd"
      Normal  Pulled     2m23s  kubelet, mycluster-node-1  Successfully pulled image "httpd"
      Normal  Created    2m22s  kubelet, mycluster-node-1  Created container
      Normal  Started    2m22s  kubelet, mycluster-node-1  Started container
      

      To delete the Pod, issue the delete command:

      kubectl delete pod apache-pod
      

      Services

      Services group identical Pods together to provide a consistent means of accessing them. For instance, you might have three Pods that are all serving a website, and all of those Pods need to be accessible on port 80. A Service can ensure that all of the Pods are accessible at that port, and can load balance traffic between those Pods. Additionally, a Service can allow your application to be accessible from the internet. Each Service is given an IP address and a corresponding local DNS entry. Additionally, Services exist across Nodes. If you have two replica Pods on one Node and an additional replica Pod on another Node, the service can include all three Pods. There are four types of Service:

      • ClusterIP: Exposes the Service internally to the cluster. This is the default setting for a Service.
      • NodePort: Exposes the Service to the internet from the IP address of the Node at the specified port number. You can only use ports in the 30000-32767 range.
      • LoadBalancer: This will create a load balancer assigned to a fixed IP address in the cloud, so long as the cloud provider supports it. In the case of Linode, this is the responsibility of the Linode Cloud Controller Manager, which will create a NodeBalancer for the cluster. This is the best way to expose your cluster to the internet.
      • ExternalName: Maps the service to a DNS name by returning a CNAME record redirect. ExternalName is good for directing traffic to outside resources, such as a database that is hosted on another cloud.

      Below is an example of a Service manifest:

      my-apache-service.yaml
      
      apiVersion: v1
      kind: Service
      metadata:
        name: apache-service
        labels:
          app: web
      spec:
        type: NodePort
        ports:
        - port: 80
          targetPort: 80
          nodePort: 30020
        selector:
          app: web

      The above example Service uses the v1 API, and its kind is Service. Like the Pod example in the previous section, this manifest has a name and a label. Unlike the Pod example, this spec uses the ports field to define the exposed port on the container (port), and the target port on the Pod (targetPort). The type NodePort unlocks the use of the nodePort field, which allows traffic on the host Node at that port. Lastly, the selector field is used to target only the Pods that have been assigned the app: web label.

      For more information on Services, visit the Kubernetes Service API documentation.

      To create the Service from the YAML file, issue the create command:

      kubectl create -f my-apache-service.yaml
      

      To view a list of running services, issue the get services command:

      kubectl get services
      

      You should see output like the following:

      NAME             TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
      apache-service   NodePort    10.99.57.13   <none>        80:30020/TCP   54s
      kubernetes       ClusterIP   10.96.0.1     <none>        443/TCP        46h
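
      Because the Service type is NodePort with nodePort: 30020, the Apache Pods should now be reachable on port 30020 of any Node's IP address. As a quick check from a machine that can reach a Node, you can request the page with curl; the address below is a placeholder for one of your Nodes' public IP addresses:

      curl http://<node-ip-address>:30020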
      

      To retrieve more information about your Service, issue the describe command:

      kubectl describe service apache-service
      

      To delete the Service, issue the delete command:

      kubectl delete service apache-service
      

      Volumes

      A Volume in Kubernetes is a way to share file storage between containers in a Pod. Kubernetes Volumes differ from Docker volumes because they exist inside the Pod rather than inside the container. When a container is restarted the Volume persists. Note, however, that these Volumes are still tied to the lifecycle of the Pod, so if the Pod is destroyed the Volume will be destroyed with it.

      Linode also offers a Container Storage Interface (CSI) driver that allows the cluster to persist data on a Block Storage volume.

      Below is an example of how to create and use a Volume by creating a Pod manifest:

      my-apache-pod-with-volume.yaml
      
      apiVersion: v1
      kind: Pod
      metadata:
        name: apache-with-volume
      spec:
        volumes:
        - name: apache-storage-volume
          emptyDir: {}
      
        containers:
        - name: apache-container
          image: httpd
          volumeMounts:
          - name: apache-storage-volume
            mountPath: /data/apache-data

      A Volume has two unique aspects to its definition. In this example, the first aspect is the volumes block that defines the type of Volume you want to create, which in this case is a simple empty directory (emptyDir). The second aspect is the volumeMounts field within the container’s spec. This field is given the name of the Volume you are creating and a mount path within the container.
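
      If you would like to try this out, you can create the Pod from the manifest above (saved as my-apache-pod-with-volume.yaml) and confirm that the mount point exists inside the container:

      kubectl create -f my-apache-pod-with-volume.yaml
      kubectl exec apache-with-volume -- ls -ld /data/apache-data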

      There are a number of different Volume types you could create in addition to emptyDir depending on your cloud host. For more information on Volume types, visit the Kubernetes Volumes API documentation.

      Namespaces

      Namespaces are virtual clusters that exist within the Kubernetes cluster that help to group and organize objects. Every cluster has at least three namespaces: default, kube-system, and kube-public. When interacting with the cluster it is important to know which Namespace the object you are looking for is in, as many commands will default to only showing you what exists in the default namespace. Resources created without an explicit namespace will be added to the default namespace.

      Namespaces consist of alphanumeric characters, dashes (-), and periods (.).

      Here is an example of how to define a Namespace with a manifest:

      my-namespace.yaml
      
      apiVersion: v1
      kind: Namespace
      metadata:
        name: my-app

      To create the Namespace, issue the create command:

      kubectl create -f my-namespace.yaml
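
      To confirm that the Namespace was created, list all Namespaces in the cluster:

      kubectl get namespaces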
      

      Below is an example of a Pod with a Namespace:

      my-apache-pod-with-namespace.yaml
      
      apiVersion: v1
      kind: Pod
      metadata:
        name: apache-pod
        labels:
          app: web
        namespace: my-app
      spec:
        containers:
        - name: apache-container
          image: httpd

      To retrieve resources in a certain Namespace, use the -n flag.

      kubectl get pods -n my-app
      

      You should see a list of Pods within your namespace:

      NAME         READY   STATUS    RESTARTS   AGE
      apache-pod   1/1     Running   0          7s
      

      To view Pods in all Namespaces, use the --all-namespaces flag.

      kubectl get pods --all-namespaces
      

      To delete a Namespace, issue the delete namespace command. Note that this will delete all resources within that Namespace:

      kubectl delete namespace my-app
      

      For more information on Namespaces, visit the Kubernetes Namespaces API documentation

      Controllers

      A Controller is a control loop that continuously watches the Kubernetes API and tries to manage the desired state of certain aspects of the cluster. There are a number of controllers. Below is a short reference of the most popular controllers you might interact with.

      ReplicaSets

      As has been mentioned, Kubernetes allows an application to scale horizontally. A ReplicaSet is one of the controllers responsible for keeping a given number of replica Pods running. If one Pod goes down in a ReplicaSet, another will be created to replace it. In this way, Kubernetes is self-healing. However, for most use cases it is recommended to use a Deployment instead of a ReplicaSet.

      Below is an example of a ReplicaSet:

      my-apache-replicaset.yaml
      
      apiVersion: apps/v1
      kind: ReplicaSet
      metadata:
        name: apache-replicaset
        labels:
          app: web
      spec:
        replicas: 5
        selector:
          matchLabels:
            app: web
        template:
          metadata:
            labels:
              app: web
          spec:
            containers:
            - name: apache-container
              image: httpd

      There are three main things to note in this ReplicaSet. The first is the apiVersion, which is apps/v1. This differs from the previous examples, which were all apiVersion: v1, because ReplicaSets do not exist in the v1 core. They instead reside in the apps group of v1. The second and third things to note are the replicas field and the selector field. The replicas field defines how many replica Pods you want to be running at any given time. The selector field defines which Pods, matched by their label, will be controlled by the ReplicaSet.
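
      As with the earlier examples, create the ReplicaSet from the manifest (saved here as my-apache-replicaset.yaml) by issuing the create command:

      kubectl create -f my-apache-replicaset.yaml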

      To view your ReplicaSets, issue the get replicasets command:

      kubectl get replicasets
      

      You should see output like the following:

      NAME                DESIRED   CURRENT   READY   AGE
      apache-replicaset   5         5         0       5s
      

      This output shows that of the five desired replicas, five are currently active, but none of them are ready yet. This is because the Pods are still booting up. If you issue the command again, you will see that all five have become ready:

      NAME                DESIRED   CURRENT   READY   AGE
      apache-replicaset   5         5         5       86s
      

      You can view the Pods the ReplicaSet created by issuing the get pods command:

      NAME                      READY   STATUS    RESTARTS   AGE
      apache-replicaset-5rsx2   1/1     Running   0          31s
      apache-replicaset-8n52c   1/1     Running   0          31s
      apache-replicaset-jcgn8   1/1     Running   0          31s
      apache-replicaset-sj422   1/1     Running   0          31s
      apache-replicaset-z8g76   1/1     Running   0          31s
      

      To delete a ReplicaSet, issue the delete replicaset command:

      kubectl delete replicaset apache-replicaset
      

      If you issue the get pods command, you will see that the Pods the ReplicaSet created are in the process of terminating:

      NAME                      READY   STATUS        RESTARTS   AGE
      apache-replicaset-bm2pn   0/1     Terminating   0          3m54s


      In the above example, four of the Pods have already terminated, and one is in the process of terminating.

      For more information on ReplicaSets, view the Kubernetes ReplicaSets API documentation.

      Deployments

      A Deployment can manage a ReplicaSet, so it shares the ability to keep a defined number of replica pods up and running. A Deployment can also update those Pods to resemble the desired state by means of rolling updates. For example, if you wanted to update a container image to a newer version, you would create a Deployment, and the controller would update the container images one by one until the desired state is achieved. This ensures that there is no downtime when updating or altering your Pods.

      Below is an example of a Deployment:

      my-apache-deployment.yaml
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: apache-deployment
        labels:
          app: web
      spec:
        replicas: 5
        selector:
          matchLabels:
            app: web
        template:
          metadata:
            labels:
              app: web
          spec:
            containers:
            - name: apache-container
              image: httpd:2.4.35

      Aside from the pinned image tag, the only noticeable difference between this Deployment and the example given in the ReplicaSet section is the kind. In this example, the Deployment initially installs Apache 2.4.35. If you wanted to update that image to Apache 2.4.38, you would issue the following command:

      kubectl set image deployment/apache-deployment apache-container=httpd:2.4.38 --record
      

      You’ll see a confirmation that the image has been updated:

      deployment.apps/apache-deployment image updated
      

      To verify for yourself that the image has been updated, grab one of the Pod names from the get pods list:

      kubectl get pods
      
      NAME                                 READY   STATUS    RESTARTS   AGE
      apache-deployment-574c8c4874-8zwgl   1/1     Running   0          8m36s
      apache-deployment-574c8c4874-9pr5j   1/1     Running   0          8m36s
      apache-deployment-574c8c4874-fbs46   1/1     Running   0          8m34s
      apache-deployment-574c8c4874-nn7dl   1/1     Running   0          8m36s
      apache-deployment-574c8c4874-pndgp   1/1     Running   0          8m33s
      

      Issue the describe command to view all of the available details of the Pod:

      kubectl describe pod apache-deployment-574c8c4874-pndgp
      

      You’ll see a long list of details, which includes the container image:

      ....
      
      Containers:
        apache-container:
          Container ID:   docker://d7a65e7993ab5bae284f07f59c3ed422222100833b2769ff8ee14f9f384b7b94
          Image:          httpd:2.4.38
      
      ....
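
      If you want to watch a rolling update as it progresses, or roll one back, kubectl provides rollout subcommands. The following commands are a brief, optional sketch using the apache-deployment created above:

      kubectl rollout status deployment/apache-deployment
      kubectl rollout history deployment/apache-deployment
      kubectl rollout undo deployment/apache-deployment

      The status command waits until the rollout finishes, history lists previous revisions of the Deployment, and undo reverts the Deployment to its previous revision.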
      

      For more information on Deployments, visit the Kubernetes Deployments API documentation.

      Jobs

      A Job is a controller that manages a Pod created to run a single task, or a finite set of tasks, to completion. This is handy if you need a Pod that performs a single function or calculates a value. Deleting the Job also deletes the Pod it created.

      Below is an example of a Job that simply prints “Hello World!” and ends:

      my-job.yaml
      apiVersion: batch/v1
      kind: Job
      metadata:
        name: hello-world
      spec:
        template:
          metadata:
            name: hello-world
          spec:
            containers:
            - name: output
              image: debian
              command:
               - "/bin/bash"
               - "-c"
               - "echo 'Hello World!'"
            restartPolicy: Never

      To create the Job, issue the create command:

      kubectl create -f my-job.yaml
      

      To see whether the Job has run, or is still running, issue the get jobs command:

      kubectl get jobs
      

      You should see output like the following:

      NAME          COMPLETIONS   DURATION   AGE
      hello-world   1/1           9s         8m23s
      

      To get the Pod of the Job, issue the get pods command:

      kubectl get pods
      

      You should see output like the following:

      NAME                               READY   STATUS             RESTARTS   AGE
      hello-world-4jzdm                  0/1     Completed          0          9m44s
      

      You can use the name of the Pod to inspect its output by viewing the Pod's logs:

      kubectl logs hello-world-4jzdm
      

      To delete the Job, and its Pod, issue the delete command:

      kubectl delete job hello-world
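
      A Job can also run its task more than once, or in parallel, by setting the completions and parallelism fields in its spec. As an optional sketch with hypothetical values, the following fields under the Job's spec would run the task five times, with at most two Pods running at once:

      spec:
        completions: 5
        parallelism: 2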
      

      Networking

      Networking in Kubernetes was designed to make it simple to port existing apps from VMs to containers, and subsequently, Pods. The basic requirements of the Kubernetes networking model are:

      1. Pods can communicate with each other across Nodes without the use of NAT
      2. Agents on a Node, like kubelet, can communicate with all of a Node’s Pods
      3. In the case of Linux, Pods in a Node’s host network can communicate with all other Pods without NAT.

      Though the rules of the Kubernetes networking model are simple, implementing them is an advanced topic. Because Kubernetes does not ship with its own implementation, it is up to the user to provide one, typically in the form of a network plugin.

      Two of the most popular options are Flannel and Calico. Flannel is an overlay network that satisfies the requirements of the Kubernetes networking model by supplying a layer 3 network fabric, and it is relatively easy to set up. Calico provides networking as well as network policy enforcement through the NetworkPolicy API; a brief example of a network policy is shown below.
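
      The following manifest is an illustrative sketch rather than a step in this guide, and it only takes effect if the cluster's network plugin enforces NetworkPolicy (as Calico does). It restricts the app: web Pods used in the earlier examples so that they accept ingress traffic only from Pods labeled app: frontend, a hypothetical label chosen for this example:

      # Illustrative NetworkPolicy; app: frontend is an assumed example label.
      apiVersion: networking.k8s.io/v1
      kind: NetworkPolicy
      metadata:
        name: web-allow-frontend
      spec:
        podSelector:
          matchLabels:
            app: web
        policyTypes:
        - Ingress
        ingress:
        - from:
          - podSelector:
              matchLabels:
                app: frontend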

      For more information on the Kubernetes networking model, and ways to implement it, consult the cluster networking documentation.

      Advanced Topics

      There are a number of advanced topics in Kubernetes. Below are a few you might find useful as you progress in Kubernetes:

      • StatefulSets can be used when creating stateful applications.
      • DaemonSets can be used to ensure each Node is running a certain Pod. This is useful for log collection, monitoring, and cluster storage.
      • Horizontal Pod Autoscaling can automatically scale your deployments based on CPU usage.
      • CronJobs can schedule Jobs to run at certain times; a brief example follows this list.
      • ResourceQuotas are helpful when working with larger groups where there is a concern that some teams might take up too many resources.
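
      As an illustrative sketch for the CronJobs item above (not part of the original guide's steps), the manifest below wraps the hello-world Job from earlier in a CronJob that runs it every five minutes. On clusters older than Kubernetes 1.21, the apiVersion would be batch/v1beta1 rather than batch/v1:

      apiVersion: batch/v1
      kind: CronJob
      metadata:
        name: hello-world-cron
      spec:
        schedule: "*/5 * * * *"
        jobTemplate:
          spec:
            template:
              spec:
                containers:
                - name: output
                  image: debian
                  command:
                   - "/bin/bash"
                   - "-c"
                   - "echo 'Hello World!'"
                restartPolicy: Never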

      Next Steps

      Now that you are familiar with Kubernetes concepts and components, you can follow the Getting Started with Kubernetes: Use kubeadm to Deploy a Cluster on Linode guide, which provides a hands-on activity to continue learning about Kubernetes. If you would like to deploy a Kubernetes cluster on Linode for production use, we recommend following one of Linode's dedicated Kubernetes deployment guides instead.
