
      A Beginner's Guide to LXD: Setting Up an Apache Webserver In a Container


      Updated by Linode Contributed by Simos Xenitellis

      Access an Apache Web Server Inside a LXD Container

      What is LXD?

      LXD (pronounced “Lex-Dee”) is a system container manager built on top of LXC (Linux Containers) and currently supported by Canonical. The goal of LXD is to provide an experience similar to a virtual machine, but through containerization rather than hardware virtualization. Compared to Docker for delivering applications, LXD offers nearly full operating-system functionality with additional features such as snapshots, live migrations, and storage management.

      The main benefits of LXD are its support for high-density containers and the performance it delivers compared to virtual machines. A computer with 2GB of RAM can adequately support half a dozen containers. In addition, LXD officially supports the container images of major Linux distributions. We can choose the Linux distribution and version to run in the container.

      This guide covers how to install and set up LXD 3 on a Linode and how to set up an Apache web server in a container.

      Note

      For simplicity, the term container is used throughout this guide to describe the LXD system containers.

      Before You Begin

      1. Complete the Getting Started guide. Select a Linode with at least 2GB of RAM, such as the Linode 2GB. Specify the Ubuntu 19.04 distribution. You may specify a different Linux distribution, as long as there is support for snap packages (snapd); see the More Information section for more details.

      2. This guide will use sudo wherever necessary. Follow the Securing Your Server guide to create a limited (non-root) user account, harden SSH access, and remove unnecessary network services.

      3. Update your system:

        sudo apt update && sudo apt upgrade
        

      Configure the Snap Package Support

      LXD is available as a Debian package in the long-term support (LTS) versions of Ubuntu, such as Ubuntu 18.04 LTS. For other versions of Ubuntu and other distributions, LXD is available as a snap package. Snap packages are universal packages: a single package file works on any supported Linux distribution. See the More Information section for more details on what a snap package is, which Linux distributions are supported, and how to set it up.
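      If your distribution does not ship with snapd preinstalled, it is usually available from the distribution's package manager. The following is a minimal sketch for a Debian-based system; the package name may differ on other distributions:

        sudo apt update
        sudo apt install snapd
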

      1. Verify that snap support is installed correctly. The following command either shows that no snap packages are installed yet, or lists those that are.

        snap list
        
          
        No snaps are installed yet. Try 'snap install hello-world'.
        
        
      2. View the details of the LXD snap package lxd. The output below shows that, currently, the latest version of LXD is 3.12 in the default stable channel. This channel is updated often with new features. There are also other channels such as the 3.0/stable channel which has the LTS LXD version (supported along with Ubuntu 18.04, until 2023) and the 2.0/stable channel (supported along with Ubuntu 16.04, until 2021). We will be using the latest version of LXD from the default stable channel.

        snap info lxd
        
          
        name:      lxd
        summary:   System container manager and API
        publisher: Canonical✓
        contact:   https://github.com/lxc/lxd/issues
        license:   Apache-2.0
        description: |
          **LXD is a system container manager**
        
          With LXD you can run hundreds of containers of a variety of Linux
          distributions, apply resource limits, pass in directories, USB devices
          or GPUs and setup any network and storage you want.
        
          LXD containers are lightweight, secure by default and a great
          alternative to running Linux virtual machines.
        
        
          **Run any Linux distribution you want**
        
          Pre-made images are available for Ubuntu, Alpine Linux, ArchLinux,
          CentOS, Debian, Fedora, Gentoo, OpenSUSE and more.
        
          A full list of available images can be [found
          here](https://images.linuxcontainers.org)
        
          Can't find the distribution you want? It's easy to make your own images
          too, either using our `distrobuilder` tool or by assembling your own image
          tarball by hand.
        
        
          **Containers at scale**
        
          LXD is network aware and all interactions go through a simple REST API,
          making it possible to remotely interact with containers on remote
          systems, copying and moving them as you wish.
        
          Want to go big? LXD also has built-in clustering support,
          letting you turn dozens of servers into one big LXD server.
        
        
          **Configuration options**
        
          Supported options for the LXD snap (`snap set lxd KEY=VALUE`):
           - criu.enable: Enable experimental live-migration support [default=false]
           - daemon.debug: Increases logging to debug level [default=false]
           - daemon.group: Group of users that can interact with LXD [default=lxd]
           - ceph.builtin: Use snap-specific ceph configuration [default=false]
           - openvswitch.builtin: Run a snap-specific OVS daemon [default=false]
        
          [Documentation](https://lxd.readthedocs.io)
        snap-id: J60k4JY0HppjwOjW8dZdYc8obXKxujRu
        channels:
          stable:        3.12        2019-04-16 (10601) 56MB -
          candidate:     3.12        2019-04-26 (10655) 56MB -
          beta:          ↑
          edge:          git-570aaa1 2019-04-27 (10674) 56MB -
          3.0/stable:    3.0.3       2018-11-26  (9663) 53MB -
          3.0/candidate: 3.0.3       2019-01-19  (9942) 53MB -
          3.0/beta:      ↑
          3.0/edge:      git-eaa62ce 2019-02-19 (10212) 53MB -
          2.0/stable:    2.0.11      2018-07-30  (8023) 28MB -
          2.0/candidate: 2.0.11      2018-07-27  (8023) 28MB -
          2.0/beta:      ↑
          2.0/edge:      git-c7c4cc8 2018-10-19  (9257) 26MB -
        
        
      3. Install the lxd snap package. Run the following command to install the snap package for LXD.

        sudo snap install lxd
        
          
        lxd 3.12 from Canonical✓ installed
        
        

      You can verify that the snap package has been installed by running snap list again. The core snap package is a prerequisite for any system with snap package support. When you install your first snap package, core is installed and shared among all other snap packages that will get installed in the future.

          snap list
      
      
        
      Name  Version  Rev    Tracking  Publisher   Notes
      core  16-2.38  6673   stable    canonical✓  core
      lxd   3.12     10601  stable    canonical✓  -
      
      

      Initialize LXD

      1. Add your non-root Unix user to the lxd group:

        sudo usermod -a -G lxd username
        

        Note

        By adding the non-root Unix user account to the lxd group, you can run lxc commands without prepending sudo. Without this addition, you would need to prepend sudo to each lxc command.

      2. Start a new SSH session for the previous change to take effect. For example, log out and log in again.

      3. Verify the available free disk space:

        df -h /
        
          
        Filesystem      Size  Used Avail Use% Mounted on
        /dev/sda         49G  2.0G   45G   5% /
        
        

        In this case there is 45GB of free disk space. LXD requires at least 15GB of space for the storage needs of containers. We will allocate 15GB of space for LXD, leaving 30GB of free space for the needs of the server.

      4. Run lxd init to initialize LXD:

        sudo lxd init
        

        You will be prompted several times during the initialization process. Choose the defaults for all options.

          
        Would you like to use LXD clustering? (yes/no) [default=no]:
        Do you want to configure a new storage pool? (yes/no) [default=yes]:
        Name of the new storage pool [default=default]:
        Name of the storage backend to use (btrfs, ceph, dir, lvm, zfs) [default=zfs]:
        Create a new ZFS pool? (yes/no) [default=yes]:
        Would you like to use an existing block device? (yes/no) [default=no]:
        Size in GB of the new loop device (1GB minimum) [default=15GB]:
        Would you like to connect to a MAAS server? (yes/no) [default=no]:
        Would you like to create a new local network bridge? (yes/no) [default=yes]:
        What should the new bridge be called? [default=lxdbr0]:
        What IPv4 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]:
        What IPv6 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]:
        Would you like LXD to be available over the network? (yes/no) [default=no]:
        Would you like stale cached images to be updated automatically? (yes/no) [default=yes]
        Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]:
        
        
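        Note

        If you prefer a non-interactive setup, lxd init also accepts an --auto flag together with storage-related flags. The exact flags available depend on your LXD version, so treat the following as a sketch rather than a definitive command; it is intended to create the same 15GB loop-backed ZFS pool as the interactive dialog above:

          sudo lxd init --auto --storage-backend=zfs --storage-create-loop=15
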

      Apache Web Server with LXD

      This section creates a container, installs the Apache web server in it, and adds an LXD proxy device in order to expose port 80 of the container to the internet.

      1. Launch a new container:

        lxc launch ubuntu:18.04 web
        
      2. Update the package list in the container.

        lxc exec web -- apt update
        
      3. Install Apache in the LXD container.

        lxc exec web -- apt install apache2
        
      4. Get a shell in the LXD container.

        lxc exec web -- sudo --user ubuntu --login
        
      5. Edit the default Apache web page to mention that it runs inside a LXD container.

        sudo nano /var/www/html/index.html
        

        Change the line It works! (line number 224) to It works inside a LXD container!. Then, save and exit.

      6. Exit back to the host. We have made all the necessary changes to the container.

        exit
        
      7. Add a LXD proxy device to redirect connections from port 80 (HTTP) on the server to port 80 on this container.

        sudo lxc config device add web myport80 proxy listen=tcp:0.0.0.0:80 connect=tcp:127.0.0.1:80
        

        Note

        In recent versions of LXD, you need to specify an IP address (such as 127.0.0.1) instead of a hostname (such as localhost). If your container already has a proxy device that uses hostnames, you can edit the container configuration to replace the hostnames with IP addresses by running lxc config edit web.
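
        If you later want to inspect or remove this proxy device, you can use the following commands; myport80 is the device name chosen in the step above:

          lxc config device show web
          lxc config device remove web myport80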

      8. From your local computer, navigate to your Linode’s public IP address in a web browser. You should see the default Apache page:

        Web page of Apache server running in a container

      Common LXD Commands

      • List all containers:

        lxc list
        
          
        To start your first container, try: lxc launch ubuntu:18.04
        
        +------+-------+------+------+------+-----------+
        | NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
        +------+-------+------+------+------+-----------+
        
        
      • List all available repositories of container images:

        lxc remote list
        
          
        +-----------------+------------------------------------------+---------------+-------------+--------+--------+
        |      NAME       |                   URL                    |   PROTOCOL    |  AUTH TYPE  | PUBLIC | STATIC |
        +-----------------+------------------------------------------+---------------+-------------+--------+--------+
        | images          | https://images.linuxcontainers.org       | simplestreams | none        | YES    | NO     |
        +-----------------+------------------------------------------+---------------+-------------+--------+--------+
        | local (default) | unix://                                  | lxd           | file access | NO     | YES    |
        +-----------------+------------------------------------------+---------------+-------------+--------+--------+
        | ubuntu          | https://cloud-images.ubuntu.com/releases | simplestreams | none        | YES    | YES    |
        +-----------------+------------------------------------------+---------------+-------------+--------+--------+
        | ubuntu-daily    | https://cloud-images.ubuntu.com/daily    | simplestreams | none        | YES    | YES    |
        +-----------------+------------------------------------------+---------------+-------------+--------+--------+
        
        

        The repository ubuntu has container images of Ubuntu versions. The images repository has container images of a large number of different Linux distributions. The ubuntu-daily repository has daily container images, intended for testing. The local repository is the LXD server that we have just installed. It is not public and can be used to store your own container images.

      • List all available container images from a repository:

        lxc image list ubuntu:
        
          
        +------------------+--------------+--------+-----------------------------------------------+---------+----------+-------------------------------+
        |      ALIAS       | FINGERPRINT  | PUBLIC |                  DESCRIPTION                  |  ARCH   |   SIZE   |          UPLOAD DATE          |
        +------------------+--------------+--------+-----------------------------------------------+---------+----------+-------------------------------+
        | b (11 more)      | 5b72cf46f628 | yes    | ubuntu 18.04 LTS amd64 (release) (20190424)   | x86_64  | 180.37MB | Apr 24, 2019 at 12:00am (UTC) |
        +------------------+--------------+--------+-----------------------------------------------+---------+----------+-------------------------------+
        | c (5 more)       | 4716703f04fc | yes    | ubuntu 18.10 amd64 (release) (20190402)       | x86_64  | 313.29MB | Apr 2, 2019 at 12:00am (UTC)  |
        +------------------+--------------+--------+-----------------------------------------------+---------+----------+-------------------------------+
        | d (5 more)       | faef94acf5f9 | yes    | ubuntu 19.04 amd64 (release) (20190417)       | x86_64  | 322.56MB | Apr 17, 2019 at 12:00am (UTC) |
        +------------------+--------------+--------+-----------------------------------------------+---------+----------+-------------------------------+
        .....................................................................
        
        

        Note

        The first two columns for the alias and fingerprint provide an identifier that can be used to specify the container image when launching it.

        The output snippet shows the container images Ubuntu versions 18.04 LTS, 18.10, and 19.04. When creating a container we can just specify the short alias. For example, ubuntu:b means that the repository is ubuntu and the container image has the short alias b (for bionic, the codename of Ubuntu 18.04 LTS).

      • Get more information about a container image:

        lxc image info ubuntu:b
        
          
        Fingerprint: 5b72cf46f628b3d60f5d99af48633539b2916993c80fc5a2323d7d841f66afbe
        Size: 180.37MB
        Architecture: x86_64
        Public: yes
        Timestamps:
            Created: 2019/04/24 00:00 UTC
            Uploaded: 2019/04/24 00:00 UTC
            Expires: 2023/04/26 00:00 UTC
            Last used: never
        Properties:
            release: bionic
            version: 18.04
            architecture: amd64
            label: release
            serial: 20190424
            description: ubuntu 18.04 LTS amd64 (release) (20190424)
            os: ubuntu
        Aliases:
            - 18.04
            - 18.04/amd64
            - b
            - b/amd64
            - bionic
            - bionic/amd64
            - default
            - default/amd64
            - lts
            - lts/amd64
            - ubuntu
            - amd64
        Cached: no
        Auto update: disabled
        
        

        The output shows the details of the container image including all the available aliases. For Ubuntu 18.04 LTS, we can specify either b (for bionic, the codename of Ubuntu 18.04 LTS) or any other alias.

      • Launch a new container with the name mycontainer:

        lxc launch ubuntu:18.04 mycontainer
        
          
        Creating mycontainer
        Starting mycontainer
        
        
      • Check the list of containers to make sure the new container is running:

        lxc list
        
          
        +-------------+---------+-----------------------+---------------------------+------------+-----------+
        |    NAME     |  STATE  |         IPV4          |          IPV6             |    TYPE    | SNAPSHOTS |
        +-------------+---------+-----------------------+---------------------------+------------+-----------+
        | mycontainer | RUNNING | 10.142.148.244 (eth0) | fde5:5d27:...:1371 (eth0) | PERSISTENT | 0         |
        +-------------+---------+-----------------------+---------------------------+------------+-----------+
        
        
      • Execute basic commands in mycontainer:

        lxc exec mycontainer -- apt update
        lxc exec mycontainer -- apt upgrade
        

        Note

        The characters -- instruct the lxc command not to parse any more command-line parameters.

      • Open a shell session within mycontainer:

        lxc exec mycontainer -- sudo --login --user ubuntu
        
          
        To run a command as administrator (user "root"), use "sudo ".
        See "man sudo_root" for details.
        
        ubuntu@mycontainer:~$
        
        

        Note

        By default, the Ubuntu container images include a non-root account with the username ubuntu. This account can use sudo and does not require a password to perform administrative tasks.

        The sudo command above opens a login shell for the existing ubuntu account.

      • View the container logs:

        lxc info mycontainer --show-log
        
      • Stop the container:

        lxc stop mycontainer
        
      • Remove the container:

        lxc delete mycontainer
        

        Note

        A container needs to be stopped before it can be deleted.
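
        If you prefer to delete a running container in a single step, lxc delete also accepts a --force flag:

          lxc delete mycontainer --force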

      Troubleshooting

      Error “unix.socket: connect: connection refused”

      When you run any lxc command, you get the following error:

          lxc list
      
      
        
      Error: Get http://unix.socket/1.0: dial unix /var/snap/lxd/common/lxd/unix.socket: connect: connection refused
      
      

      This happens when the LXD service is not currently running. By default, the LXD service starts as soon as it is configured successfully. See Initialize LXD to configure LXD.
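
      Since this guide installs LXD as a snap package, you can also check and start its service through snap itself; these are standard snap commands, and the exact service names shown in the output may vary by LXD version:

          snap services lxd
          sudo snap start lxd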

      Error “unix.socket: connect: permission denied”

      When you run any lxc command, you get the following error:

          lxc list
      
      
        
      Error: Get http://unix.socket/1.0: dial unix /var/snap/lxd/common/lxd/unix.socket: connect: permission denied
      
      

      This happens when your limited user account is not a member of the lxd group, or you have not logged out and back in again for the new lxd group membership to take effect.

      If your user account is ubuntu, the following command shows whether you are a member of the lxd group:

          groups ubuntu
      
      
        
      ubuntu : ubuntu sudo lxd
      
      

      In this example, we are members of the lxd group and we just need to log out and log in again. If you are not a member of the lxd group, see Initialize LXD on how to make your limited account a member of the lxd group.

      Next Steps

      If you plan to host a single website, a single proxy device pointing at the website container will suffice. If you plan to host multiple websites, you may set up virtual hosts inside that same container. If you would instead like to run each website in its own container, you will need to set up a reverse proxy in an additional container; in that case, the proxy device would point to the reverse proxy container, which in turn directs connections to the individual website containers.

      More Information

      You may wish to consult the following resources for additional information on this topic. While these are provided in the hope that they will be useful, please note that we cannot vouch for the accuracy or timeliness of externally hosted materials.


      This guide is published under a CC BY-ND 4.0 license.




      A Beginner's Guide to Kubernetes


      Updated by Linode Contributed by Linode

      Kubernetes, often referred to as k8s, is an open source container orchestration system that helps deploy and manage containerized applications. Developed by Google starting in 2014 and written in the Go language, Kubernetes is quickly becoming the standard way to architect horizontally-scalable applications. This guide will explain the major parts and concepts of Kubernetes.

      Containers

      Kubernetes is a container orchestration tool and, therefore, needs a container runtime installed to work. In practice, the default container runtime for Kubernetes is Docker, though other runtimes like rkt and LXD will also work. With the advent of the Container Runtime Interface (CRI), which aims to standardize the way Kubernetes interacts with containers, other options like containerd, cri-o, and Frakti have also become available. This guide assumes you have a working knowledge of containers, and the examples all use Docker as the container runtime.

      Kubernetes API

      Kubernetes is built around a robust RESTful API. Every action taken in Kubernetes, be it inter-component communication or a user command, interacts in some fashion with the Kubernetes API. The goal of the API is to help facilitate the desired state of the Kubernetes cluster. If you want X instances of your application running and have Y currently active, the API will take the required steps to get to X, whether this means creating or destroying resources. To create this desired state, you create objects, which are normally represented by YAML files called manifests, and apply them through the command line with the kubectl tool.
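
      For example, assuming a manifest saved locally as my-manifest.yaml (a hypothetical filename), you would submit it with the kubectl tool described in the next section:

      kubectl apply -f my-manifest.yaml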

      kubectl

      kubectl is a command line tool used to interact with the Kubernetes cluster. It offers a host of features, including the ability to create, stop, and delete resources, describe active resources, and auto scale resources. For more information on the types of commands and resources you can use with kubectl, consult the Kubernetes kubectl documentation.
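
      As a brief illustration, the following standard kubectl subcommands show a few of these operations; the resource names reuse the apache-pod and apache-deployment examples defined later in this guide:

      kubectl get pods
      kubectl describe pod apache-pod
      kubectl scale deployment apache-deployment --replicas=3
      kubectl delete pod apache-pod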

      Kubernetes Master, Nodes, and Control Plane

      At the highest level of Kubernetes, there exist two kinds of servers, a Master and a Node. These servers can be Linodes, VMs, or physical servers. Together, these servers form a cluster.

      Nodes

      Kubernetes Nodes are worker servers that run your application. The user decides how many Nodes to create and creates them. In addition to running your application, each Node runs two processes:

      • kubelet receives descriptions of the desired state of a Pod from the API server and ensures the Pod is healthy and running on the Node.
      • kube-proxy is a networking proxy that proxies the UDP, TCP, and SCTP networking of each Node and provides load balancing. It is only used to connect to Services.

      Kubernetes Master

      The Kubernetes Master is normally a separate server responsible for maintaining the desired state of the cluster. It does this by telling the Nodes how many instances of your application they should run and where. The Kubernetes Master runs three processes:

      • kube-apiserver is the front end for the Kubernetes API server.
      • kube-controller-manager is a daemon that manages the Kubernetes control loop. For more on Controllers, see the Controllers section.
      • kube-scheduler is a function that looks for newly created Pods that have not yet been assigned a Node, and assigns them one based on a host of requirements. For more information on kube-scheduler, consult the Kubernetes kube-scheduler documentation.

      Additionally, the Kubernetes Master runs the database etcd. Etcd is a highly available key-value store that provides the backend database for Kubernetes.

      Together, kube-apiserver, kube-controller-manager, kube-scheduler, and etcd form what is known as the control plane. The control plane is responsible for making decisions about the cluster and pushing it toward the desired state.
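
      On a cluster bootstrapped with a tool like kubeadm, these control plane components themselves run as Pods in the kube-system Namespace, so you can inspect them with the following command; the exact output varies by cluster:

      kubectl get pods -n kube-system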

      Kubernetes Objects

      In Kubernetes, there are a number of objects that are abstractions of your Kubernetes system’s desired state. These objects represent your application, its networking, and its disk resources, which together make up your application.

      Pods

      In Kubernetes, all containers exist within Pods. Pods are the smallest unit of the Kubernetes architecture, and can be viewed as a kind of wrapper for your container. Each Pod is given its own IP address with which it can interact with other Pods within the cluster.

      Usually, a Pod contains only one container, but a Pod can contain multiple containers if those containers need to share resources. If there is more than one container in a Pod, these containers can communicate with one another via localhost.

      Pods in Kubernetes are “mortal,” which means that they are created and destroyed depending on the needs of the application. For instance, you might have a web app backend that sees a spike in CPU usage. This might cause the cluster to scale up the number of backend Pods from two to ten, in which case eight new Pods would be created. Once the traffic subsides, the Pods might scale back down to two, in which case eight Pods would be destroyed.

      It is important to note that Pods are destroyed without respect to which Pod was created first. And, while each Pod has its own IP address, that IP address is only available for the lifecycle of the Pod.

      Below is an example of a Pod manifest:

      my-apache-pod.yaml
      
      apiVersion: v1
      kind: Pod
      metadata:
        name: apache-pod
        labels:
          app: web
      spec:
        containers:
        - name: apache-container
          image: httpd

      Each manifest has four necessary parts:

      • The version of the API in use
      • The kind of resource you’d like to define
      • Metadata about the resource
      • Though not required by all objects, a spec which describes the desired behavior of the resource is necessary for most objects and controllers.

      In the case of this example, the API in use is v1, and the kind is a Pod. The metadata field is used for applying a name, labels, and annotations. Names are used to differentiate resources, while labels are used to group like resources. Labels will come into play more when defining Services and Deployments. Annotations are for attaching arbitrary data to the resource.

      The spec is where the desired state of the resource is defined. In this case, a Pod with a single Apache container is desired, so the containers field is supplied with a name, ‘apache-container’, and an image, the latest version of Apache. The image is pulled from Docker Hub, as that is the default container registry for Kubernetes.

      For more information on the type of fields you can supply in a Pod manifest, refer to the Kubernetes Pod API documentation.

      Now that you have the manifest, you can create the Pod using the create command:

      kubectl create -f my-apache-pod.yaml
      

      To view a list of your pods, use the get pods command:

      kubectl get pods
      

      You should see output like the following:

      NAME         READY   STATUS    RESTARTS   AGE
      apache-pod   1/1     Running   0          16s
      

      To quickly view which Node the Pod exists on, issue the get pods command with the -o=wide flag:

      kubectl get pods -o=wide
      

      To retrieve information about the Pod, issue the describe command:

      kubectl describe pod apache-pod
      

      You should see output like the following:

      ...
      Events:
      Type    Reason     Age    From                       Message
      ----    ------     ----   ----                       -------
      Normal  Scheduled  2m38s  default-scheduler          Successfully assigned default/apache-pod to mycluster-node-1
      Normal  Pulling    2m36s  kubelet, mycluster-node-1  pulling image "httpd"
      Normal  Pulled     2m23s  kubelet, mycluster-node-1  Successfully pulled image "httpd"
      Normal  Created    2m22s  kubelet, mycluster-node-1  Created container
      Normal  Started    2m22s  kubelet, mycluster-node-1  Started container
      

      To delete the Pod, issue the delete command:

      kubectl delete pod apache-pod
      

      Services

      Services group identical Pods together to provide a consistent means of accessing them. For instance, you might have three Pods that are all serving a website, and all of those Pods need to be accessible on port 80. A Service can ensure that all of the Pods are accessible at that port, and can load balance traffic between those Pods. Additionally, a Service can allow your application to be accessible from the internet. Each Service is given an IP address and a corresponding local DNS entry. Additionally, Services exist across Nodes. If you have two replica Pods on one Node and an additional replica Pod on another Node, the service can include all three Pods. There are four types of Service:

      • ClusterIP: Exposes the Service internally to the cluster. This is the default setting for a Service.
      • NodePort: Exposes the Service to the internet from the IP address of the Node at the specified port number. You can only use ports in the 30000-32767 range.
      • LoadBalancer: This will create a load balancer assigned to a fixed IP address in the cloud, so long as the cloud provider supports it. In the case of Linode, this is the responsibility of the Linode Cloud Controller Manager, which will create a NodeBalancer for the cluster. This is the best way to expose your cluster to the internet.
      • ExternalName: Maps the service to a DNS name by returning a CNAME record redirect. ExternalName is good for directing traffic to outside resources, such as a database that is hosted on another cloud.

      Below is an example of a Service manifest:

      my-apache-service.yaml
      
      apiVersion: v1
      kind: Service
      metadata:
        name: apache-service
        labels:
          app: web
      spec:
        type: NodePort
        ports:
        - port: 80
          targetPort: 80
          nodePort: 30020
        selector:
          app: web

      The above example Service uses the v1 API, and its kind is Service. Like the Pod example in the previous section, this manifest has a name and a label. Unlike the Pod example, this spec uses the ports field to define the port the Service exposes (port) and the port on the Pod that traffic is forwarded to (targetPort). The type NodePort unlocks the use of the nodePort field, which allows traffic on the host Node at that port. Lastly, the selector field is used to target only the Pods that have been assigned the app: web label.
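
      Once this Service is created, traffic can reach the Pods through any Node's IP address at the specified nodePort. For example, with 192.0.2.10 as a stand-in for one of your Node's IP addresses:

      curl http://192.0.2.10:30020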

      For more information on Services, visit the Kubernetes Service API documentation.

      To create the Service from the YAML file, issue the create command:

      kubectl create -f my-apache-service.yaml
      

      To view a list of running services, issue the get services command:

      kubectl get services
      

      You should see output like the following:

      NAME             TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
      apache-service   NodePort    10.99.57.13   <none>        80:30020/TCP   54s
      kubernetes       ClusterIP   10.96.0.1     <none>        443/TCP        46h
      

      To retrieve more information about your Service, issue the describe command:

      kubectl describe service apache-service
      

      To delete the Service, issue the delete command:

      kubectl delete service apache-service
      

      Volumes

      A Volume in Kubernetes is a way to share file storage between containers in a Pod. Kubernetes Volumes differ from Docker volumes because they exist inside the Pod rather than inside the container. When a container is restarted the Volume persists. Note, however, that these Volumes are still tied to the lifecycle of the Pod, so if the Pod is destroyed the Volume will be destroyed with it.

      Linode also offers a Container Storage Interface (CSI) driver that allows the cluster to persist data on a Block Storage volume.

      Below is an example of how to create and use a Volume by creating a Pod manifest:

      my-apache-pod-with-volume.yaml
      
      apiVersion: v1
      kind: Pod
      metadata:
        name: apache-with-volume
      spec:
        volumes:
        - name: apache-storage-volume
          emptyDir: {}
      
        containers:
        - name: apache-container
          image: httpd
          volumeMounts:
          - name: apache-storage-volume
            mountPath: /data/apache-data

      A Volume has two unique aspects to its definition. In this example, the first aspect is the volumes block that defines the type of Volume you want to create, which in this case is a simple empty directory (emptyDir). The second aspect is the volumeMounts field within the container’s spec. This field is given the name of the Volume you are creating and a mount path within the container.
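
      To confirm the Volume is mounted where you expect, you can run a command inside the Pod defined above; this sketch assumes the apache-with-volume Pod is already running:

      kubectl exec apache-with-volume -- df -h /data/apache-data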

      There are a number of different Volume types you could create in addition to emptyDir depending on your cloud host. For more information on Volume types, visit the Kubernetes Volumes API documentation.

      Namespaces

      Namespaces are virtual clusters within the Kubernetes cluster that help to group and organize objects. Every cluster has at least three Namespaces: default, kube-system, and kube-public. When interacting with the cluster it is important to know which Namespace the object you are looking for is in, as many commands default to only showing you what exists in the default Namespace. Resources created without an explicit Namespace will be added to the default Namespace.

      Namespaces consist of alphanumeric characters, dashes (-), and periods (.).
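
      You can list the Namespaces that exist on your cluster with:

      kubectl get namespaces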

      Here is an example of how to define a Namespace with a manifest:

      my-namespace.yaml
      
      apiVersion: v1
      kind: Namespace
      metadata:
        name: my-app

      To create the Namespace, issue the create command:

      kubectl create -f my-namespace.yaml
      

      Below is an example of a Pod with a Namespace:

      my-apache-pod-with-namespace.yaml
      
      apiVersion: v1
      kind: Pod
      metadata:
        name: apache-pod
        labels:
          app: web
        namespace: my-app
      spec:
        containers:
        - name: apache-container
          image: httpd

      To retrieve resources in a certain Namespace, use the -n flag.

      kubectl get pods -n my-app
      

      You should see a list of Pods within your namespace:

      NAME         READY   STATUS    RESTARTS   AGE
      apache-pod   1/1     Running   0          7s
      

      To view Pods in all Namespaces, use the --all-namespaces flag.

      kubectl get pods --all-namespaces
      

      To delete a Namespace, issue the delete namespace command. Note that this will delete all resources within that Namespace:

      kubectl delete namespace my-app
      

      For more information on Namespaces, visit the Kubernetes Namespaces API documentation.

      Controllers

      A Controller is a control loop that continuously watches the Kubernetes API and tries to manage the desired state of certain aspects of the cluster. There are a number of controllers. Below is a short reference of the most popular controllers you might interact with.

      ReplicaSets

      As has been mentioned, Kubernetes allows an application to scale horizontally. A ReplicaSet is one of the controllers responsible for keeping a given number of replica Pods running. If one Pod goes down in a ReplicaSet, another will be created to replace it. In this way, Kubernetes is self-healing. However, for most use cases it is recommended to use a Deployment instead of a ReplicaSet.

      Below is an example of a ReplicaSet:

      my-apache-replicaset.yaml
      
      apiVersion: apps/v1
      kind: ReplicaSet
      metadata:
        name: apache-replicaset
        labels:
          app: web
      spec:
        replicas: 5
        selector:
          matchLabels:
            app: web
        template:
          metadata:
            labels:
              app: web
          spec:
            containers:
            - name: apache-container
              image: httpd

      There are three main things to note in this ReplicaSet. The first is the apiVersion, which is apps/v1. This differs from the previous examples, which were all apiVersion: v1, because ReplicaSets do not exist in the v1 core. They instead reside in the apps group of v1. The second and third things to note are the replicas field and the selector field. The replicas field defines how many replica Pods you want to be running at any given time. The selector field defines which Pods, matched by their label, will be controlled by the ReplicaSet.

      To view your ReplicaSets, issue the get replicasets command:

      kubectl get replicasets
      

      You should see output like the following:

      NAME                DESIRED   CURRENT   READY   AGE
      apache-replicaset   5         5         0       5s
      

      This output shows that of the five desired replicas, five are currently active, but zero of those replicas are ready. This is because the Pods are still booting up. If you issue the command again, you will see that all five have become ready:

      NAME                DESIRED   CURRENT   READY   AGE
      apache-replicaset   5         5         5       86s
      

      You can view the Pods the ReplicaSet created by issuing the get pods command:

      NAME                      READY   STATUS    RESTARTS   AGE
      apache-replicaset-5rsx2   1/1     Running   0          31s
      apache-replicaset-8n52c   1/1     Running   0          31s
      apache-replicaset-jcgn8   1/1     Running   0          31s
      apache-replicaset-sj422   1/1     Running   0          31s
      apache-replicaset-z8g76   1/1     Running   0          31s
      

      To delete a ReplicaSet, issue the delete replicaset command:

      kubectl delete replicaset apache-replicaset
      

      If you issue the get pods command, you will see that the Pods the ReplicaSet created are in the process of terminating:

      NAME                      READY   STATUS        RESTARTS   AGE
      apache-replicaset-bm2pn   0/1     Terminating   0          3m54s
      

      In the above example, four of the Pods have already terminated, and one is in the process of terminating.

      For more information on ReplicaSets, view the Kubernetes ReplicaSets API documentation.

      Deployments

      A Deployment can manage a ReplicaSet, so it shares the ability to keep a defined number of replica pods up and running. A Deployment can also update those Pods to resemble the desired state by means of rolling updates. For example, if you wanted to update a container image to a newer version, you would create a Deployment, and the controller would update the container images one by one until the desired state is achieved. This ensures that there is no downtime when updating or altering your Pods.

      Below is an example of a Deployment:

      my-apache-deployment.yaml
      
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: apache-deployment
        labels:
          app: web
      spec:
        replicas: 5
        selector:
          matchLabels:
            app: web
        template:
          metadata:
            labels:
              app: web
          spec:
            containers:
            - name: apache-container
              image: httpd:2.4.35

      Apart from the pinned container image version, the only noticeable difference between this Deployment and the example given in the ReplicaSet section is the kind. In this example we have chosen to initially install Apache 2.4.35. If you wanted to update that image to Apache 2.4.38, you would issue the following command:

      kubectl --record deployment.apps/apache-deployment set image deployment.v1.apps/apache-deployment apache-container=httpd:2.4.38
      

      You’ll see a confirmation that the images have been updated:

      deployment.apps/apache-deployment image updated
      
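
      You can watch the rollout progress, review its history, or roll it back with the standard rollout subcommands:

      kubectl rollout status deployment/apache-deployment
      kubectl rollout history deployment/apache-deployment
      kubectl rollout undo deployment/apache-deployment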

      To see for yourself that the images have updated, you can grab the Pod name from the get pods list:

      kubectl get pods
      
      NAME                                 READY   STATUS    RESTARTS   AGE
      apache-deployment-574c8c4874-8zwgl   1/1     Running   0          8m36s
      apache-deployment-574c8c4874-9pr5j   1/1     Running   0          8m36s
      apache-deployment-574c8c4874-fbs46   1/1     Running   0          8m34s
      apache-deployment-574c8c4874-nn7dl   1/1     Running   0          8m36s
      apache-deployment-574c8c4874-pndgp   1/1     Running   0          8m33s
      

      Issue the describe command to view all of the available details of the Pod:

      kubectl describe pod apache-deployment-574c8c4874-pndgp
      

      You’ll see a long list of details, of which the container image is included:

      ....
      
      Containers:
        apache-container:
          Container ID:   docker://d7a65e7993ab5bae284f07f59c3ed422222100833b2769ff8ee14f9f384b7b94
          Image:          httpd:2.4.38
      
      ....
      

      For more information on Deployments, visit the Kubernetes Deployments API documentation.

      Jobs

      A Job is a controller that manages a Pod created to carry out a single task or set of tasks. This is handy if you need to create a Pod that performs a single function or calculates a value. Deleting the Job will delete the Pod it created.

      Below is an example of a Job that simply prints “Hello World!” and ends:

      my-job.yaml
      
      apiVersion: batch/v1
      kind: Job
      metadata:
        name: hello-world
      spec:
        template:
          metadata:
            name: hello-world
          spec:
            containers:
            - name: output
              image: debian
              command:
               - "/bin/bash"
               - "-c"
               - "echo 'Hello World!'"
            restartPolicy: Never

      To create the Job, issue the create command:

      kubectl create -f my-job.yaml
      

      To see if the job has run, or is running, issue the get jobs command:

      kubectl get jobs
      

      You should see output like the following:

      NAME          COMPLETIONS   DURATION   AGE
      hello-world   1/1           9s         8m23s
      

      To get the Pod of the Job, issue the get pods command:

      kubectl get pods
      

      You should see an output like the following:

      NAME                               READY   STATUS             RESTARTS   AGE
      hello-world-4jzdm                  0/1     Completed          0          9m44s
      

      You can use the name of the Pod to inspect its output by consulting the log file for the Pod:

      kubectl logs hello-world-4jzdm
      

      To delete the Job, and its Pod, issue the delete command:

      kubectl delete job hello-world
      

      Networking

      Networking in Kubernetes was designed to make it simple to port existing apps from VMs to containers, and subsequently, Pods. The basic requirements of the Kubernetes networking model are:

      1. Pods can communicate with each other across Nodes without the use of NAT
      2. Agents on a Node, like kubelet, can communicate with all of a Node’s Pods
      3. In the case of Linux, Pods in a Node’s host network can communicate with all other Pods without NAT.

      Though the rules of the Kubernetes networking model are simple, the implementation of those rules is an advanced topic. Because Kubernetes does not come with its own implementation, it is up to the user to provide a networking model.

      Two of the most popular options are Flannel and Calico. Flannel is a networking overlay that meets the functionality of the Kubernetes networking model by supplying a layer 3 network fabric, and is relatively easy to set up. Calico enables networking and network policy through the NetworkPolicy API to provide simple virtual networking.
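
      As an illustration, Flannel is commonly installed by applying its manifest directly from the project repository. The URL below reflects the project's location at the time of writing and may have since moved, so check the Flannel documentation for the current one:

      kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml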

      For more information on the Kubernetes networking model, and ways to implement it, consult the cluster networking documentation.

      Advanced Topics

      There are a number of advanced topics in Kubernetes. Below are a few you might find useful as you progress in Kubernetes:

      • StatefulSets can be used when creating stateful applications.
      • DaemonSets can be used to ensure each Node is running a certain Pod. This is useful for log collection, monitoring, and cluster storage.
      • Horizontal Pod Autoscaling can automatically scale your deployments based on CPU usage; a brief example follows this list.
      • CronJobs can schedule Jobs to run at certain times.
      • ResourceQuotas are helpful when working with larger groups where there is a concern that some teams might take up too many resources.
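
      As a quick sketch of the Horizontal Pod Autoscaling item above, the kubectl autoscale command creates an autoscaler for an existing Deployment. The target name reuses the apache-deployment example from this guide, the thresholds are arbitrary examples, and a metrics source (such as metrics-server) must be running in the cluster for the autoscaler to act:

      kubectl autoscale deployment apache-deployment --min=2 --max=10 --cpu-percent=80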

      Next Steps

      Now that you are familiar with Kubernetes concepts and components, you can follow the Getting Started with Kubernetes: Use kubeadm to Deploy a Cluster on Linode guide. This guide provides a hands-on activity to continue learning about Kubernetes. If you would like to deploy a Kubernetes cluster on Linode for production use, we recommend using a more automated deployment method instead.

      More Information

      You may wish to consult the following resources for additional information on this topic. While these are provided in the hope that they will be useful, please note that we cannot vouch for the accuracy or timeliness of externally hosted materials.


      This guide is published under a CC BY-ND 4.0 license.




      A Beginner's Guide to Terraform


      Updated by Linode Written by Linode

      Terraform by HashiCorp is an orchestration tool that allows you to represent your Linode instances and other resources with declarative code inside configuration files, instead of manually creating those resources via the Linode Manager or API. This practice is referred to as Infrastructure as Code, and Terraform is a popular example of this methodology. The basic workflow when using Terraform is:

      1. Write configuration files on your computer in which you declare the elements of your infrastructure that you want to create.

      2. Tell Terraform to analyze your configurations and then create the corresponding infrastructure.

      Terraform’s primary job is to create, modify, and destroy servers and other resources. Terraform generally does not configure your servers’ software. Configuring software can be done with scripts that you upload to and execute on your new servers, or with configuration management tools or container deployments.

      The Linode Provider

      Terraform is a general orchestration tool that can interface with a number of different cloud platforms. These integrations are referred to as providers. The Terraform provider for Linode was officially released in October 2018.

      Note

      The Linode provider can be used to create Linode instances, Images, domain records, Block Storage Volumes, StackScripts, and other resources. Terraform’s official Linode provider documentation details each resource that can be managed.

      Infrastructure as Code

      Terraform’s representation of your resources in configuration files is referred to as Infrastructure as Code (IAC). The benefits of this methodology and of using Terraform include:

      • Version control of your infrastructure. Because your resources are declared in code, you can track changes to that code over time in version control systems like Git.

      • Minimization of human error. Terraform’s analysis of your configuration files will produce the same results every time it creates your declared resources. In addition, telling Terraform to repeatedly apply the same configuration will not result in extra resource creation, as Terraform tracks the changes it makes over time.

      • Better collaboration among team members. Terraform’s backends allow multiple team members to safely work on the same Terraform configuration simultaneously.

      HashiCorp Configuration Language

      Terraform’s configuration files can be written in either the HashiCorp Configuration Language (HCL), or in JSON. HCL is a configuration language authored by HashiCorp for use with its products, and it is designed to be human readable and machine friendly. It is recommended that you use HCL over JSON for your Terraform deployments.

      The next sections will illustrate core Terraform concepts with examples written in HCL. For a more complete review of HCL syntax, see Introduction to HashiCorp Configuration Language (HCL).

      Resources

      Here’s a simple example of a complete Terraform configuration in HCL:

      example.tf
      
      provider "linode" {
          token = "your-linode-api-token"
      }
      
      resource "linode_instance" "example_instance" {
          label = "example_instance_label"
          image = "linode/ubuntu18.04"
          region = "us-central"
          type = "g6-standard-1"
          authorized_keys = ["ssh-rsa AAAA...Gw== user@example.local"]
          root_pass = "your-root-password"
      }

      Note

      The SSH key in this example was truncated for brevity.

      This example Terraform file, with the Terraform file extension .tf, represents the creation of a single Linode instance labeled example_instance_label. This example file is prefixed with a mandatory provider block, which sets up the Linode provider and which you must list somewhere in your configuration.

      The provider block is followed by a resource declaration. Resource declarations correspond with the components of your Linode infrastructure: Linode instances, Block Storage Volumes, etc.

      Resources can accept arguments. region and type are required arguments for the linode_instance resource. A root password must be assigned to every Linode, but the root_pass Terraform argument is optional; if it is not specified, a random password will be generated.

      Note

      The example_instance string that follows the linode_instance resource type declaration is Terraform’s name for the resource. You cannot declare more than one Terraform resource with the same name and resource type.

      The label argument specifies the label for the Linode instance in the Linode Manager. This name is independent of Terraform’s name for the resource (though you can assign the same value to both). The Terraform name is only recorded in Terraform’s state and is not communicated to the Linode API. Labels for Linode instances in the same Linode account must be unique.

      Dependencies

      Terraform resources can depend on each other. When one resource depends on another, it will be created after the resource it depends on, even if it is listed before the other resource in your configuration file.

      The following snippet expands on the previous example. It declares a new domain with an A record that targets the Linode instance’s IP address:

      example.tf
      
      provider "linode" {
          # ...
      }
      
      resource "linode_instance" "example_instance" {
          # ...
      }
      
      resource "linode_domain" "example_domain" {
          domain = "example.com"
          soa_email = "example@example.com"
      }
      
      resource "linode_domain_record" "example_domain_record" {
          domain_id = "${linode_domain.example_domain.id}"
          name = "www"
          record_type = "A"
          target = "${linode_instance.example_instance.ip_address}"
      }

      The domain record’s domain_id and target arguments use HCL’s interpolation syntax to retrieve the ID of the domain resource and the IP of the Linode instance, respectively. Terraform creates an implicit dependency on the example_instance and example_domain resources for the example_domain_record resource. As a result, the domain record will not be created until after the Linode instance and the domain are created.
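
      You can view the dependency graph that Terraform builds from these references by running the graph command from your project's directory; the output is in DOT format and can be rendered with a tool such as Graphviz:

      terraform graph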


      Input Variables

      The previous example hard-coded sensitive data in your configuration, including your API token and root password. To avoid this practice, Terraform allows you to provide the values for your resource arguments in input variables. These variables are declared and referenced in your Terraform configuration (using interpolation syntax), and the values for those variables are assigned in a separate file.

      Input variables can also be used for non-sensitive data. The following example files will employ variables for the sensitive token and root_pass arguments and the non-sensitive authorized_keys and region arguments:

      example.tf
      
      provider "linode" {
          token = "${var.token}"
      }
      
      resource "linode_instance" "example_instance" {
          label = "example_instance_label"
          image = "linode/ubuntu18.04"
          region = "${var.region}"
          type = "g6-standard-1"
          authorized_keys = ["${var.ssh_key}"]
          root_pass = "${var.root_pass}"
      }
      
      variable "token" {}
      variable "root_pass" {}
      variable "ssh_key" {}
      variable "region" {
        default = "us-southeast"
      }
      terraform.tfvars
      
      token = "your-linode-api-token"
      root_pass = "your-root-password"
      ssh_key = "ssh-rsa AAAA...Gw== user@example.local"

      Note

      Place all of your Terraform project’s files in the same directory. Terraform will automatically load input variable values from any file named terraform.tfvars or ending in .auto.tfvars.

      The region variable is not assigned a specific value, so it will use the default value provided in the variable’s declaration. See Introduction to HashiCorp Configuration Language for more detailed information about input variables.
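
      Input variable values can also be supplied on the command line with the -var flag, or loaded from an explicitly named file with -var-file; the file name below is a hypothetical example:

      terraform apply -var="region=us-east"
      terraform apply -var-file="production.tfvars"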

      Terraform CLI

      You interact with Terraform via its command line interface. After you have created the configuration files in your Terraform project, you need to run the init command from the project’s directory:

      terraform init
      

      This command will download the Linode provider plugin and take other actions needed to initialize your project. It is safe to run this command more than once, but you generally will only need to run it again if you are adding another provider to your project.
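
      Two related commands that are safe to run at any point are fmt, which normalizes the formatting of your configuration files, and validate, which checks the configuration for syntax errors:

      terraform fmt
      terraform validate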

      Plan and Apply

      After you have declared your resources in your configuration files, you create them by running Terraform’s apply command from your project’s directory. However, you should always verify that Terraform will create the resources as you expect them to be created before making any actual changes to your infrastructure. To do this, you can first run the plan command:

      terraform plan
      

      This command will generate a report detailing what actions Terraform will take to set up your Linode resources.

      If you are satisfied with this report, run apply:

      terraform apply
      

      This command will ask you to confirm that you want to proceed. When Terraform has finished applying your configuration, it will show a report of what actions were taken.
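
      If you want the exact actions from a plan to be the ones applied later, you can save the plan to a file and pass that file to apply; example.plan here is a placeholder name:

      terraform plan -out=example.plan
      terraform apply example.plan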

      State

      When Terraform analyzes and applies your configuration, it creates an internal representation of the infrastructure it manages and uses it to track the changes made. This state information is recorded in JSON in a local file named terraform.tfstate by default, but it can also be stored in other backends.
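
      To see which resources Terraform is currently tracking in its state, you can list them, and inspect one in detail, with the state subcommands; the resource address below reuses the example_instance resource from this guide:

      terraform state list
      terraform state show linode_instance.example_instance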

      Caution

      Your sensitive infrastructure data (like passwords and tokens) is visible in plain-text in your terraform.tfstate file. Review Secrets Management with Terraform for guidance on how to secure these secrets.

      Other Commands

      Other useful commands are available, like terraform show, which reports a human-readable version of your Terraform state. A full list of Terraform commands is available in the official Terraform documentation.

      Provisioners

      In addition to resource declarations, Terraform configurations can include provisioners. You declare provisioners to run scripts and commands in your local development environment or on your Terraform-managed servers. These actions are performed when you apply your Terraform configuration.

      The following example uploads a setup script to a newly created Linode instance and then executes it. This pattern can be used to bootstrap the new instance or enroll it in configuration management:

      example.tf
      
      resource "linode_instance" "example_instance" {
        # ...
      
        provisioner "file" {
            source      = "setup_script.sh"
            destination = "/tmp/setup_script.sh"
        }
      
        provisioner "remote-exec" {
          inline = [
            "chmod +x /tmp/setup_script.sh",
            "/tmp/setup_script.sh",
          ]
        }
      }

      Most provisioners are declared inside of a resource declaration. When multiple provisioners are declared inside a resource, they are executed in the order they are listed. For a full list of provisioners, review the official Terraform documentation.

      Note

      Linode StackScripts can also be used to set up a new Linode instance. A distinction between using StackScripts and the file and remote-exec provisioners is that those provisioners run and complete synchronously before Terraform continues to apply your plan, while a StackScript runs in parallel while Terraform creates the rest of your resources. As a result, Terraform might finish applying your plan before a StackScript has finished running.

      Modules

      Terraform allows you to organize your configurations into reusable structures called modules. This is useful if you need to create multiple instances of the same cluster of servers. Review Create a Terraform Module for more information on authoring and using modules.

      Backends

      By default, Terraform maintains its state in your project’s directory. Terraform also supports storing your state in non-local backends. The benefits of storing your state in another backend include:

      • Better collaboration with your team. Backends let you share the same state as other team members that have access to the backend.

      • Better security. The state information stored in and retrieved from backends is only kept in memory on your computer.

      • Remote operations. When working with a large infrastructure, terraform apply can take a long time to complete. Some backends allow you to run the apply remotely, instead of on your computer.

      The kinds of backends available are listed in Terraform’s official documentation.

      Importing

      It is possible to import Linode infrastructure that was created outside of Terraform into your Terraform plan. Review Import Existing Infrastructure to Terraform for instructions on this subject.
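
      As a rough sketch, importing an existing Linode instance associates its numeric Linode ID (1234567 below is a placeholder) with a resource block you have already written in your configuration:

      terraform import linode_instance.example_instance 1234567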

      Next Steps

      To get started with installing Terraform and creating your first projects, read through our Use Terraform to Provision Linode Environments guide.

      More Information

      You may wish to consult the following resources for additional information on this topic. While these are provided in the hope that they will be useful, please note that we cannot vouch for the accuracy or timeliness of externally hosted materials.


      This guide is published under a CC BY-ND 4.0 license.


