

      Create and Deploy a Docker Container Image to a Kubernetes Cluster


      Updated by Linode. Contributed by Linode.

      Kubernetes and Docker

      Kubernetes is a system that automates the deployment, scaling, and management of containerized applications. Containerizing an application requires a base image that can be used to create an instance of a container. Once an application’s image exists, you can push it to a centralized container registry that Kubernetes can use to deploy container instances in a cluster’s pods.

      While Kubernetes supports several container runtimes, Docker is a very popular choice. Docker images are created using a Dockerfile that contains all commands, in their required order of execution, needed to build a given image. For example, a Dockerfile might contain instructions to install a specific operating system referencing another image, install an application’s dependencies, and execute configuration commands in the running container.

      Docker Hub is a centralized container image registry that can host your images and make them available for sharing and deployment. You can also find and use official Docker images and vendor specific images. When combined with a remote version control service, like GitHub, Docker Hub allows you to automate building container images and trigger actions for further automation with other services and tooling.

      Scope of This Guide

      This guide will show you how to package a Hugo static site in a Docker container image, host the image on Docker Hub, and deploy the container image on a Kubernetes cluster running on Linode. This example is meant to demonstrate how applications can be containerized using Docker to leverage the deployment and scaling power of Kubernetes.

      Hugo is written in Go and is known for being extremely fast to compile sites, even very large ones. It is well-supported, well-documented, and has an active community. Some useful Hugo features include shortcodes, which are an easy way to include predefined templates inside your Markdown, and a built-in LiveReload web server, which allows you to preview your site changes locally as you make them.

      Note

      This guide was written using version 1.14 of Kubectl.
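      If you want to confirm which version of kubectl is installed on the machine you will use to manage the cluster, one quick check (assuming kubectl is already installed and on your PATH) is:

        kubectl version --short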

      Before You Begin

      1. Create a Kubernetes cluster with one worker node. This can be done in two ways:

        1. Deploy a Kubernetes cluster using kubeadm.
          • You will need to deploy two Linodes. One will serve as the master node and the other will serve as a worker node.
        2. Deploy a Kubernetes cluster using k8s-alpha CLI.
      2. Create a GitHub account if you don’t already have one.

      3. Create a Docker Hub account if you don’t already have one.

      Set up the Development Environment

      Development of your Hugo site and Docker image will take place locally on your personal computer. To get started, you will need to install Hugo, Docker CE, and Git, a version control system, on your personal computer.

      1. Use the How to Install Git on Linux, Mac or Windows guide for the steps needed to install Git.

      2. Install Hugo. Hugo’s official documentation contains more information on installation methods, like Installing Hugo from Tarball. Below are installation instructions for common operating systems:

        • Debian/Ubuntu:

          sudo apt-get install hugo
          
        • Fedora, Red Hat and CentOS:

          sudo dnf install hugo
          
        • Mac, using Homebrew:

          brew install hugo
          
      3. These steps install Docker Community Edition (CE) using the official Ubuntu repositories. To install on another distribution, see the official installation page.

        1. Remove any older installations of Docker that may be on your system:

          sudo apt remove docker docker-engine docker.io
          
        2. Make sure you have the necessary packages to allow the use of Docker’s repository:

          sudo apt install apt-transport-https ca-certificates curl software-properties-common
          
        3. Add Docker’s GPG key:

          curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
          
        4. Verify the fingerprint of the GPG key:

          sudo apt-key fingerprint 0EBFCD88
          

          You should see output similar to the following:

            
          pub   4096R/0EBFCD88 2017-02-22
                  Key fingerprint = 9DC8 5822 9FC7 DD38 854A  E2D8 8D81 803C 0EBF CD88
          uid                  Docker Release (CE deb) <docker@docker.com>
          sub   4096R/F273FCD8 2017-02-22
          
          
        5. Add the stable Docker repository:

          sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
          

          Note

          On Ubuntu 19.04, if you get an E: Package 'docker-ce' has no installation candidate error, it is because a stable version of Docker is not yet available for that release. In that case, you will need to use the edge / test repository:

          sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable edge test"
          
        6. Update your package index and install Docker CE:

          sudo apt update
          sudo apt install docker-ce
          
        7. Add your limited Linux user account to the docker group:

          sudo usermod -aG docker $USER
          

          Note

          After entering the usermod command, you will need to close your SSH session and open a new one for this change to take effect.

        8. Check that the installation was successful by running the built-in “Hello World” program:

          docker run hello-world
          

      Create a Hugo Site

      Initialize the Hugo Site

      In this section you will use the Hugo CLI (command line interface) to create your Hugo site and initialize a Hugo theme. Hugo’s CLI provides several useful commands for common tasks needed to build, configure, and interact with your Hugo site.

      1. Create a new Hugo site on your local computer. This command will create a folder named example-site and scaffold Hugo’s directory structure inside it:

        hugo new site example-site
        
      2. Move into your Hugo site’s root directory:

        cd example-site
        
      3. You will use Git to add a theme to your Hugo site’s directory. Initialize your Hugo site’s directory as a Git repository:

        git init
        
      4. Install the Ananke theme as a submodule of your Hugo site’s Git repository. Git submodules allow one Git repository to be stored as a subdirectory of another Git repository, while still being able to maintain each repository’s version control information separately. The Ananke theme’s repository will be located in the ~/example-site/themes/ananke directory of your Hugo site.

        git submodule add https://github.com/budparr/gohugo-theme-ananke.git themes/ananke
        

        Note

        Hugo has many available themes that can be installed as a submodule of your Hugo site’s directory.
      5. Add the theme to your Hugo site’s configuration file. The configuration file (config.toml) is located at the root of your Hugo site’s directory.

        echo 'theme = "ananke"' >> config.toml
        

      Add Content to the Hugo Site

      You can now begin to add content to your Hugo site. In this section you will add a new post to your Hugo site and generate the corresponding static file by building the Hugo site on your local computer.

      1. Create a new content file for your site. This command will generate a Markdown file with an auto-populated date and title:

        hugo new posts/my-first-post.md
        
      2. You should see a similar output. Note that the file is located in the content/posts/ directory of your Hugo site:

          
        /home/username/example-site/content/posts/my-first-post.md created
            
        
      3. Open the Markdown file in the text editor of your choice to begin modifying its content; you can copy and paste the example snippet into your file, which contains an updated front matter section at the top and some example Markdown body text.

        Set your desired value for title. Then, set the draft state to false and add your content below the --- in Markdown syntax, if desired:

        /home/username/example-site/content/posts/my-first-post.md
        
        ---
        title: "My First Post"
        date: 2019-05-07T11:25:11-04:00
        draft: false
        ---
        
        # Kubernetes Objects
        
        In Kubernetes, there are a number of objects that are abstractions of your Kubernetes system’s desired state. These objects represent your application, its networking, and disk resources – all of which together form your application. Kubernetes objects can describe:
        
        - Which containerized applications are running on the cluster
        - Application resources
        - Policies that should be applied to the application


        About front matter

        Front matter is a collection of metadata about your content, and it is embedded at the top of your file within opening and closing --- delimiters.

        Front matter is a powerful Hugo feature that provides a mechanism for passing data that is attached to a specific piece of content to Hugo’s rendering engine. Hugo accepts front matter in TOML, YAML, and JSON formats. In the example snippet, there is YAML front matter for the title, date, and draft state of the Markdown file. These variables will be referenced and displayed by your Hugo theme.

      4. Once you have added your content, you can preview your changes by building and serving the site using Hugo’s built-in webserver:

        hugo server
        
      5. You will see a similar output:

          
                           | EN
        +------------------+----+
          Pages            | 11
          Paginator pages  |  0
          Non-page files   |  0
          Static files     |  3
          Processed images |  0
          Aliases          |  1
          Sitemaps         |  1
          Cleaned          |  0
        
        Total in 7 ms
        Watching for changes in /home/username/example-site/{content,data,layouts,static,themes}
        Watching for config changes in /home/username/example-site/config.toml
        Serving pages from memory
        Running in Fast Render Mode. For full rebuilds on change: hugo server --disableFastRender
        Web Server is available at http://localhost:1313/ (bind address 127.0.0.1)
        Press Ctrl+C to stop
        
        
      6. The output will provide a URL to preview your site. Copy and paste the URL into a browser to access the site. In the above example Hugo’s web server URL is http://localhost:1313/.

      7. When you are happy with your site’s content you can build the site:

        hugo -v
        

        Hugo will generate your site’s static HTML files and store them in a public directory that it will create inside your project. The static files that are generated by Hugo are the files that will be served to the internet through your Kubernetes cluster.

      8. View the contents of your site’s public directory:

        ls public
        

        Your output should resemble the following example. When you built the site, the Markdown file you created and edited earlier in this section was used to generate its corresponding static HTML file at public/posts/my-first-post/index.html.

          
          404.html    categories  dist        images      index.html  index.xml   posts       sitemap.xml tags
            
        

      Version Control the Site with Git

      The example Hugo site was initialized as a local Git repository in the previous section. You can now version control all content, theme, and configuration files with Git. Once you have used Git to track your local Hugo site files, you can easily push them to a remote Git repository, like GitHub or GitLab. Storing your Hugo site files on a remote Git repository opens up many possibilities for collaboration and automating Docker image builds. This guide will not cover automated builds, but you can learn more about it on Docker’s official documentation.
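      For example, assuming you have already created an empty repository on GitHub (the URL below is a placeholder; substitute your own username and repository name), you could connect it and push your site with commands like these:

        git remote add origin https://github.com/your-github-username/example-site.git
        git push -u origin master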

      1. Add a .gitignore file to your Git repository. Any files or directories added to the .gitignore file will not be tracked by Git. The Docker image you will create in the next section will handle building your static site files. For this reason it is not necessary to track the public directory and its content.

        echo 'public/' >> .gitignore
        
      2. Display the state of your current working directory (root of your Hugo site):

        git status
        
      3. Stage all your files to be committed:

        git add -A
        
      4. Commit all your changes and add a meaningful commit message:

        git commit -m 'Add content, theme, and config files.'
        

        Note

        Any time you complete work related to one logical change to the Hugo site, you should make sure you commit the changes to your Git repository. Keeping your commits attached to small changes makes it easier to understand the changes and to roll back to previous commits, if necessary. See the Getting Started with Git guide for more information.

      Create a Docker Image

      Create the Dockerfile

      A Dockerfile contains the steps needed to build a Docker image. The Docker image provides the minimum set up and configuration necessary to deploy a container that satisfies its specific use case. The Hugo site’s minimum Docker container configuration requirements are an operating system, Hugo, the Hugo site’s content files, and the NGINX web server.

      1. In your Hugo site’s root directory, create and open a file named Dockerfile using the text editor of your choice. Add the following content to the file. You can read the Dockerfile comments to learn what each command will execute in the Docker container.

        Dockerfile
        
        # Install the container's OS.
        FROM ubuntu:latest as HUGOINSTALL
        
        # Install Hugo.
        RUN apt-get update
        RUN apt-get install -y hugo
        
        # Copy the contents of the current working directory to the hugo-site
        # directory. The directory will be created if it doesn't exist.
        COPY . /hugo-site
        
        # Use Hugo to build the static site files.
        RUN hugo -v --source=/hugo-site --destination=/hugo-site/public
        
        # Install NGINX and deactivate NGINX's default index.html file.
        # Move the static site files to NGINX's html directory.
        # This directory is where the static site files will be served from by NGINX.
        FROM nginx:stable-alpine
        RUN mv /usr/share/nginx/html/index.html /usr/share/nginx/html/old-index.html
        COPY --from=HUGOINSTALL /hugo-site/public/ /usr/share/nginx/html/
        
        # The container will listen on port 80 using the TCP protocol.
        EXPOSE 80
            
      2. Add a .dockerignore file to your Hugo repository. It is important to ensure that your images are as small as possible to reduce the time it takes to build, pull, push, and deploy the container. The .dockerignore file excludes files and directories that are not necessary for the function of your container or that may contain sensitive information that you do not want included in the image. Since the Docker image will build the static Hugo site files, you can ignore the public/ directory. You can also exclude any Git-related files and directories because they are not needed in the running container.

        echo -e "public/\n.git/\n.gitmodules\n.gitignore" >> .dockerignore
        
      3. Follow steps 2–4 in the Version Control the Site with Git section to add any new files created in this section to your local Git repository.

      Build the Docker Image

      You are now ready to build the Docker image. When Docker builds an image it incorporates the build context. A build context includes any files and directories located in the current working directory. By default, Docker assumes the current working directory is also the location of the Dockerfile.
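      If your Dockerfile lives somewhere else, or you want to use a different directory as the build context, you can pass both explicitly. The paths below are placeholders for illustration:

        docker build -f /path/to/Dockerfile -t mydockerhubusername/hugo-site:v1 /path/to/build-context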

      Note

      If you have not yet created a Docker Hub account, you will need to do so before proceeding with this section.
      1. Build the Docker image and add a tag mydockerhubusername/hugo-site:v1 to the image. Ensure you are in the root directory of your Hugo site. The tag will make it easy to reference a specific image version when creating your Kubernetes deployment manifest. Replace mydockerhubusername with your Docker Hub username and hugo-site with a Docker repository name you prefer.

        docker build -t mydockerhubusername/hugo-site:v1 .
        

        You should see a similar output. Portions of the output have been omitted for brevity:

          
        Sending build context to Docker daemon  3.307MB
        Step 1/10 : FROM ubuntu:latest as HUGOINSTALL
         ---> 94e814e2efa8
        Step 2/10 : ENV HUGO_VERSION=0.55.4
         ---> Using cache
         ---> e651df397e32
         ...
        
        Successfully built 50c590837916
        Successfully tagged mydockerhubusername/hugo-site:v1
            
        
      2. View all locally available Docker images:

        docker images
        

        You should see the image mydockerhubusername/hugo-site with the v1 tag listed in the output:

          
        REPOSITORY                      TAG                 IMAGE ID            CREATED             SIZE
        mydockerhubusername/hugo-site   v1                  50c590837916        1 day ago           16.5MB
            
        


      Push your Hugo Site Repository to GitHub

      You can push your local Hugo site’s Git repository to GitHub in order to set up Docker automated builds. Docker automated builds will build an image using an external repository as the build context and automatically push the image to your Docker Hub repository. This step is not necessary to complete this guide.

      Host your Image on Docker Hub

      Hosting your Hugo site’s image on Docker Hub will enable you to use the image in a Kubernetes cluster deployment. You will also be able to share the image with collaborators and the rest of the Docker community.

      1. Log into your Docker Hub account via the command line on your local computer. Enter your username and password when prompted.

        docker login
        
      2. Push the local Docker image to Docker Hub. Replace mydockerhubusername/hugo-site:v1 with your image’s tag name.

        docker push mydockerhubusername/hugo-site:v1
        
      3. Navigate to Docker Hub to view your image on your account.

        The URL for your image repository should be similar to the following: https://cloud.docker.com/repository/docker/mydockerhubusername/hugo-site. Replace the username and repository name with your own.
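        As an optional sanity check, you can pull the image back down and run it locally before deploying it to Kubernetes. The local port 8080 below is only an example mapping:

          docker pull mydockerhubusername/hugo-site:v1
          docker run --rm -d -p 8080:80 mydockerhubusername/hugo-site:v1
          curl http://localhost:8080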

      Configure your Kubernetes Cluster

      This section will use kubectl to configure and manage your Kubernetes cluster. If your cluster was deployed using kubeadm, you will need to log into your master node to execute the kubectl commands in this section. If, instead, you used the k8s-alpha CLI you can run all commands from your local computer.

      In this section, you will create namespace, deployment, and service manifest files for your Hugo site deployment and apply them to your cluster with kubectl. Each manifest file creates a different resource on the Kubernetes API that is used to create and maintain the Hugo site's pods on the worker nodes.

      Create the Namespace

      Namespaces provide a powerful way to logically partition your Kubernetes cluster and isolate components and resources to avoid collisions across the cluster. A common use-case is to encapsulate dev/testing/production environments with namespaces so that they can each utilize the same resource names across each stage of development.

      Namespaces add a layer of complexity to a cluster that may not always be necessary. It is important to keep this in mind when formulating the architecture for a project’s application. This example will create a namespace for demonstration purposes, but it is not a requirement. One situation where a namespace would be beneficial, in the context of this guide, would be if you were a developer and wanted to manage Hugo sites for several clients with a single Kubernetes cluster.
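      Once the namespace exists, most kubectl commands accept a -n (--namespace) flag to scope the operation to it, and you can optionally make it the default for your current context. For example:

        kubectl get pods -n hugo-site
        kubectl config set-context --current --namespace=hugo-site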

      1. Create a directory to store your Hugo site’s manifest files.

        mkdir -p clientx/k8s-hugo/
        
      2. Create the manifest file for your Hugo site’s namespace with the following content:

        clientx/k8s-hugo/ns-hugo-site.yaml
        
        apiVersion: v1
        kind: Namespace
        metadata:
          name: hugo-site
              
        • The manifest file declares the version of the API in use, the kind of resource that is being defined, and metadata about the resource. All manifest files should provide this information.
        • The key-value pair name: hugo-site defines the namespace object’s unique name.
      3. Create the namespace from the ns-hugo-site.yaml manifest.

        kubectl create -f clientx/k8s-hugo/ns-hugo-site.yaml
        
      4. View all available namespaces in your cluster:

        kubectl get namespaces
        

        You should see the hugo-site namespace listed in the output:

          
        NAME          STATUS   AGE
        default       Active   1d
        hugo-site     Active   1d
        kube-public   Active   1d
        kube-system   Active   1d
            
        

      Create the Service

      The service will group together all pods for the Hugo site, expose the same port on all pods to the internet, and load balance site traffic between all pods. It is best to create a service prior to any controllers (like a deployment) so that the Kubernetes scheduler can distribute the pods for the service as they are created by the controller.

      The Hugo site’s service manifest file will use the NodePort method to get external traffic to the Hugo site service. NodePort opens a specific port on all the Nodes and any traffic that is sent to this port is forwarded to the service. Kubernetes will choose the port to open on the nodes if you do not provide one in your service manifest file. It is recommended to let Kubernetes handle the assignment. Kubernetes will choose a port in the default range, 30000-32767.
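      Once the service exists, you can see which NodePort Kubernetes assigned either in the PORT(S) column of kubectl get services or with a more detailed view:

        kubectl describe service hugo-site -n hugo-site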

      Note

      The k8s-alpha CLI creates clusters that are pre-configured with useful Linode service integrations, like the Linode Cloud Controller Manager (CCM) which provides access to Linode’s load balancer service, NodeBalancers. In order to use Linode’s NodeBalancers you can use the LoadBalancer service type instead of NodePort in your Hugo site’s service manifest file. For more details, see the Kubernetes Cloud Controller Manager for Linode GitHub repository.
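      If you later decide to switch the service to a NodeBalancer on such a cluster, one approach (shown here only as a sketch; you could also edit and re-apply the manifest) is to patch the service type in place:

        kubectl patch service hugo-site -n hugo-site -p '{"spec": {"type": "LoadBalancer"}}'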
      1. Create the manifest file for your service with the following content.

        clientx/k8s-hugo/service-hugo.yaml
        
        apiVersion: v1
        kind: Service
        metadata:
          name: hugo-site
          namespace: hugo-site
        spec:
          selector:
            app: hugo-site
          ports:
          - protocol: TCP
            port: 80
            targetPort: 80
          type: NodePort
            
        • The spec key defines the Hugo site service object’s desired behavior. It will create a service that exposes TCP port 80 on any pod with the app: hugo-site label.
        • The exposed container port is defined by the targetPort:80 key-value pair.
      2. Create the service for your hugo site:

        kubectl create -f clientx/k8s-hugo/service-hugo.yaml
        
      3. View the service and its corresponding information:

        kubectl get services -n hugo-site
        

        Your output will resemble the following:

          
        NAME        TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
        hugo-site   NodePort   10.108.110.6   <none>        80:30304/TCP   1d
            
        

      Create the Deployment

      A deployment is a controller that helps manage the state of your pods. The Hugo site deployment will define how many pods should be kept up and running with the Hugo site service and which container image should be used.
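      Because the deployment controller continuously reconciles the number of running pods with the number you request, you can scale the site later without editing the manifest. For example, to go from 3 replicas to 5 once the deployment below exists:

        kubectl scale deployment hugo-site -n hugo-site --replicas=5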

      1. Create the manifest file for your Hugo site’s deployment. Copy the following contents to your file.

        clientx/k8s-hugo/deployment.yaml
        
        apiVersion: apps/v1
        kind: Deployment
        metadata:
          name: hugo-site
          namespace: hugo-site
        spec:
          replicas: 3
          selector:
            matchLabels:
              app: hugo-site
          template:
            metadata:
              labels:
                app: hugo-site
            spec:
              containers:
              - name: hugo-site
                image: mydockerhubusername/hugo-site:v1
                imagePullPolicy: Always
                ports:
                - containerPort: 80
              
        • The deployment’s object spec states that the deployment should have 3 replica pods. This means at any given time the cluster will have 3 pods that run the Hugo site service.
        • The template field provides all the information needed to create actual pods.
        • The label app: hugo-site helps the deployment know which service pods to target.
        • The container field states that any containers connected to this deployment should use the Hugo site image mydockerhubusername/hugo-site:v1 that was created in the Build the Docker Image section of this guide.
        • imagePullPolicy: Always means that the container image will be pulled every time the pod is started.
        • containerPort: 80 states the port number to expose on the pod's IP address. The system does not rely on this field to expose the container port; instead, it provides information about the network connections a container uses.
      2. Create the deployment for your hugo site:

        kubectl create -f clientx/k8s-hugo/deployment.yaml
        
      3. View the Hugo site’s deployment:

        kubectl get deployment hugo-site -n hugo-site
        

        Your output will resemble the following:

          
        NAME        READY   UP-TO-DATE   AVAILABLE   AGE
        hugo-site   3/3     3            3           1d
            
        

      View the Hugo Site

      After creating all required manifest files to configure your Hugo site’s Kubernetes cluster, you should be able to view the site using a worker node’s IP address and its exposed port.

      1. Get your worker node’s external IP address. Copy down the EXTERNAL-IP value for any worker node in the cluster:

        kubectl get nodes -o wide
        
      2. Access the hugo-site service to view its exposed port.

        kubectl get svc -n hugo-site
        

        The output will resemble the following. Copy down the listed port number in the 30000-32767 range.

          
        NAME        TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
        hugo-site   NodePort   10.108.110.6   <none>        80:30304/TCP   1d
            
        
      3. Open a browser window and enter a worker node's IP address and the exposed port. An example URL for your Hugo site would be http://192.0.2.1:30304. Your Hugo site should appear; you can also check it from the command line, as shown below.

        If desired, you can purchase a domain name and use Linode’s DNS Manager to assign a domain name to the cluster’s worker node IP address.
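        Using the example address above, a quick command-line check would be:

          curl http://192.0.2.1:30304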

      Tear Down Your Cluster

      To avoid being further billed for your Kubernetes cluster, tear down your cluster's Linodes. If you have Linodes that existed for only part of a monthly billing cycle, you'll be billed at the hourly rate for that service. See How Hourly Billing Works to learn more.

      Next Steps

      Now that you are familiar with basic Kubernetes concepts, like configuring pods, grouping resources, and deploying services, you are ready to deploy a Kubernetes cluster on Linode for production use.

      More Information

      You may wish to consult the following resources for additional information on this topic. While these are provided in the hope that they will be useful, please note that we cannot vouch for the accuracy or timeliness of externally hosted materials.





      How To Install and Use Docker on Debian 10


      Introduction

      Docker is an application that simplifies the process of managing application processes in containers. Containers let you run your applications in resource-isolated processes. They’re similar to virtual machines, but containers are more portable, more resource-friendly, and more dependent on the host operating system.

      For a detailed introduction to the different components of a Docker container, check out The Docker Ecosystem: An Introduction to Common Components.

      In this tutorial, you’ll install and use Docker Community Edition (CE) on Debian 10. You’ll install Docker itself, work with containers and images, and push an image to a Docker Repository.

      Prerequisites

      To follow this tutorial, you will need the following:

      Step 1 — Installing Docker

      The Docker installation package available in the official Debian repository may not be the latest version. To ensure we get the latest version, we’ll install Docker from the official Docker repository. To do that, we’ll add a new package source, add the GPG key from Docker to ensure the downloads are valid, and then install the package.

      First, update your existing list of packages:

      • sudo apt update

      Next, install a few prerequisite packages which let apt use packages over HTTPS:

      • sudo apt install apt-transport-https ca-certificates curl gnupg2 software-properties-common

      Then add the GPG key for the official Docker repository to your system:

      • curl -fsSL https://download.docker.com/linux/debian/gpg | sudo apt-key add -

      Add the Docker repository to APT sources:

      • sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/debian $(lsb_release -cs) stable"

      Next, update the package database with the Docker packages from the newly added repo:

      • sudo apt update

      Make sure you are about to install from the Docker repo instead of the default Debian repo:

      • apt-cache policy docker-ce

      You'll see output like this, although the version number for Docker may be different:

      Output of apt-cache policy docker-ce

      docker-ce:
        Installed: (none)
        Candidate: 5:18.09.7~3-0~debian-buster
        Version table:
           5:18.09.7~3-0~debian-buster 500
              500 https://download.docker.com/linux/debian buster/stable amd64 Packages
      

      Notice that docker-ce is not installed, but the candidate for installation is from the Docker repository for Debian 10 (buster).

      Finally, install Docker:

      • sudo apt install docker-ce

      Docker is now installed, the daemon started, and the process enabled to start on boot. Check that it's running:

      • sudo systemctl status docker

      The output will be similar to the following, showing that the service is active and running:

      Output

      ● docker.service - Docker Application Container Engine
         Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
         Active: active (running) since Mon 2019-07-08 15:11:19 UTC; 58s ago
           Docs: https://docs.docker.com
       Main PID: 5709 (dockerd)
          Tasks: 8
         Memory: 31.6M
         CGroup: /system.slice/docker.service
                 └─5709 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock

      Installing Docker gives you not just the Docker service (daemon) but also the docker command line utility, or the Docker client. We'll explore how to use the docker command later in this tutorial.

      Step 2 — Executing the Docker Command Without Sudo (Optional)

      By default, the docker command can only be run by the root user or by a user in the docker group, which is automatically created during Docker's installation process. If you attempt to run the docker command without prefixing it with sudo or without being in the docker group, you'll get an output like this:

      Output

      docker: Cannot connect to the Docker daemon. Is the docker daemon running on this host?. See 'docker run --help'.

      If you want to avoid typing sudo whenever you run the docker command, add your username to the docker group:

      • sudo usermod -aG docker ${USER}

      To apply the new group membership, log out of the server and back in, or type the following:

      • su - ${USER}

      You will be prompted to enter your user's password to continue.

      Confirm that your user is now added to the docker group by typing:

      • id -nG

      Output

      sammy sudo docker

      If you need to add a user to the docker group that you're not logged in as, declare that username explicitly using:

      • sudo usermod -aG docker username

      The rest of this article assumes you are running the docker command as a user in the docker group. If you choose not to, please prepend the commands with sudo.

      Let's explore the docker command next.

      Step 3 — Using the Docker Command

      Using docker consists of passing it a chain of options and commands followed by arguments. The syntax takes this form:

      • docker [option] [command] [arguments]
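      For example, in the following command run is the command, -it are options, and ubuntu and bash are the arguments (the image to use and the program to start inside it):

      • docker run -it ubuntu bash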

      To view all available subcommands, type:

      • docker

      As of Docker 18, the complete list of available subcommands includes:

      Output

      attach      Attach local standard input, output, and error streams to a running container
      build       Build an image from a Dockerfile
      commit      Create a new image from a container's changes
      cp          Copy files/folders between a container and the local filesystem
      create      Create a new container
      diff        Inspect changes to files or directories on a container's filesystem
      events      Get real time events from the server
      exec        Run a command in a running container
      export      Export a container's filesystem as a tar archive
      history     Show the history of an image
      images      List images
      import      Import the contents from a tarball to create a filesystem image
      info        Display system-wide information
      inspect     Return low-level information on Docker objects
      kill        Kill one or more running containers
      load        Load an image from a tar archive or STDIN
      login       Log in to a Docker registry
      logout      Log out from a Docker registry
      logs        Fetch the logs of a container
      pause       Pause all processes within one or more containers
      port        List port mappings or a specific mapping for the container
      ps          List containers
      pull        Pull an image or a repository from a registry
      push        Push an image or a repository to a registry
      rename      Rename a container
      restart     Restart one or more containers
      rm          Remove one or more containers
      rmi         Remove one or more images
      run         Run a command in a new container
      save        Save one or more images to a tar archive (streamed to STDOUT by default)
      search      Search the Docker Hub for images
      start       Start one or more stopped containers
      stats       Display a live stream of container(s) resource usage statistics
      stop        Stop one or more running containers
      tag         Create a tag TARGET_IMAGE that refers to SOURCE_IMAGE
      top         Display the running processes of a container
      unpause     Unpause all processes within one or more containers
      update      Update configuration of one or more containers
      version     Show the Docker version information
      wait        Block until one or more containers stop, then print their exit codes

      To view the options available to a specific command, type:

      • docker docker-subcommand --help

      To view system-wide information about Docker, use:

      • docker info

      Let's explore some of these commands. We'll start by working with images.

      Step 4 — Working with Docker Images

      Docker containers are built from Docker images. By default, Docker pulls these images from Docker Hub, a Docker registry managed by Docker, the company behind the Docker project. Anyone can host their Docker images on Docker Hub, so most applications and Linux distributions you'll need will have images hosted there.

      To check whether you can access and download images from Docker Hub, type:

      • docker run hello-world

      The output will indicate that Docker is working correctly:

      Output

      Unable to find image 'hello-world:latest' locally
      latest: Pulling from library/hello-world
      1b930d010525: Pull complete
      Digest: sha256:41a65640635299bab090f783209c1e3a3f11934cf7756b09cb2f1e02147c6ed8
      Status: Downloaded newer image for hello-world:latest

      Hello from Docker!
      This message shows that your installation appears to be working correctly.
      ...

      Docker was initially unable to find the hello-world image locally, so it downloaded the image from Docker Hub, which is the default repository. Once the image downloaded, Docker created a container from the image and the application within the container executed, displaying the message.

      You can search for images available on Docker Hub by using the docker command with the search subcommand. For example, to search for the Ubuntu image, type:

      • docker search ubuntu

      The script will crawl Docker Hub and return a listing of all images whose names match the search string. In this case, the output will be similar to this:

      Output

      NAME                                                   DESCRIPTION                                     STARS   OFFICIAL   AUTOMATED
      ubuntu                                                 Ubuntu is a Debian-based Linux operating sys…   9704    [OK]
      dorowu/ubuntu-desktop-lxde-vnc                         Docker image to provide HTML5 VNC interface …   319                [OK]
      rastasheep/ubuntu-sshd                                 Dockerized SSH service, built on top of offi…   224                [OK]
      consol/ubuntu-xfce-vnc                                 Ubuntu container with "headless" VNC session…   183                [OK]
      ubuntu-upstart                                         Upstart is an event-based replacement for th…   99      [OK]
      ansible/ubuntu14.04-ansible                            Ubuntu 14.04 LTS with ansible                   97                 [OK]
      neurodebian                                            NeuroDebian provides neuroscience research s…   57      [OK]
      1and1internet/ubuntu-16-nginx-php-phpmyadmin-mysql-5   ubuntu-16-nginx-php-phpmyadmin-mysql-5          50                 [OK]
      ...

      In the OFFICIAL column, OK indicates an image built and supported by the company behind the project. Once you've identified the image that you would like to use, you can download it to your computer using the pull subcommand.

      Execute the following command to download the official ubuntu image to your computer:

      • docker pull ubuntu

      You'll see the following output:

      Output

      Using default tag: latest
      latest: Pulling from library/ubuntu
      5b7339215d1d: Pull complete
      14ca88e9f672: Pull complete
      a31c3b1caad4: Pull complete
      b054a26005b7: Pull complete
      Digest: sha256:9b1702dcfe32c873a770a32cfd306dd7fc1c4fd134adfb783db68defc8894b3c
      Status: Downloaded newer image for ubuntu:latest

      After an image has been downloaded, you can then run a container using the downloaded image with the run subcommand. As you saw with the hello-world example, if an image has not been downloaded when docker is executed with the run subcommand, the Docker client will first download the image, then run a container using it.

      To see the images that have been downloaded to your computer, type:

      • docker images

      The output should look similar to the following:

      Output

      REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
      ubuntu              latest              4c108a37151f        2 weeks ago         64.2MB
      hello-world         latest              fce289e99eb9        6 months ago        1.84kB

      As you'll see later in this tutorial, images that you use to run containers can be modified and used to generate new images, which may then be uploaded (pushed is the technical term) to Docker Hub or other Docker registries.

      Let's look at how to run containers in more detail.

      Step 5 — Running a Docker Container

      The hello-world container you ran in the previous step is an example of a container that runs and exits after emitting a test message. Containers can be much more useful than that, and they can be interactive. After all, they are similar to virtual machines, only more resource-friendly.

      As an example, let's run a container using the latest image of Ubuntu. The combination of the -i and -t switches gives you interactive shell access into the container:

      • docker run -it ubuntu

      Your command prompt should change to reflect the fact that you're now working inside the container and should take this form:

      Output

      root@d9b100f2f636:/#

      Note the container id in the command prompt. In this example, it is d9b100f2f636. You'll need that container ID later to identify the container when you want to remove it.

      Now you can run any command inside the container. For example, let's update the package database inside the container. You don't need to prefix any command with sudo, because you're operating inside the container as the root user:

      • apt update

      Then install any application in it. Let's install Node.js:

      • apt install nodejs

      This installs Node.js in the container from the official Ubuntu repository. When the installation finishes, verify that Node.js is installed:

      • node -v

      You'll see the version number displayed in your terminal:

      Output

      v8.10.0

      Any changes you make inside the container only apply to that container.

      To exit the container, type exit at the prompt.

      Let's look at managing the containers on our system next.

      Step 6 — Managing Docker Containers

      After using Docker for a while, you'll have many active (running) and inactive containers on your computer. To view the active ones, use:

      • docker ps

      You will see output similar to the following:

      Output

      CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES

      In this tutorial, you started two containers; one from the hello-world image and another from the ubuntu image. Both containers are no longer running, but they still exist on your system.

      To view all containers — active and inactive, run docker ps with the -a switch:

      • docker ps -a

      You'll see output similar to this:

      CONTAINER ID        IMAGE               COMMAND             CREATED              STATUS                      PORTS               NAMES
      d42d0bbfbd35        ubuntu              "/bin/bash"         About a minute ago   Exited (0) 20 seconds ago                       friendly_volhard
      0740844d024c        hello-world         "/hello"            3 minutes ago        Exited (0) 3 minutes ago                        elegant_neumann
      

      To view the latest container you created, pass docker ps the -l switch:

      • docker ps -l

      CONTAINER ID        IMAGE               COMMAND             CREATED              STATUS                      PORTS               NAMES
      d42d0bbfbd35        ubuntu              "/bin/bash"         About a minute ago   Exited (0) 34 seconds ago                       friendly_volhard

      To start a stopped container, use docker start, followed by the container ID or the container's name. Let's start the Ubuntu-based container with the ID of d42d0bbfbd35:

      • docker start d42d0bbfbd35

      The container will start, and you can use docker ps to see its status:

      CONTAINER ID        IMAGE               COMMAND             CREATED              STATUS              PORTS               NAMES
      d42d0bbfbd35        ubuntu              "/bin/bash"         About a minute ago   Up 8 seconds                            friendly_volhard
      
      

      To stop a running container, use docker stop, followed by the container ID or name. This time, we'll use the name that Docker assigned the container, which is friendly_volhard:

      • docker stop friendly_volhard

      Once you've decided you no longer need a container, remove it with the docker rm command, again using either the container ID or the name. Use the docker ps -a command to find the container ID or name for the container associated with the hello-world image and remove it.

      • docker rm elegant_neumann

      You can start a new container and give it a name using the --name switch. You can also use the --rm switch to create a container that removes itself when it's stopped. See the docker run help command for more information on these options and others.
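      For example, the following starts an interactive Ubuntu container named temp-shell (the name is arbitrary) that removes itself as soon as you exit it:

      • docker run --rm --name temp-shell -it ubuntu bash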

      Containers can be turned into images which you can use to build new containers. Let's look at how that works.

      Step 7 — Committing Changes in a Container to a Docker Image

      When you start up a Docker image, you can create, modify, and delete files just like you can with a virtual machine. The changes that you make will only apply to that container. You can start and stop it, but once you destroy it with the docker rm command, the changes will be lost for good.

      This section shows you how to save the state of a container as a new Docker image.

      After installing Node.js inside the Ubuntu container, you now have a container running off an image, but the container is different from the image you used to create it. You might want to reuse this Node.js container as the basis for new images later.

      To do so, commit the changes to a new Docker image instance using the following command.

      • docker commit -m "What you did to the image" -a "Author Name" container_id repository/new_image_name

      The -m switch is for the commit message that helps you and others know what changes you made, while -a is used to specify the author. The container_id is the one you noted earlier in the tutorial when you started the interactive Docker session. Unless you created additional repositories on Docker Hub, the repository is usually your Docker Hub username.

      For example, for the user sammy, with the container ID of d42d0bbfbd35, the command would be:

      • docker commit -m "added Node.js" -a "sammy" d42d0bbfbd35 sammy/ubuntu-nodejs

      When you commit an image, the new image is saved locally on your computer. Later in this tutorial, you'll learn how to push an image to a Docker registry like Docker Hub so others can access it.

      Listing the Docker images again will show the new image, as well as the old one that it was derived from:

      • docker images

      You'll see output like this:

      Output

      REPOSITORY            TAG                 IMAGE ID            CREATED             SIZE
      sammy/ubuntu-nodejs   latest              d441c62350b4        10 seconds ago      152MB
      ubuntu                latest              4c108a37151f        2 weeks ago         64.2MB
      hello-world           latest              fce289e99eb9        6 months ago        1.84kB

      In this example, ubuntu-nodejs is the new image, which was derived from the existing ubuntu image from Docker Hub. The size difference reflects the changes that were made; in this example, the change was that Node.js was installed. So next time you need to run a container using Ubuntu with Node.js pre-installed, you can just use the new image.

      You can also build images from a Dockerfile, which lets you automate the installation of software in a new image. However, that's outside the scope of this tutorial.

      Now let's share the new image with others so they can create containers from it.

      Step 8 — Pushing Docker Images to a Docker Repository

      The next logical step after creating a new image from an existing image is to share it with a select few of your friends, the whole world on Docker Hub, or another Docker registry that you have access to. To push an image to Docker Hub or any other Docker registry, you must have an account there.

      This section shows you how to push a Docker image to Docker Hub. To learn how to create your own private Docker registry, check out How To Set Up a Private Docker Registry on Ubuntu 14.04.

      To push your image, first log into Docker Hub.

      • docker login -u docker-registry-username

      You'll be prompted to authenticate using your Docker Hub password. If you specified the correct password, authentication should succeed.

      Note: If your Docker registry username is different from the local username you used to create the image, you will have to tag your image with your registry username. For the example given in the last step, you would type:

      • docker tag sammy/ubuntu-nodejs docker-registry-username/ubuntu-nodejs

      Then you may push your own image using:

      • docker push docker-registry-username/docker-image-name

      To push the ubuntu-nodejs image to the sammy repository, the command would be:

      • docker push sammy/ubuntu-nodejs

      The process may take some time to complete as it uploads the images, but when completed, the output will look like this:

      Output

      The push refers to a repository [docker.io/sammy/ubuntu-nodejs]
      e3fbbfb44187: Pushed
      5f70bf18a086: Pushed
      a3b5c80a4eba: Pushed
      7f18b442972b: Pushed
      3ce512daaf78: Pushed
      7aae4540b42d: Pushed
      ...

      After pushing an image to a registry, it should be listed on your account's dashboard, like the one shown in the image below.

      New Docker image listing on Docker Hub

      If a push attempt results in an error of this sort, then you likely did not log in:

      Output

      The push refers to a repository [docker.io/sammy/ubuntu-nodejs]
      e3fbbfb44187: Preparing
      5f70bf18a086: Preparing
      a3b5c80a4eba: Preparing
      7f18b442972b: Preparing
      3ce512daaf78: Preparing
      7aae4540b42d: Waiting
      unauthorized: authentication required

      Log in with docker login and repeat the push attempt. Then verify that it exists on your Docker Hub repository page.

      You can now use docker pull sammy/ubuntu-nodejs to pull the image to a new machine and use it to run a new container.

      Conclusion

      In this tutorial you installed Docker, worked with images and containers, and pushed a modified image to Docker Hub. Now that you know the basics, explore the other Docker tutorials in the DigitalOcean Community.




      How to Use a Remote Docker Server to Speed Up Your Workflow


      Introduction

      Building CPU-intensive images and binaries is a very slow and time-consuming process that can turn your laptop into a space heater at times. Pushing Docker images on a slow connection takes a long time, too. Luckily, there’s an easy fix for these issues. Docker lets you offload all those tasks to a remote server so your local machine doesn’t have to do that hard work.

      This feature was introduced in Docker 18.09. It brings support for connecting to a Docker host remotely via SSH. It requires very little configuration on the client, and only needs a regular Docker server without any special config running on a remote machine. Prior to Docker 18.09, you had to use Docker Machine to create a remote Docker server and then configure the local Docker environment to use it. This new method removes that additional complexity.

      In this tutorial, you’ll create a Droplet to host the remote Docker server and configure the docker command on your local machine to use it.

      Prerequisites

      To follow this tutorial, you’ll need:

      • A DigitalOcean account. You can create an account if you don’t have one already.
      • Docker installed on your local machine or development server. If you are working with Ubuntu 18.04, follow Steps 1 and 2 of How To Install and Use Docker on Ubuntu 18.04; otherwise, follow the official documentation for information about installing on other operating systems. Be sure to add your non-root user to the docker group, as described in Step 2 of the linked tutorial.

      Step 1 – Creating the Docker Host

      To get started, spin up a Droplet with a decent amount of processing power. The CPU Optimized plans are perfect for this purpose, but Standard ones work just as well. If you will be compiling resource-intensive programs, the CPU Optimized plans provide dedicated CPU cores which allow for faster builds. Otherwise, the Standard plans offer a more balanced CPU to RAM ratio.

      The Docker One-click image takes care of all of the setup for us. Follow this link to create a 16GB/8vCPU CPU-Optimized Droplet with Docker from the control panel.

      Alternatively, you can use doctl to create the Droplet from your local command line. To install it, follow the instructions in the doctl README file on GitHub.

      The following command creates a new 16GB/8vCPU CPU-Optimized Droplet in the FRA1 region based on the Docker One-click image:

      • doctl compute droplet create docker-host \
          --image docker-18-04 \
          --region fra1 \
          --size c-8 \
          --wait \
          --ssh-keys $(doctl compute ssh-key list --format ID --no-header | sed 's/$/,/' | tr -d '\n' | sed 's/,$//')

      The doctl command uses the ssh-keys value to specify which SSH keys it should apply to your new Droplet. We use a subshell to call doctl compute ssh-key list to retrieve the SSH keys associated with your DigitalOcean account, and then parse the results using the sed and tr commands to put the data in the correct format. This command includes all of your account's SSH keys, but you can replace the highlighted subcommand with the fingerprint of any key you have in your account.

      Once the Droplet is created you’ll see its IP address among other details:

      Output

      ID          Name          Public IPv4      Private IPv4   Public IPv6   Memory   VCPUs   Disk   Region   Image                                Status   Tags   Features   Volumes
      148681562   docker-host   your_server_ip                                16384    8       100    fra1     Ubuntu Docker 5:18.09.6~3 on 18.04   active

      You can learn more about using the doctl command in the tutorial How To Use doctl, the Official DigitalOcean Command-Line Client.

      When the Droplet is created, you’ll have a ready to use Docker server. For security purposes, create a Linux user to use instead of root.

      First, connect to the Droplet with SSH as the root user:

      • ssh root@your_server_ip

      Once connected, add a new user. This command adds one named sammy:

      • adduser sammy

      Then add the user to the docker group to give it permission to run commands on the Docker host.

      • sudo usermod -aG docker sammy

      Finally, exit from the remote server by typing exit.

      Now that the server is ready, let's configure the local docker command to use it.

      Step 2 – Configuring Docker to Use the Remote Host

      To use the remote host as your Docker host instead of your local machine, set the DOCKER_HOST environment variable to point to the remote host. This variable will instruct the Docker CLI client to connect to the remote server.

      • export DOCKER_HOST=ssh://sammy@your_server_ip

      Now any Docker command you run will be run on the Droplet. For example, if you start a web server container and expose a port, it will be run on the Droplet and will be accessible through the port you exposed on the Droplet's IP address.
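      For example, the following starts an NGINX container on the Droplet and publishes port 8080 there; you would then reach it at http://your_server_ip:8080 rather than on localhost (the image and port are only examples):

      • docker run -d -p 8080:80 --name remote-nginx nginx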

      To verify that you're accessing the Droplet as the Docker host, run docker info.

      You will see your Droplet's hostname listed in the Name field like so:

      Output

      … Name: docker-host

      One thing to keep in mind is that when you run a docker build command, the build context (all files and folders accessible from the Dockerfile) will be sent to the host and then the build process will run. Depending on the size of the build context and the amount of files, it may take a longer time compared to building the image on a local machine. One solution would be to create a new directory dedicated to the Docker image and copy or link only the files that will be used in the image so that no unneeded files will be uploaded inadvertently.
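      A minimal sketch of that approach, assuming your image only needs a Dockerfile and an app/ directory (both names are placeholders):

      • mkdir ~/docker-build-context
      • cp Dockerfile ~/docker-build-context/
      • cp -r app ~/docker-build-context/
      • docker build -t myapp:latest ~/docker-build-context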

      Conclusion

      You've created a remote Docker host and connected to it locally. The next time your laptop's battery is running low or you need to build a heavy Docker image, use your shiny remote Docker server instead of your local machine.

      You might also be interested in learning how to optimize Docker images for production, or how to optimize them specifically for Kubernetes.


